Evaluating machine-generated explanations: a “Scorecard” method for XAI measurement science
[Abstract] Introduction: Many Explainable AI (XAI) systems provide explanations that are merely clues or hints about the underlying computational model, such as feature lists, decision trees, or saliency images. However, a user might want answers to deeper questions: How does it work? Why did it do that instead of something else? What things can it get wrong? How might XAI system developers evaluate existing XAI systems with regard to the depth of support they provide for the user's sensemaking? How might XAI system developers shape new XAI systems so as to support the user's sensemaking? What might be a useful conceptual terminology to assist developers in approaching this challenge?
Method: Based on cognitive theory, a scale was developed that reflects depth of explanation, that is, the degree to which explanations support the user's sensemaking. The seven levels of this scale form the Explanation Scorecard.
Results and discussion: The Scorecard was used in an analysis of recent literature, showing that many systems still present only low-level explanations. The Scorecard can be used by developers to conceptualize how they might extend their machine-generated explanations to support the user in developing a mental model that instills appropriate trust and reliance. The article concludes with recommendations for improving XAI systems with regard to cognitive considerations, and for how results of XAI system evaluations should be reported.
[Publication date] 2023-05-09
[Keywords] explainable AI; sensemaking; self-explanation; explanation scale; mental model
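The abstract describes the Scorecard as a seven-level ordinal scale used to rate the depth of explanation offered by surveyed XAI systems. The sketch below illustrates one way such a rating exercise could be encoded; since the abstract does not enumerate the seven levels, the level names, the example systems, and their assigned ratings are all hypothetical placeholders, not the paper's actual rubric or findings.

```python
from collections import Counter
from enum import IntEnum


class ScorecardLevel(IntEnum):
    """Seven ordered levels of explanation depth.

    Placeholder names only: the abstract states there are seven levels
    reflecting depth of support for the user's sensemaking, but does
    not name them, so these labels are illustrative assumptions.
    """
    LEVEL_1 = 1  # shallowest, e.g. a bare feature list or saliency image
    LEVEL_2 = 2
    LEVEL_3 = 3
    LEVEL_4 = 4
    LEVEL_5 = 5
    LEVEL_6 = 6
    LEVEL_7 = 7  # deepest support for the user's sensemaking


def summarize(ratings: dict[str, ScorecardLevel]) -> Counter:
    """Tally how many surveyed systems fall at each Scorecard level."""
    return Counter(ratings.values())


# Hypothetical literature sample: system name -> assigned level.
ratings = {
    "system_a": ScorecardLevel.LEVEL_1,
    "system_b": ScorecardLevel.LEVEL_2,
    "system_c": ScorecardLevel.LEVEL_2,
    "system_d": ScorecardLevel.LEVEL_5,
}

# A distribution skewed toward low levels would mirror the abstract's
# observation that many systems still present low-level explanations.
print(summarize(ratings))
```

Using an ordinal (`IntEnum`) rather than free-form labels keeps the levels comparable, so analyses like "share of systems at or below level 2" reduce to simple comparisons on the tallied ratings.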