Explaining Explanation for 'Explainable' AI

Applied Cognitive Science and Human Factors (ACSHF) forum

Shane Mueller
Associate Professor of Psychology, Cognitive and Learning Sciences

What makes for an explanation of "black box" AI systems such as Deep Nets? We reviewed the pertinent literature on explanation, deriving key ideas from a variety of relevant disciplines. This set the stage for our own empirical inquiries, which include conceptual cognitive modeling, analysis of a corpus of cases of "naturalistic explanation" of computational systems, computational cognitive modeling, and the development of measures for performance evaluation.

The purpose of our work is to contribute to the program of research on "Explainable AI." In this report, we focus on our initial synthetic modeling activities and the development of measures for the evaluation of explainability in human-machine work systems.

Monday, September 17, 2018 at 2:00 pm to 3:00 pm

Harold Meese Center, 109
1400 Townsend Drive, Houghton, MI 49931

Event Type

Academics, Lectures/Seminars

Target Audience

General Public

Department

Cognitive and Learning Sciences

Host

Applied Cognitive Science and Human Factors
