Applied Cognitive Science and Human Factors (ACSHF) forum
Shane Mueller
Associate Professor of Psychology, Cognitive and Learning Sciences
What makes for an explanation of "black box" AI systems such as deep neural networks? We reviewed the pertinent literature on explanation and drew key ideas from the relevant disciplines. This set the stage for our own empirical inquiries, which include conceptual cognitive modeling, analysis of a corpus of cases of "naturalistic explanation" of computational systems, computational cognitive modeling, and the development of measures for performance evaluation.
Our work aims to contribute to the research program on "Explainable AI." In this talk, we focus on our initial synthetic modeling activities and the development of measures for evaluating explainability in human-machine work systems.