Explaining Algorithmic Systems to End Users

This project explores the relationship between instructor and student understanding of the artificial intelligence (AI) algorithms that underlie their educational technology, and the impact of that algorithmic understanding on decision-making for learning. The research will involve studies with human participants to investigate how algorithmic understanding affects system trust and decision-making for learning, as well as the development of "explainables": brief, engaging, interactive tutoring systems that provide algorithmic understanding to classroom stakeholders. Together, these two thrusts will yield a framework that designers of algorithmically enhanced learning environments can use to determine what level of algorithmic understanding is necessary for users of their systems to make informed decisions. The explainables developed by this project will be publicly accessible and usable by external projects, increasing algorithmic understanding not only for the initially intended stakeholders but also for the general public. The main contributions of this work include a methodologically rigorous investigation of the knowledge components of algorithmic understanding in learning contexts, one that can also inform model-interpretability discussions in the wider machine learning community.
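The abstract does not commit to any particular AI algorithm, so as a purely hypothetical illustration, the sketch below shows the kind of computation one such explainable might step a learner through: Bayesian Knowledge Tracing (BKT), a student-modeling algorithm common in educational technology. The parameter values and the walkthrough framing are assumptions for illustration, not the project's actual materials.

```python
# Hypothetical "explainable" sketch: steps a learner through how Bayesian
# Knowledge Tracing (BKT) revises its estimate that a skill is mastered after
# each answer. Parameter values below are illustrative, not fitted.

P_INIT, P_TRANSIT, P_SLIP, P_GUESS = 0.3, 0.1, 0.1, 0.2

def bkt_update(p_mastery: float, correct: bool) -> float:
    """One BKT step: a Bayes-rule evidence update, then a learning transition."""
    if correct:
        # Correct answers are evidence of mastery, discounted by guessing.
        posterior = p_mastery * (1 - P_SLIP) / (
            p_mastery * (1 - P_SLIP) + (1 - p_mastery) * P_GUESS)
    else:
        # Incorrect answers are evidence against mastery, discounted by slips.
        posterior = p_mastery * P_SLIP / (
            p_mastery * P_SLIP + (1 - p_mastery) * (1 - P_GUESS))
    # Regardless of the answer, the student may have learned the skill.
    return posterior + (1 - posterior) * P_TRANSIT

p = P_INIT
for step, answer in enumerate([True, True, False, True], start=1):
    p = bkt_update(p, answer)
    print(f"Step {step}: answered {'correctly' if answer else 'incorrectly'}; "
          f"the system now believes P(mastery) = {p:.2f}")
```

An actual explainable would wrap this arithmetic in interactive visuals and prompts; the point of the sketch is that each piece (the prior, slip and guess rates, the learning transition) is a concrete, teachable concept rather than an opaque score.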

The research involves systematically identifying the concepts that constitute "understanding" of an AI algorithm, building brief interactive tutoring systems that target those concepts, and observing the resulting changes in system trust and decision-making in learning contexts. It combines approaches from the learning sciences, human-computer interaction, ethics, and machine learning. Student researchers will perform cognitive task analyses to identify hierarchical models of expert comprehension of AI models, apply a user-centered design process to develop explainables that teach the varying levels of expert comprehension, and run evaluation studies comparing the explainables' impact on algorithmic understanding, trust, and decision-making. The results will not only add to ongoing discussions about ethical algorithmic transparency in the larger machine learning community, but also provide an actionable framework for developing a more AI-informed student and teacher body, along with lightweight explainables that can be appended to external algorithmically enhanced learning environments.
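As a sketch of the first methodological step, the snippet below shows one plausible way to represent the hierarchical knowledge-component models that a cognitive task analysis might yield. The component names, descriptions, and prerequisite structure are invented for illustration and are not findings of the project.

```python
# Hypothetical sketch of a hierarchical knowledge-component model for
# "understanding" an algorithm such as BKT. All components are invented.

from dataclasses import dataclass, field

@dataclass
class KnowledgeComponent:
    name: str
    description: str
    prerequisites: list["KnowledgeComponent"] = field(default_factory=list)

    def learning_order(self) -> list[str]:
        """Prerequisites first (depth-first), then this component, no repeats."""
        order: list[str] = []
        for prereq in self.prerequisites:
            for name in prereq.learning_order():
                if name not in order:
                    order.append(name)
        order.append(self.name)
        return order

probability = KnowledgeComponent(
    "probability", "Beliefs are expressed as values between 0 and 1")
conditioning = KnowledgeComponent(
    "conditioning", "Evidence shifts beliefs via Bayes' rule", [probability])
latent_state = KnowledgeComponent(
    "latent state", "Mastery is hidden and can only be inferred", [probability])
bkt_expertise = KnowledgeComponent(
    "BKT update", "How each answer revises the mastery estimate",
    [conditioning, latent_state])

print(" -> ".join(bkt_expertise.learning_order()))
# probability -> conditioning -> latent state -> BKT update
```

Traversing the prerequisite graph yields a candidate teaching order for an explainable, which is one way the task-analysis and design thrusts described above could connect in practice.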
