Strand 2: Socio-technical foundations of trustworthy AI

Team members set up the Jigsaw Interactive Agent (JIA) Wizard of Oz studies and teacher interviews in the NSF iSAT Lab.
Our goal is to improve student-AI teaming—and ultimately student learning and engagement—by designing trustworthy AI. Trustworthy AI refers to AI partners and learning environments built around reliability, security, transparency, safety, and privacy. We will use mixed-methods research to refine key theories and measures for trustworthy AI in K-12 settings, create user-centered designs and design guidelines, and design, deploy, and study trustworthy AI partners. We will also lead work on novel “under-the-hood” environments that help students understand and experiment with core AI concepts, including those related to trustworthiness.
Our guiding research question is: “What socio-technical approaches are needed to appropriately calibrate trust in AI during small group collaborative learning in K-12 classrooms?” Building appropriate trust to support the uptake and effective use of AI tools is particularly critical in collaborative learning, which involves complex knowledge sharing and negotiation among learners and teachers. It also requires psychologically safe learning conditions at multiple levels (individual, small group, whole class) to promote engaged participation by all students. Our work is organized along two themes: Theme 1: Novel Frameworks and Measures to Study Trustworthy Student-AI Teaming and Theme 2: Under-the-hood Designs to Calibrate Trustworthy Student-AI Teaming.
The connections between privacy, fairness, safety, and trustworthy AI are underexplored in classrooms. We therefore build on the extensive research on trust in AI in adult populations to re-envision what it takes to design trustworthy AI in K-12 schools.
We also study novel “under-the-hood” AI learning environments that improve students’ understanding of the inner workings of iSAT’s AI Partners. We run participatory studies with students and teachers to investigate the cognitive, social, and technical factors that shape trust in interactions among students, teachers, and AI. This work can ultimately give students the agency to accept or contest AI inferences and to adapt AI models to their context, with appropriate safeguards in place.

