AI is rapidly reshaping classrooms, but technical capabilities alone cannot guide adoption. My research focuses on how AI can support learning, equity, and institutional trust—rather than undermine them.
As a Handshake AI Fellow, I collaborate with peers across sectors to examine equitable uses of generative AI in workforce and education systems. My recent work includes an AERA proposal analyzing how AI-generated explanations influence student reasoning in history, civics, biology, and policy. This research examines not only accuracy but also interpretive effects: how students adopt, trust, or challenge AI suggestions.
Three central questions guide this inquiry:
- How does AI affect students’ civic reasoning?
- How can schools preserve academic integrity and human judgment?
- What governance structures ensure AI is used responsibly?
The goal is not simply to integrate new tools but to design systems that protect fairness, transparency, and student agency.
I envision an approach to AI that strengthens humanity rather than replacing it.

