Research

My work explores how to bring insights from AI and cognitive science into real-world education and health settings. At its core is the belief that artificial intelligence should help people learn, not merely automate tasks, by making the invisible processes of learning and decision-making visible, measurable, and actionable. I build systems that are clinically and pedagogically grounded, designed for real workflows, and evaluated with criteria that matter: reliability, interpretability, and impact on outcomes.

Current Projects

Exercise Recommendation and Student Learning Modeling
Drawing from neuroscience, cognitive psychology, and modern machine learning, I am developing systems that help learners choose high-impact practice activities and track how their understanding evolves over time. This work combines insights from memory, attention, and metacognition with computational student modeling to support adaptive practice, durable retention, and conceptual integration. A central goal is to make learning dynamics legible to both students and educators—turning abstract learning science into concrete feedback, targeted practice, and interpretable trajectories of growth.
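As one illustration of computational student modeling, Bayesian Knowledge Tracing (BKT) is a classical way to track how the probability of mastery evolves across practice attempts. This is a minimal sketch, not my production model; the slip, guess, and learn parameters below are illustrative defaults rather than fitted values.

```python
def bkt_update(p_mastery: float, correct: bool,
               slip: float = 0.1, guess: float = 0.2,
               learn: float = 0.15) -> float:
    """Return the updated probability of skill mastery after one response."""
    if correct:
        # Bayes rule: correct answers come from mastery (no slip) or guessing.
        evidence = p_mastery * (1 - slip)
        posterior = evidence / (evidence + (1 - p_mastery) * guess)
    else:
        # Incorrect answers come from slipping despite mastery, or non-mastery.
        evidence = p_mastery * slip
        posterior = evidence / (evidence + (1 - p_mastery) * (1 - guess))
    # Account for the chance of learning during this practice opportunity.
    return posterior + (1 - posterior) * learn

# A short practice sequence: the mastery estimate rises with correct answers
# and dips after an error, giving an interpretable trajectory of growth.
p = 0.3
for outcome in [True, True, False, True]:
    p = bkt_update(p, outcome)
print(f"estimated mastery: {p:.2f}")
```

The per-step estimates are exactly the kind of "legible learning dynamics" described above: a number a student or educator can inspect and question.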

Automated Short-Answer Grading and Learning-Outcomes Analysis
Using large language models and natural language processing, I build tools that analyze short-answer responses, assess conceptual understanding, and support evidence-based instruction at scale. These systems are grounded in explicit rubrics, structured criteria, and transparency checks, with an emphasis on reliability across prompts, graders, and contexts. The focus is not automation for its own sake, but closing the loop between learning objectives, assessment evidence, and feedback that is actionable, fair, and pedagogically aligned. This work was recently published in NEJM AI.
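The explicit-rubric approach can be sketched as two steps: assemble a prompt that exposes every criterion to the model, and validate the model's structured reply against that same rubric before any score is recorded. The rubric, helper names, and JSON reply format below are hypothetical illustrations, not the published system.

```python
import json

# Hypothetical rubric for illustration only.
RUBRIC = [
    {"id": "C1", "criterion": "Identifies the key mechanism", "points": 2},
    {"id": "C2", "criterion": "Supports the claim with evidence", "points": 2},
    {"id": "C3", "criterion": "Uses correct terminology", "points": 1},
]

def build_grading_prompt(question: str, answer: str) -> str:
    """Assemble a prompt that makes every grading criterion explicit."""
    lines = [f"- {c['id']} ({c['points']} pts): {c['criterion']}" for c in RUBRIC]
    return (
        f"Question: {question}\n"
        f"Student answer: {answer}\n"
        "Score each criterion and reply as JSON "
        '{"scores": {"C1": int, ...}, "feedback": str}:\n'
        + "\n".join(lines)
    )

def validate_reply(reply: str) -> int:
    """Check a model's JSON reply against the rubric; return the total score."""
    data = json.loads(reply)
    total = 0
    for c in RUBRIC:
        score = data["scores"][c["id"]]  # every criterion must be scored
        if not 0 <= score <= c["points"]:
            raise ValueError(f"{c['id']} out of range")
        total += score
    return total

# Example: validating a well-formed (mock) model reply.
mock = '{"scores": {"C1": 2, "C2": 1, "C3": 1}, "feedback": "Cite data."}'
print(validate_reply(mock))
```

Keeping the rubric in one data structure that drives both the prompt and the validation is one simple way to make grading auditable: a reviewer can see exactly what was asked and confirm every criterion was scored within bounds.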

Flipped-Classroom Chatbot Support
I am developing a conversational AI assistant that supports flipped-classroom environments by engaging students before class, prompting reflection on readings, and surfacing areas of uncertainty. Instructors receive structured summaries of themes, misconceptions, and points of confusion, enabling them to shape class time toward synthesis, reasoning, and higher-order application. The aim is to make preparation more interactive and inclusive while giving educators visibility into how student thinking is forming—without replacing the human judgment that good teaching depends on.
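The instructor-facing summary step above can be sketched very simply: once student chatbot turns are tagged with misconception labels (here by hand; in practice a classifier or language model would assign them), aggregation is just counting. The names and labels below are illustrative.

```python
from collections import Counter

# Hypothetical pre-class chatbot turns, each tagged with a misconception label.
tagged_turns = [
    ("alice", "confuses correlation with causation"),
    ("bob",   "confuses correlation with causation"),
    ("cara",  "unsure how to read a forest plot"),
    ("dan",   "confuses correlation with causation"),
]

def summarize_for_instructor(turns, top_k: int = 3):
    """Return the most common points of confusion with student counts."""
    counts = Counter(label for _, label in turns)
    return counts.most_common(top_k)

for label, n in summarize_for_instructor(tagged_turns):
    print(f"{n} student(s): {label}")
```

The instructor sees only aggregate themes and counts, which is what lets class time be steered toward the most widespread confusions without surfacing any individual student's struggles.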

Faculty Support for Learning-Focused Syllabus Design
Faculty play a central role in shaping how AI enters the classroom. I am building a toolkit that helps instructors design courses, assessments, and feedback strategies grounded in cognitive science and clear, data-informed rubrics. This initiative focuses on responsible and accessible AI integration: supporting instructors who want to use AI to strengthen alignment between goals, activities, and assessment, without requiring them to become technical specialists. The objective is to lower the barrier to rigorous course design while preserving nuance, autonomy, and accountability.

Health and Imaging AI

Alongside my educational work, I continue research in biomedical imaging and clinical risk modeling. My doctoral research in Medical Biophysics at the University of Toronto focused on AI for breast MRI interpretation and risk prediction, integrating imaging and text data to capture subtle markers of disease and quantify longitudinal change. I have also worked on explainable segmentation, language understanding in radiology reports, and statistical modeling to characterize variability in imaging biomarkers. These experiences continue to shape my approach to educational AI—reinforcing that transparency, robustness, and fairness are essential whether the data represent patients or students, and that evaluation must be tied to real decisions and real consequences.

Overarching Themes

  1. Transparency and Explainability
    Whether in clinical research or education, tools must make sense to their users. I design models where reasoning and results can be examined, questioned, and trusted, with evaluation that prioritizes interpretability and reliability rather than performance alone.

  2. Learner-Centered Design
    I focus on the cognitive journey of the learner: what they know, what they are ready to explore next, and how understanding changes through feedback, practice, and reflection. My goal is AI that supports learning as a process over time, not a single snapshot.

  3. Scalable and Responsible Infrastructure
    I aim to build systems that scale thoughtfully and transparently. AI should amplify human expertise, not flatten it. The goal is infrastructure that supports growth, curiosity, and accountability—for students, educators, and clinical stakeholders alike.


© 2026 Grey Kuling. Built with Just the Docs.