August 30, 2023

NLP and Education — Beyond Right or Wrong: Leveraging Language Models to Enhance the Learning Process

About This Video

Recent advances in natural language processing have stirred excitement about using large language models (LLMs) to provide equitable access to high-quality education. Current education paradigms focus strongly on the product of learning—e.g., whether the student gets the answer correct. My research challenges those paradigms by leveraging LLMs to focus on the process—e.g., how students arrive at their answer. I’m interested in using LLMs to understand and support both how teachers teach and how students learn.

In this talk, I will focus on understanding and supporting teachers’ remediation process, i.e., how teachers respond to student mistakes. I will discuss one recent submission that explores the potential of LLMs to assist math tutors in remediating student mistakes. Our work presents ReMath, a benchmark co-developed with experienced math teachers that provides a comprehensive deconstruction of their thought process for effective remediation. The benchmark encompasses three crucial tasks: (1) inferring the type of student error, (2) determining the appropriate strategy and intention to address the error, and (3) generating a response that incorporates this information. We evaluate state-of-the-art instruction-tuned and dialog models on ReMath. Our findings suggest that although models consistently improve upon original tutor responses, we cannot rely on models alone to remediate mistakes. Providing models with the error type (e.g., the student is guessing) and strategy (e.g., simplify the problem) leads to a 75% improvement in response quality over models without that information. Nonetheless, the quality of the best model’s responses still falls short of that of experienced math teachers. Our work sheds light on the potential and current limitations of using LLMs to provide high-quality learning experiences for both tutors and students at scale.
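The three-task structure above can be pictured as a conditioning pipeline: the response-generation step (task 3) may optionally receive the inferred error type (task 1) and the chosen strategy (task 2) as extra context. The sketch below is illustrative only—the function name and prompt format are hypothetical and not the benchmark’s actual interface; it simply shows how conditioning a generation model on error type and strategy, the setting the talk reports as improving response quality, might be wired up.

```python
# Hypothetical sketch of a ReMath-style remediation pipeline.
# All names and prompt templates here are illustrative assumptions,
# not the benchmark's actual API or data format.

def build_remediation_prompt(conversation, error_type=None, strategy=None):
    """Assemble a prompt for a tutoring model.

    Optionally condition the response-generation step (task 3) on the
    inferred error type (task 1) and the chosen strategy (task 2).
    """
    parts = ["Tutoring conversation so far:\n" + conversation]
    if error_type:
        parts.append("The student's error type: " + error_type)
    if strategy:
        parts.append("Remediation strategy to use: " + strategy)
    parts.append("Write the tutor's next response:")
    return "\n\n".join(parts)

# Conditioning on error type and strategy (the richer setting), using
# the examples mentioned in the abstract:
prompt = build_remediation_prompt(
    conversation="Student: 3/4 + 1/4 = 4/8?",
    error_type="guessing",
    strategy="simplify the problem",
)
```

Omitting `error_type` and `strategy` yields the unconditioned baseline setting, so the same template supports both configurations compared in the evaluation.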

In This Video
Rose E. Wang
PhD Student, Stanford University

Rose E. Wang is a fourth-year PhD student at Stanford University, advised by Dorottya (Dora) Demszky and Noah Goodman. She works on natural language processing (NLP) applications in education and is gratefully funded by the NSF Graduate Research Fellowship. She completed her undergraduate studies at MIT, where she worked with Professor Joshua Tenenbaum, Professor Jonathan How, the Google Brain team (as a student researcher), and the Google Brain Robotics team (as an intern). She has received several awards for her research, including the best paper award in computational modeling at CogSci 2020, the best paper award at the NeurIPS 2020 Cooperative AI workshop, and an oral presentation at ICLR 2022 (top <1.6% of submissions).