Video: Psychologist Alison Cheng on making educational assessments more informative, fair, and efficient

Author: Todd Boruff

 

“It is really important for us to look into whether we're promoting fairness, we're promoting equity, while personalizing and automating everything.” 

— Ying (Alison) Cheng


Ying (Alison) Cheng is a professor of psychology, a fellow of the Institute for Educational Initiatives, and associate director of the Lucy Family Institute for Data and Society at the University of Notre Dame. Her research focuses on psychological and educational measurement. More information can be found on her faculty page.


Video Transcript

I direct the Learning Analytics and Measurement in Behavioral Sciences Lab. We focus on research in educational and psychological measurement, using statistical methods and models to make assessment more efficient, more informative, and more fair.

To make assessments more informative, we not only look at how well students do overall; we also provide diagnostic feedback on what they know, what they don't know, which skills they've mastered, and which skills they're still weak on. On the fairness side, we use statistical models to detect potential biases an assessment might have against certain subgroups, for example based on their cultural or language background. To make assessments more efficient, we develop and build adaptive testing algorithms that construct a tailored test for each individual test taker. By doing that, we can cut the test length or testing time roughly in half while still maintaining measurement precision.
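To give a concrete sense of how such adaptive algorithms work, the sketch below shows a minimal computerized adaptive testing loop under a two-parameter logistic IRT model: each next question is the unused item that is most informative at the current ability estimate, and the estimate is updated after every response. This is an illustrative simulation with invented item parameters, not the lab's actual platform code.

```python
# A minimal computerized adaptive testing (CAT) sketch, for illustration only.
# It assumes a two-parameter logistic (2PL) IRT model and a simulated test taker;
# none of the names or parameters here come from the lab's actual platform.
import numpy as np

rng = np.random.default_rng(0)

def p_correct(theta, a, b):
    """Probability of a correct response under the 2PL model."""
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

def item_information(theta, a, b):
    """Fisher information of a 2PL item at ability level theta."""
    p = p_correct(theta, a, b)
    return a**2 * p * (1.0 - p)

def estimate_theta(responses, a, b, grid=np.linspace(-4, 4, 161)):
    """Crude maximum-likelihood ability estimate over a grid of theta values."""
    loglik = np.zeros_like(grid)
    for resp, ai, bi in zip(responses, a, b):
        p = p_correct(grid, ai, bi)
        loglik += resp * np.log(p) + (1 - resp) * np.log(1 - p)
    return grid[np.argmax(loglik)]

# Simulated item bank: discrimination (a) and difficulty (b) parameters.
n_items = 200
a_bank = rng.uniform(0.8, 2.0, n_items)
b_bank = rng.normal(0.0, 1.0, n_items)

true_theta = 0.7          # simulated test taker's ability
theta_hat = 0.0           # start from an average ability estimate
administered, responses = [], []

for _ in range(20):       # a short, 20-item adaptive test
    # Pick the unused item that is most informative at the current estimate.
    info = item_information(theta_hat, a_bank, b_bank)
    info[administered] = -np.inf
    item = int(np.argmax(info))

    # Simulate the test taker's response to that item.
    resp = int(rng.random() < p_correct(true_theta, a_bank[item], b_bank[item]))
    administered.append(item)
    responses.append(resp)

    # Re-estimate ability after each response.
    theta_hat = estimate_theta(responses,
                               a_bank[administered], b_bank[administered])

print(f"true ability {true_theta:.2f}, estimate after 20 items {theta_hat:.2f}")
```

Because every item is chosen to be maximally informative for that particular test taker, a short adaptive test can reach a precision comparable to a much longer fixed-form test, which is the efficiency gain described above.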

We also build platforms that integrate these advanced diagnostics and technologies. For example, our lab has developed a testing platform that uses the adaptive algorithms to provide a personalized assessment experience for each student. Because we build these platforms ourselves, we can look not only at performance metrics but also at human-computer interaction: when students log in, how long they spend on each question, whether they seek out the feedback, and whether they pay close attention to the feedback the platform provides. So we're painting a much richer picture of the whole learning process instead of just the outcome.
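The interaction data described above can be pictured as simple per-question event records. The sketch below shows one hypothetical way to structure them; the field names are illustrative and do not come from the lab's platform.

```python
# A hypothetical record for per-question interaction events, for illustration only.
# Field names are invented here; they are not the lab's actual logging schema.
from dataclasses import dataclass
from datetime import datetime
from typing import Optional

@dataclass
class QuestionEvent:
    student_id: str
    item_id: str
    started_at: datetime                       # when the question was shown
    answered_at: datetime                      # when the response was submitted
    correct: bool
    feedback_opened: bool                      # did the student open the feedback?
    feedback_seconds: Optional[float] = None   # time spent on feedback, if opened

    @property
    def response_seconds(self) -> float:
        """How long the student spent on the question itself."""
        return (self.answered_at - self.started_at).total_seconds()
```

Aggregating records like these alongside the test scores is what allows the process-level analyses the transcript describes, rather than looking at outcomes alone.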

It is really important for us to look into whether we're promoting fairness, we're promoting equity, while personalizing and automating everything. We can't take it for granted that personalization is always good, or that automation is always going to be good. How are these algorithms helping students? Are they helping to narrow achievement gaps instead of widening them? That's a key interest of mine for future research. I think Notre Dame offers a unique opportunity to tackle the problem of fairness and equity, given its focus on ethics and fairness rooted in its Catholic mission. I have also worked a lot with different centers and institutes on campus that share that mission, and I collaborate a lot with statisticians and computer scientists on campus, so there's a very supportive network for conducting the type of research that I do.

You can also watch this video on YouTube.