This past spring, Stanford University computer scientists unveiled their pandemic brainchild, Code In Place, a project in which 1,000 volunteer teachers taught an introductory Stanford computer science course to 10,000 students around the globe.

While the instructors could share their knowledge with hundreds, even thousands, of students at a time during lectures, when it came to homework, providing large-scale, high-quality feedback on student assignments seemed like an insurmountable task. Chris Piech, assistant professor of computer science and co-creator of Code In Place, said:

“It was a free class anyone in the world could take, and we got a whole bunch of humans to help us teach it. But the one thing we couldn’t really do is scale the feedback. We can scale instruction. We can scale content. But we couldn’t really scale feedback.”

To solve this problem, Piech worked with Chelsea Finn, assistant professor of computer science and of electrical engineering, and PhD students Mike Wu and Alan Cheng to develop and test a first-of-its-kind artificial intelligence teaching tool capable of assisting educators in grading and providing meaningful, constructive feedback for a high volume of student assignments.

Their innovative tool, which is detailed in a Stanford AI Lab blog post, exceeded their expectations.

Teaching the AI tool

In education, it can be difficult to get lots of data for a single problem, like hundreds of instructor comments on one homework question. Companies that market online coding courses are often similarly limited, and therefore rely on multiple-choice questions or generic error messages when reviewing students’ work. Finn said:

“This task is really hard for machine learning because you don’t have a ton of data. Assignments are changing all the time, and they’re open-ended, so we can’t just apply standard machine learning techniques.”

The answer to scaling up feedback was a method called meta-learning, in which a machine learning system learns across many different problems, each with only a small amount of data.

“With a traditional machine learning tool for feedback, if an exam changed, you’d have to retrain it, but for meta-learning, the goal is to be able to do it for unseen problems, so you can generalize it to new exams and assignments as well,” said Wu, who has studied computer science education for over three years.

The group found it much easier to get a little bit of data, such as 20 pieces of feedback, on a large variety of problems. Using data from previous iterations of Stanford computer science courses, they were able to achieve accuracy at or above human level on 15,000 student submissions, a task that was not possible just one year earlier, the researchers noted.
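To make the idea concrete, here is a minimal, illustrative sketch of the meta-learning setup described above. It is not the Stanford team's actual model or data (the blog post details their system): each "task" stands in for one homework problem with roughly 20 annotated submissions, and the sketch uses the simple first-order Reptile algorithm with invented embedding sizes and random placeholder data.

```python
# Illustrative meta-learning sketch (assumptions: Reptile algorithm, made-up
# dimensions, random data standing in for embedded student submissions).
import copy
import torch
from torch import nn

EMBED_DIM = 64      # assumed size of a precomputed embedding of a submission
NUM_LABELS = 5      # assumed number of feedback categories per problem
INNER_STEPS = 5
INNER_LR = 0.01
META_LR = 0.1

def make_model():
    return nn.Sequential(nn.Linear(EMBED_DIM, 128), nn.ReLU(), nn.Linear(128, NUM_LABELS))

def sample_task(num_examples=20):
    """Stand-in for one homework problem with ~20 labeled feedback examples."""
    x = torch.randn(num_examples, EMBED_DIM)
    y = torch.randint(0, NUM_LABELS, (num_examples,))
    return x, y

meta_model = make_model()
loss_fn = nn.CrossEntropyLoss()

for meta_step in range(1000):
    # Clone the shared meta-parameters and adapt them to one sampled task.
    task_model = copy.deepcopy(meta_model)
    opt = torch.optim.SGD(task_model.parameters(), lr=INNER_LR)
    x, y = sample_task()
    for _ in range(INNER_STEPS):
        opt.zero_grad()
        loss_fn(task_model(x), y).backward()
        opt.step()

    # Reptile meta-update: nudge the meta-parameters toward the adapted ones,
    # so that a few gradient steps on a new, unseen problem go a long way.
    with torch.no_grad():
        for meta_p, task_p in zip(meta_model.parameters(), task_model.parameters()):
            meta_p += META_LR * (task_p - meta_p)
```

The point of this structure, whatever the specific algorithm, is the one Wu describes: the model is optimized so that it adapts to a brand-new assignment from a handful of examples, rather than being retrained from scratch whenever an exam changes.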
