How AI can help educators see if their learning materials work

Companies like Amazon and Facebook have systems in place that continuously respond to how users interact with their apps to streamline the user experience. What if educators could use the same adaptive experimentation strategy to regularly improve their teaching materials?

That’s the question asked by a group of researchers who have developed a free tool called the Adaptive Experimentation Accelerator. The system, which leverages artificial intelligence, recently won first place in the annual XPrize Digital Learning Challenge, which carries a $1 million prize divided among the winners.

“At Amazon and Facebook, they are rapidly adjusting conditions and changing what their viewers see to quickly learn which small changes are most effective, and then deliver more of those changes to the public,” says Norman Bier, director of the Open Learning Initiative at Carnegie Mellon University, who worked on the project. “When you think about it in an educational context, it really opens up the opportunity to give more students the kinds of things that best support their learning.”

Bier and others involved in the project say they are testing the approach in a variety of educational settings, including public and private primary and secondary schools, community colleges and four-year colleges.

EdSurge sat down with Bier and another project researcher, Steven Moore, a doctoral candidate at Carnegie Mellon’s Human-Computer Interaction Institute, to learn more about their bid to win the Education XPrize and what they see as the challenges and opportunities for leveraging AI in the classroom.

The discussion took place at the recent ISTE Live conference in Philadelphia in front of a live audience. (EdSurge is an independent newsroom that shares a parent organization with ISTE. Learn more about EdSurge’s ethics and policies here and about its supporters here.)

Listen to the episode on Apple Podcasts, Overcast, Spotify or wherever you get your podcasts, or use the player on this page. Or read a partial transcript below, slightly edited for clarity.

EdSurge: The app you developed helps teachers test their teaching materials to see if they are effective. What’s new in your approach?

Norman Bier: If you think about standard A/B testing [for testing webpages], it usually works on averages across a whole population. If we average everything, we’ll have subgroups of students for whom the intervention that works best overall doesn’t work for them individually. One of the real benefits of adaptive experimentation is that we can begin to identify, who are these subgroups of students? What are the specific types of interventions that are best for them? And then deliver to them the intervention that is best for them. So there’s a real opportunity, we think, to better serve students and really approach the concept of experimentation more fairly.
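The process Bier describes, identifying subgroups and then steering each toward the intervention that works best for it, has the shape of a contextual multi-armed bandit. As a hedged illustration only (the class name, the per-subgroup structure and the choice of Thompson sampling are assumptions for this sketch, not the team’s published algorithm), it might look like:

```python
import random

class AdaptiveExperiment:
    """Toy adaptive experiment: one Beta posterior per (subgroup, intervention)."""

    def __init__(self, interventions, subgroups):
        # Beta(1, 1) prior: [successes + 1, failures + 1] for every arm in every subgroup
        self.posteriors = {(g, a): [1, 1] for g in subgroups for a in interventions}
        self.interventions = interventions

    def assign(self, subgroup):
        # Thompson sampling: draw from each arm's posterior, assign the best draw.
        # Arms that have worked for this subgroup get shown more often, but
        # uncertain arms still get explored occasionally.
        draws = {
            a: random.betavariate(*self.posteriors[(subgroup, a)])
            for a in self.interventions
        }
        return max(draws, key=draws.get)

    def record(self, subgroup, intervention, success):
        # Update the posterior with the observed outcome (index 0 = successes)
        self.posteriors[(subgroup, intervention)][0 if success else 1] += 1
```

In simulation, if one intervention genuinely helps a subgroup more, the assignment counts drift toward it over the run, which is the “deliver more of what works” behavior described above.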

I understand one aspect of this is something called learnersourcing. What is that?

Steven Moore: The concept of learnersourcing is similar to crowdsourcing, where a large number of people participate. Think of the game show Who Wants to Be a Millionaire?, when a contestant polls the audience. They ask the audience, Hey, there are four options here and I don’t know which one to pick. What should I choose? And the audience says, Oh, go with choice A. This is an example of crowdsourcing and the wisdom of the crowd. All these minds come together to try and reach a solution.

So learnersourcing is a take on that, where we take all this data from students in these huge open online courses and have it do something for us that we can then put back into the course.

One particular example is getting students who are taking, say, an online chemistry class to create a multiple-choice question for us. So if you have a class with 5,000 students and they all choose to create a question, you now have 5,000 new multiple-choice questions for that chemistry class.

But you might be wondering, what is the quality of those questions? And honestly, it can vary a lot. But with this whole wave of ChatGPT and all these great language models and natural language processing, we are now able to process those 5,000 questions, improve them and find out which ones are the best, so that we can actually use them in the course instead of putting them back in blindly.
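Moore doesn’t detail the pipeline, but the triage step he describes, filtering thousands of student-written questions down to the usable ones, can be sketched. Everything below is illustrative: `basic_checks` and the pluggable `score` function are hypothetical stand-ins for whatever structural filters and language-model quality ratings the team actually uses.

```python
def basic_checks(q):
    """Cheap structural filters a question dict must pass before scoring."""
    return (
        len(q["stem"].split()) >= 5       # stem long enough to be meaningful
        and len(set(q["options"])) == 4   # four distinct answer options
        and q["answer"] in q["options"]   # keyed answer actually present
    )

def triage(questions, score, keep=10):
    """Keep the top-scoring questions that pass the structural checks.

    `score` is any callable rating a question's quality; in practice this
    is where a language-model judgment would plug in.
    """
    valid = [q for q in questions if basic_checks(q)]
    return sorted(valid, key=score, reverse=True)[:keep]
```

The design point is the two-stage split: cheap deterministic checks discard obviously broken submissions before the expensive quality model sees anything.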

Bier: We ask students to write these questions not because we are looking for free labor but because we think it will actually be useful for them as they develop their knowledge. Also, the type of questions and feedback they are giving us helps us improve the course materials. We’ve seen from a great deal of research that a novice perspective is really, really important, particularly in these lower-level courses. And so quite implicit in this approach is the idea that we are taking advantage of the novice perspective that students bring, and that we all lose as we gain experience.

How much does AI play a role in your approach?

Moore: In our XPrize work, we definitely had algorithms on the backend that take all the student data and basically run an analysis to say, Hey, should we show this to student X? So AI was definitely a big part of that.

What is a scenario of how a teacher in a classroom would use your tool?

Bier: The Open Learning Initiative has a course in statistics. It’s an adaptive course; think of it as an interactive, high-tech textbook. We have thousands of students at a university in Georgia using this statistics course instead of a textbook. Students read and watch videos, but above all they practice: they answer questions and receive targeted feedback. And so in that environment, we’re able to introduce these learnersourced questions as well as some approaches to try to motivate students to write their own questions.

Moore: I have a good example from one of our pilot tests for the project. We wanted to see how to engage students in optional activities. We have all these great activities in the OLI system, and we want students to mess around with extra stats problems and whatnot, but nobody really wants to. So we wanted to say, Hey, if we can provide a motivational message, something like, Hey, keep going, do five more problems and you’ll learn more, you’ll do better on the exams and tests. How can we tailor these motivational messages to get students to participate in these optional activities, whether it’s learnersourcing or just answering some multiple-choice questions?

And for the XPrize competition, in our pilot test we had some motivational quotes. But one of them involved a meme, because we thought maybe some college students in this particular course would like it. So we put in a picture of a capybara, which is kind of like a big hamster or guinea pig, sitting at a computer with headphones and glasses on, no text. We said, let’s put it in and see if it gets the students to do it. And out of about five different conditions, the image of the capybara alone, with headphones on at the computer, led to more students participating in the optional activities. Maybe it made them laugh; who knows the exact reason. But out of all these motivational messages, this one had the greatest effect in that particular class.
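The comparison Moore describes, participation rate per motivational-message condition, is just a per-condition proportion. A toy sketch, with invented condition names and data:

```python
def participation_rates(logs):
    """logs: iterable of (condition, participated) pairs -> rate per condition."""
    totals, hits = {}, {}
    for cond, took_part in logs:
        totals[cond] = totals.get(cond, 0) + 1
        hits[cond] = hits.get(cond, 0) + (1 if took_part else 0)
    return {c: hits[c] / totals[c] for c in totals}

def best_condition(logs):
    """Name of the condition with the highest observed participation rate."""
    rates = participation_rates(logs)
    return max(rates, key=rates.get)
```

A real analysis would also check whether the gap between conditions is larger than chance (a significance test or the bandit posteriors themselves), not just compare raw rates.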

There is a lot of excitement and concern about ChatGPT and the latest AI tools in education. Where are you both on that continuum?

Moore: I definitely play both sides: I see there is a lot of fantastic progress going on, but you should also be very hesitant. I would argue that you always need human eyes on the output of whatever generative AI you are using. Never blindly trust what you are given; always put a human eye on it.

I’d also throw out that the plagiarism detectors for ChatGPT are terrible right now. Don’t use those, please. They are not reliable [because of false positives].

Bier: This notion of the human in the loop is really a hallmark of the work we do at CMU, and we’ve been thinking strategically about how to keep that human in the loop. That’s a bit at odds with some of the current hype. There are people rushing to say: what we really need is to build a magical tutor that all of our students can access directly and ask questions of. There are many problems with this. We’re all familiar with the technology’s tendency to hallucinate, which is compounded by the fact that a lot of research on learning tells us we like things that confirm our misconceptions. Our students are the least likely to challenge a bot that tells them things they already believe.

So we’ve been trying to think about the deeper applications of this, and the ways we can use those applications while keeping a human being in the loop. And there are a lot of things we can do. There are aspects of content development for things like adaptive systems that humans, while very good at them, hate to do. As someone who builds course materials, I can tell you that faculty authors hate writing questions with good feedback. It’s just not something they want to spend their time on. So providing ways for these tools to give them early drafts that are still human-reviewed is something we’re excited about.

Listen to the full conversation on this week’s EdSurge podcast.
