Emily: I’ve worked in the higher ed market for over a decade, and one thing I’ve heard repeatedly from students is that they are overwhelmed by all the resources they’re juggling and the concepts they need to learn to get the grades they want.
When they study, they have 17 tabs open with lecture notes, the eTextbook, past homework, Google, YouTube, ChatGPT, etc., and it’s unclear which resources will be most helpful. I remember feeling overwhelmed by this when I was a student!
The other thing we know about students today is that they rarely sit down, crack open the eTextbook, and learn by reading linearly week after week. They use it as a reference to help them complete graded assignments and study for exams. And that’s okay! It’s just a different way of learning.
With the emergence of generative AI, our vision was to reimagine the eTextbook experience by infusing it with valuable study tools that we know students use and making it the trusted, go-to resource for students when they’re stuck or studying at 2:00 a.m.
What were some of your initial projections or goals set before launch? Has anything surprised you since bringing the AI study tool to market in MyLab, Mastering, and eTextbooks?
Chris: Really, just getting it out the door and live to customers felt like Sisyphus pushing a boulder up a hill at times!
Our launch targets were ambitious: broad integration and substantial user engagement across the available content in our courseware. Yet beyond the numbers, my core aspiration was to see our tool become a catalyst for learning, something students and educators would not only use but come to view as a transformative innovation in education.
Emily: Speed! One of our main goals was to deliver an AI-powered capability within the eTextbook thoughtfully and quickly, so we could learn more about how students use generative AI tools in an academic setting.
We also want to be able to make rapid improvements and smart decisions about what comes next. A mantra we used often during those early months of development was “maximize learning.”
Our other goals were to build something students would use repeatedly and find genuinely helpful in a real class setting, and, based on an in-product survey at the end of the semester, the majority did!
One interesting thing we learned was that there was a relationship between AI study tool usage and overall eTextbook engagement. Based on an analysis from our Efficacy & Learning team, AI tool users were over 3x more likely to remain or become what we would consider an efficient/active eTextbook user, rather than a passive user.
This is from just one semester of data so far, but it’s a trend that we’ll be watching closely.
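(To make that statistic concrete: a relative-likelihood figure like "over 3x" comes from comparing the share of active readers among AI-tool users with the share among non-users. The sketch below is purely illustrative; the records, field names, and the active-user threshold are invented, not Pearson's actual data or method.)

```python
# Hypothetical sketch of how a relative-likelihood figure like "3x" can be
# derived. All records and the threshold are invented for illustration.

ACTIVE_THRESHOLD = 10  # invented cutoff: eTextbook sessions per semester

users = [
    # (used_ai_tool, etextbook_sessions)
    (True, 14), (True, 22), (True, 6), (True, 31),
    (False, 3), (False, 12), (False, 5), (False, 2),
]

def share_active(group):
    """Fraction of a group that meets the active-reader threshold."""
    active = sum(1 for _, sessions in group if sessions >= ACTIVE_THRESHOLD)
    return active / len(group)

ai_users = [u for u in users if u[0]]
non_ai_users = [u for u in users if not u[0]]

ratio = share_active(ai_users) / share_active(non_ai_users)
print(f"AI-tool users are {ratio:.1f}x as likely to be active readers")
```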
How is Pearson's AI-powered study tool different, or aiming to be different, from other LLMs and AI tools like ChatGPT, Bard, etc.?
Chris: Pearson's AI distinguishes itself through its precision and pedagogical focus. Unlike general LLMs, our AI is trained on the correct answer to each problem, which ensures pinpoint accuracy. Instead of spoon-feeding students what they need to get their points and move on, we focus on guiding them to understand how to get there.
It is also designed with a commitment to enduring knowledge transfer, laying the foundation for academic success beyond immediate assistance.
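(As a rough illustration of the "guide, don't give the answer" approach Chris describes: one way to build an answer-aware tutoring prompt is to let the system see the correct answer while instructing it never to reveal it. The function and prompt wording below are hypothetical, not Pearson's implementation.)

```python
# Hypothetical sketch of an answer-aware tutoring prompt: the system knows
# the correct answer but is told to guide the student toward it rather than
# reveal it. Not Pearson's actual prompt.

def build_tutor_prompt(question: str, correct_answer: str) -> str:
    """Compose a system prompt that withholds the answer while steering."""
    return (
        "You are a study assistant for a college course.\n"
        f"Problem: {question}\n"
        f"Correct answer (for your reference only): {correct_answer}\n"
        "Rules:\n"
        "- Never state the correct answer directly.\n"
        "- Ask one guiding question at a time.\n"
        "- If the student's attempt is wrong, point to the specific step "
        "that went wrong and suggest what to reconsider."
    )

print(build_tutor_prompt("What is the derivative of x^2?", "2x"))
```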
Emily: Pearson’s AI study tools are plugged into and informed by the assignments and content that faculty chose for the course. We assumed instructors would value that, but we didn’t anticipate how important that would be for students, too.
Students could easily ask ChatGPT or Gemini for a summary or for practice questions, but they choose to ask the eTextbook AI study tool instead because they know the response will be more relevant to their instructor, their assignments, and their success in the course.
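(A toy sketch of what "plugged into the content faculty chose" can look like in practice: restrict retrieval to instructor-assigned chapters before answering, so responses stay scoped to the course. The data model and keyword scoring below are invented for illustration and are not Pearson's architecture.)

```python
# Toy sketch of course-scoped retrieval: candidate passages are filtered to
# the chapters the instructor assigned before any ranking happens. The data
# model and scoring are invented for illustration.

from dataclasses import dataclass

@dataclass
class Passage:
    chapter: str
    text: str

etextbook = [
    Passage("ch1", "Supply and demand determine market price..."),
    Passage("ch4", "Elasticity measures responsiveness of demand..."),
    Passage("ch9", "Game theory models strategic interaction..."),
]

assigned_chapters = {"ch1", "ch4"}  # chosen by the instructor for this course

def retrieve(query: str, k: int = 2) -> list[Passage]:
    """Naive keyword retrieval, filtered to instructor-assigned chapters."""
    in_scope = [p for p in etextbook if p.chapter in assigned_chapters]
    scored = sorted(
        in_scope,
        key=lambda p: sum(w in p.text.lower() for w in query.lower().split()),
        reverse=True,
    )
    return scored[:k]

for p in retrieve("how does elasticity of demand work?"):
    print(p.chapter, "->", p.text)
```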
What improvements or optimizations have been made to the study tool based on user feedback or data gathered during the beta period? Can you provide examples of how customer insights have influenced development?
Chris: Iterative refinement has been key. We’ve evolved our chat strategy based on robust interaction data, enhancing our approach with more layered questioning and adaptive feedback we call “nudges,” prompts that make students think rather than just giving them the answer.
This evolution is a direct outcome of feedback available in the product (thumbs up/down), instructor focus groups, and student interviews. We still have a backlog of additional things we want to do based on what we learned in these sessions!
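(For a flavor of how a "nudge" ladder might work: each retry earns a slightly more specific hint, and the full answer is never given. The tiers and wording below are invented, not Pearson's actual feedback logic.)

```python
# Hypothetical sketch of a "nudge" ladder: escalating hints across retries,
# capped at the most specific nudge, with the answer never revealed. The
# tiers and wording are invented for illustration.

NUDGES = [
    "Re-read the problem: what quantity are you being asked to find?",
    "Which formula from this section relates the quantities given?",
    "Check your substitution: did every value get the right units?",
]

def next_nudge(attempt_number: int) -> str:
    """Return an escalating hint; cap at the most specific nudge."""
    index = min(attempt_number, len(NUDGES) - 1)
    return NUDGES[index]

for attempt in range(4):
    print(f"Attempt {attempt + 1}: {next_nudge(attempt)}")
```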