Beyond Answers: What ChatGPT’s Study Mode Tells Us About the Future of AI in Learning

“Tutoring isn’t just being able to answer a question… it goes way beyond that.”

Amanda Bickerstaff, Co-Founder and CEO of AI for Education

OpenAI’s release of study mode signals a deliberate attempt to reshape how students interact with generative AI. Rather than delivering answers directly, the feature introduces scaffolded prompts, concept checks, and adaptive questions meant to guide learners through the material. The design reflects input from teachers and cognitive scientists, aiming to shift AI’s role from a quick-response tool to one that mirrors some of the principles of good pedagogy.

The launch comes at a time when classrooms are already experimenting with AI. Across the United States and internationally, students are using tools like ChatGPT to draft essays, solve math problems, and generate study notes. Educators have raised concerns about overreliance, accuracy, and the impact on critical thinking. At the same time, many acknowledge AI’s potential to extend personalized support to students who might not otherwise receive it. Study mode attempts to navigate these tensions by offering structure rather than shortcuts.

Yet the introduction of such a feature raises questions that go beyond product design. Can AI take on more instructional behaviors without eroding the essential role of teachers? Will students learn to use these systems responsibly, or will they outsource their learning altogether? How should schools adapt professional development to prepare educators for a classroom where AI is no longer a novelty but a daily presence?

To better understand what teachers need to know about study mode and educators’ perspectives on AI literacy, we spoke with Amanda Bickerstaff, the Co-Founder and CEO of AI for Education.

Meet the Expert: Amanda Bickerstaff, Co-founder and CEO of AI for Education

Amanda Bickerstaff is the co-founder and CEO of AI for Education, an organization focused on providing AI literacy training to one million educators and leading the responsible adoption of generative AI across the education ecosystem. Its mission is to empower teachers, improve student outcomes, and prepare students for the future.

Bickerstaff is a former high school biology teacher and edtech executive with over 20 years of experience in the education sector, giving her a deep understanding of the challenges and opportunities that AI presents. She is a frequent consultant, speaker, and writer on AI in education, leading workshops and professional development across K-12 and higher education. Amanda is committed to helping schools and teachers maximize their potential by adopting AI ethically and equitably.

Study Mode: Ambition and Early Limits

“Study mode” was introduced by OpenAI as a way to make ChatGPT more useful for learners. The company described the feature as developed with input from teachers and cognitive scientists, designed to help students work through problems rather than simply receive answers. It represents a step toward aligning AI tools with classroom practices, signaling an intent to position generative AI as an instructional partner rather than only a productivity tool.

Educators working with AI in schools have offered mixed assessments of this first version. Bickerstaff points out that the change is not as substantial as it might appear: “I think that study mode is not that different because it’s actually just a system prompt and one that is not deeply pedagogical; it hasn’t really changed the ways in which ChatGPT works,” she explained. In her view, while the intent is clear, the current implementation does not yet reflect the deeper instructional design needed for tutoring.

Bickerstaff also noted that OpenAI’s move must be understood in the context of a broader industry trend: “It’s interesting to see how far Google, Anthropic, and OpenAI are pushing into the education space. There is clearly a signal that students are using these tools,” she said. She highlighted Google’s LearnLM as an example of a model fine-tuned specifically for learning tasks, offering a more robust though still limited attempt at pedagogy.

The launch of study mode is therefore less significant for its technical design than for what it represents: an acknowledgement that AI is moving closer to the core of how students learn. Even if the current version is modest, it signals a competitive push among major technology firms to capture the education market and define how AI integrates with pedagogy. That shift places new pressure on educators, who must now interpret these tools for classrooms and determine how to use them responsibly.

Literacy, Usability, and Gaps

For educators, the introduction of study mode is less about the feature itself than about how teachers and students learn to use generative AI responsibly. The core challenge, Amanda Bickerstaff explains, is that these systems are not intuitive.

“Most people have never interacted with technology where they really have control,” Amanda says. “The ways in which you ask questions, get feedback, and interact with generative AI models really deeply impact the quality of what you get back.”

She points out that unless teachers understand how prompts shape outputs, the results are often generic and not suited to classroom needs. “You can have the most simple lesson plan, and it’s going to give you—because of the way that the tools are designed—an answer, but it’s going to be very generic. It’s not going to be fit for purpose.”

This gap in usability ties directly to literacy. Bickerstaff argues that many educators and students do not realize how often these tools make mistakes or produce inaccurate information.

“People generally do not understand just how many times these tools make mistakes, hallucinations, and inaccuracies,” she explains. “They can be both very easy to spot and very difficult to spot, so that makes it so important for the critical evaluation component.”

Rapid development cycles add to the challenge. New models emerge quickly, leaving many teachers feeling as though they cannot keep up, and even when they adopt one version, updates can shift usability. As Bickerstaff observes, “They feel like they can’t keep up. And even if they do have AI [skills], they don’t feel like it’s going to translate when things like GPT-5 come out.”

These dynamics highlight the need for professional development that goes beyond technical training. For teachers to use tools like study mode effectively, they require AI literacy that combines practical skill, critical evaluation, and an understanding of both the limits and potential of the technology.

From Tool Use to True Literacy

While younger generations often adopt new technologies quickly, familiarity does not equal literacy.

Amanda Bickerstaff cautions against assuming that students’ ability to use generative AI translates into meaningful or responsible learning. “We cannot say that tool use equals literacy. Just because a young person can use generative AI models, it does not mean they’re AI literate, and I think that is a massive misconception.”

This gap has clear consequences in classrooms. Many students accept the first response a system provides, copy it directly, and bypass the critical thinking that learning requires. “They’re using it in the worst possible way,” she describes. “They are accepting the first output. They are cutting and pasting. They’re not evaluating. They’re not using it for things that are actually helpful to their learning. They’re essentially cognitively offloading.” In other words, the tool becomes a shortcut rather than a support, arguably limiting learning across vital growth and development stages.

The risks extend beyond academics. In her work with schools, Bickerstaff has seen growing concern about how students turn to AI systems for personal or emotional support. She pointed to a high-profile case involving a teenager who misused a chatbot, underscoring the urgency of stronger safeguards. “Artificial companion kinship and digital well-being is a huge issue right now… young people are using these tools for friends, girlfriends, boyfriends, without realizing that they are not real. These tools are designed to feel like they’re real.”

The combination of academic shortcuts and emotional reliance raises important questions about student safety and development. Without structured guidance, students risk replacing genuine learning and relationships with automated responses. Bickerstaff argues that intentional AI literacy programs can counter this trend by teaching students how to evaluate outputs, apply them judiciously, and recognize the boundaries between machine responses and human experience.

Her organization is now extending its mission to include students directly, offering free courses on AI literacy designed to help young people approach the tools thoughtfully. This shift reflects a growing recognition that if AI becomes part of everyday learning, students must be equipped not just with access but with the skills and judgment to use it responsibly.

Building Responsible AI for Learning

Study mode highlights how quickly AI is moving from novelty toward becoming part of everyday learning. Its current design may be limited, but the signal is clear: major technology firms are beginning to shape tools that look beyond productivity and toward pedagogy. For educators and policymakers, the challenge now is ensuring that these tools develop in ways that genuinely support learning.

Responsible AI in education will require more than technical updates. Developers must work closely with teachers to embed practices that reflect how students actually learn, from scaffolding concepts to encouraging reflection and critical thinking. Schools will need to prioritize AI literacy so that both educators and students understand not just how to use these tools, but how to evaluate them and recognize their limits.

The arrival of study mode also points to a broader future where AI systems take on more instructional roles. Whether that future strengthens or undermines learning will depend on choices made today about safeguards, usability, and pedagogy. As Amanda Bickerstaff observed, “Tutoring isn’t just being able to answer a question… it goes way beyond that.”

What is at stake is not simply how quickly students can get answers, but whether AI becomes a force for deeper understanding and wider access to quality education.

Chelsea Toczauer

Chelsea Toczauer is a journalist with experience managing publications at several global universities and companies related to higher education, logistics, and trade. She holds two BAs in international relations and Asian languages and cultures from the University of Southern California, as well as a dual-accredited US-Chinese MA in international studies from the Johns Hopkins University-Nanjing University joint degree program. Toczauer speaks Mandarin and Russian.