
Can Stanford’s AI Curriculum Reshape High School Education?

“It was especially important for us to emphasize that every student should feel empowered to participate in conversations about how AI is produced and used in society.”

Mike Taubman, Stanford Digital Education Fellow

Stanford Digital Education has unveiled an AI curriculum for high schools, integrating custom lesson plans with Google’s AI Essentials course.

Artificial intelligence is becoming a routine part of how people work, learn, and communicate. Yet for most U.S. high school students, formal education in how AI works, let alone how to use or evaluate it, remains limited, inconsistent, or entirely absent. In response, a team at Stanford Digital Education has developed a modular AI curriculum aimed at filling that gap.

The curriculum was created by Mike Taubman, a full-time high school teacher and Stanford Digital Education Fellow, alongside project innovation lead Michael Acedo and AI researcher Parth Sarin. The team designed the course to integrate into a wide variety of existing subjects and to be usable by any high school teacher—not just those with a background in computer science. Built in coordination with Google’s AI Essentials certificate program, the curriculum is currently being piloted in schools ranging from Newark, New Jersey, to Niagara Falls, New York.

The team’s objective is not to create a one-size-fits-all national standard, but rather to contribute a flexible, classroom-tested model to the broader conversation on AI literacy. As schools weigh how best to prepare students for a world increasingly shaped by automation and algorithmic systems, this effort reflects one possible approach: accessible, adaptable, and designed with both practical and ethical considerations in mind.

The curriculum’s early implementation offers a window into what it takes to bring meaningful AI education into public schools, particularly in under-resourced districts. More broadly, it raises questions about what kind of AI knowledge today’s students actually need, and how education systems might begin to deliver it.

We spoke with a Stanford Digital Education expert to better understand the curriculum's development and the potential of this new model for scaling responsible AI education.

Meet the Expert: Mike Taubman, Stanford Digital Education Fellow


Mike Taubman is a Stanford Digital Education Fellow and a full-time teacher and program director at North Star Academy’s Washington Park High School in Newark, NJ (part of the Uncommon Schools network). He’s been a teacher since 2005 and currently designs and leads the Summit program, a daily purpose-development and career-connected learning experience for 11th and 12th graders at North Star Academy.

In his role as an SDE Fellow, he is part of a three-person team developing and piloting SDE’s AI Literacy course. He also helped develop and pilot SDE’s moral philosophy course, “Searching Together for the Common Good,” which he’s taught every year at Uncommon Schools since 2022. In both of these courses, he serves as a professional mentor to teachers new to the SDE program. Mike earned a BA in English and an MA in teaching from Stanford and currently lives in Brooklyn, NY, with his wife and two daughters.

Developing an AI Curriculum with Local Context in Mind

When shaping the foundation of their high school AI curriculum, Mike Taubman and his team at Stanford Digital Education looked for a starting point that would be both credible and practical. They selected Google's AI Essentials certificate course, a short, beginner-friendly program focused on real-world applications of AI, as the backbone of the classroom experience.

“Google provided Stanford Digital Education with 100 free AI Essentials course licenses to distribute to our students, giving them the opportunity to earn an industry-recognized certificate at no cost to them or their districts,” Taubman explained. The certificate, which contains 10 to 20 hours of online content, was designed to be approachable for students from all backgrounds. Its flexibility also allowed teachers to adapt it to different classroom settings without sacrificing core learning outcomes.

Each week of the Google course introduces a new theme ranging from basic technical concepts to ethical use cases. Rather than replicate the content, Taubman’s team designed classroom materials that complemented the certificate’s structure while adding space for guided discussion, reflection, and local context. “Building on this framework, we expanded the curriculum to encourage deeper critical thinking and ethical exploration of AI,” Taubman said.

The team also wanted students to engage with AI tools not just as users, but as informed participants in a broader societal conversation. “We challenged students not only to use AI tools but also to question their outputs, recognize biases, and consider broader societal issues like labor exploitation and democratic participation in AI governance,” he said.

By integrating the certification program into a modular, teacher-led curriculum, the team created a bridge between practical digital literacy and the larger questions students are likely to face as AI becomes a more regular part of their lives.

Teaching Students to Think About AI, Not Just Use It

While Google’s AI Essentials course provides the structural foundation, the core of the classroom experience lies in how the material is brought to life. For Mike Taubman and his team, the goal wasn’t just to help students learn how AI works—it was to give them the tools to think critically about its role in the world around them.

That required moving beyond technical literacy to consider social context and ethical consequences. “We wanted to teach not only the technical basics of AI, but also about how bias and values are reflected in AI systems,” Taubman said.

To support that, the curriculum was designed for flexible integration across disciplines, from humanities to economics, making it accessible to students regardless of their academic track or prior experience.

The classroom lessons emphasized active participation and real-world connection. In one example, Taubman described using Google’s Teachable Machine to demonstrate how training data shapes AI outcomes. Students in his Newark classroom trained the tool to distinguish between a plastic water bottle and a pink highlighter. When the system later misidentified a black whiteboard eraser as the pink highlighter, the reaction was immediate. “Students were visibly taken aback,” he recalled. That misclassification became a springboard for discussion—about algorithmic reliability, real-world implications, and the social risks associated with flawed AI systems.
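Teachable Machine itself is a no-code, browser-based tool, but the dynamic the students observed can be sketched in a few lines of Python: a classifier trained on only two categories has no way to answer "neither," so anything unfamiliar gets forced into one of the labels it already knows. The color features and training examples below are invented purely for illustration; they are not drawn from the Newark lesson or from Teachable Machine's internals.

```python
# Minimal sketch: a two-class classifier must assign every input to one of
# the labels it was trained on, even inputs unlike anything it has seen.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

# Hypothetical training data: average RGB color of each photo (0-255 scale).
X_train = np.array([
    [200, 210, 220],   # clear plastic water bottle (light, bluish-grey)
    [190, 205, 215],
    [230, 60, 150],    # pink highlighter
    [240, 80, 160],
])
y_train = ["water bottle", "water bottle",
           "pink highlighter", "pink highlighter"]

model = KNeighborsClassifier(n_neighbors=1)
model.fit(X_train, y_train)

# A black whiteboard eraser was never part of the training data, but the
# model has no "none of the above" option: it returns one of the two labels.
eraser = np.array([[30, 30, 30]])
print(model.predict(eraser))
```

In this made-up feature space the dark eraser happens to sit nearest a highlighter example, so the toy model echoes the misclassification the class saw. The broader point the lesson makes stands regardless of which label comes back: the model's world consists only of what it was trained on.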

These activities serve a larger purpose: helping students think beyond the tools themselves and consider who is impacted by their design. “It was especially important for us to emphasize that every student should feel empowered to participate in conversations about how AI is produced and used in society,” Taubman said.

The curriculum encourages students to approach AI not just as future users, but as thoughtful participants in the systems that shape their world.

Closing the Gap: Delivering AI Education in Underserved Schools

Implementing a technology-focused curriculum in public high schools, particularly those in underserved districts, comes with practical challenges. As Mike Taubman and his team at Stanford Digital Education piloted their AI literacy course in schools like North Star Academy in Newark, they encountered a range of structural hurdles: limited device access, school network restrictions, and privacy-related barriers to using online platforms.

“You need to ensure that there are certain IT structures and permissions in place when you’re doing this kind of work with students in high schools,” Taubman explained. “Students need access to laptops, for example, to pursue the Google AI Essentials certificate.”

Even when devices were available, other issues emerged. Some districts had firewalls that blocked necessary websites or policies that restricted students’ use of AI-related tools. Privacy compliance also had to be addressed, especially when asking students to use external platforms tied to corporate providers.

According to Taubman, “Some schools may need workarounds because certain websites or platforms may be blocked.”

To navigate these obstacles, the team developed adaptive strategies that made the program more viable for different school environments. For instance, curriculum components were modular by design, allowing schools to use only what fit their infrastructure. In cases where the Google certification wasn’t immediately feasible, the team focused on discussion-based lessons and off-platform activities.

Despite the barriers, student interest has been high—particularly in schools where access to emerging technology education has historically been limited. “Due to resource constraints, students in Title I schools may be more likely than their peers in wealthier communities to fall behind the AI curve without the kind of balanced, ethical, and intentional interventions our curriculum provides,” Taubman said.

For his students, the course doesn’t just introduce AI—it signals that they have a place in the future it’s helping to shape. “AI is shaping our collective future, but students have the most at stake,” he said. “We owe it to students to ensure they all have access to the kinds of ideas that are in this curriculum.”

Evolving with the Field

As artificial intelligence continues to develop, so too will the expectations placed on educators to make its implications understandable—and relevant—for students. For Mike Taubman and his team at Stanford Digital Education, the curriculum now being piloted is not a finished product but a working model designed to grow with the field.

The team has no intention of mandating a single framework for AI education nationwide.

“Our goal isn’t nationwide adoption but to contribute ideas and perspectives to the broader AI literacy conversation that’s happening around high schools in the United States right now,” Taubman explained.

Their focus remains on iterative refinement through classroom use, ongoing teacher feedback, and exposure to different learning environments.

That flexibility has positioned the program to scale gradually, with several next steps under consideration. One possibility is to release the curriculum as an open-access resource, allowing educators anywhere to adapt and apply it. Another is a comprehensive teacher playbook that offers practical guidance for integrating the material into various subjects, accommodating technical limitations, and facilitating meaningful discussions about AI’s societal impact.

The team also anticipates that standards for AI education, whether at the state or national level, will eventually emerge. Rather than resist those changes, Taubman and his colleagues are building a curriculum that can evolve alongside them.

In its current form, the course provides a notable example of how schools can introduce students to AI in a thoughtful, inclusive, and grounded manner, reflecting real-world complexity. It demonstrates that responsible AI education doesn’t require a specialized lab or an advanced degree; it requires a willingness to ask critical questions, adapt to new challenges, and create space for students to see themselves in the systems shaping their future.

Chelsea Toczauer

Chelsea Toczauer is a journalist with experience managing publications at several global universities and companies related to higher education, logistics, and trade. She holds two BAs in international relations and Asian languages and cultures from the University of Southern California, as well as a dual-accredited US-Chinese MA in international studies from the Johns Hopkins University-Nanjing University joint degree program. Toczauer speaks Mandarin and Russian.