AI and the Future of Learning: Key Strategic Takeaways for Global Education Systems
“AI should be a tool that raises the floor for access while raising the ceiling for rigor—not a shortcut that atrophies the cognitive muscles students need to develop.”
Alex Kotran, CEO & Co-Founder at aiEDU
Artificial intelligence now sits at the center of K–12 education debates. District leaders and policymakers are less focused on whether AI belongs in schools and more on how it should be governed, integrated, and sustained without compromising learning quality. As AI capabilities expand, pressure to act grows alongside concerns about cheating, data privacy, and uneven readiness across districts.
The U.S. education landscape reflects this tension. Federal action elevates AI education as a national priority, teacher organizations roll out large-scale training initiatives, and a small but growing number of schools experiment with AI-enabled instructional models. At the same time, district policies struggle to keep pace, resulting in uneven approaches ranging from cautious pilots to outright restrictions. The uncertainty is not simply technical. It centers on timing and design: whether delaying integration protects instructional integrity, or whether waiting leaves students unprepared for a world in which AI is already pervasive.
At stake is a broader question of readiness. Integrating AI into classrooms is not a procurement decision. It implicates instructional intent, educator capacity, governance, and equity. When AI advances faster than systems’ ability to define its role in curriculum and assessment, it risks fragmenting practice rather than strengthening it. Yet deferring action until policies are fully settled carries its own costs, as students engage with AI tools outside school and labor markets reorganize around new capabilities.
The path forward lies between haste and hesitation. Deliberate system design, which means clarifying instructional purpose, building professional capacity, and establishing guardrails that evolve with use, determines whether AI contributes to durable improvement rather than short-term efficiency gains. To better understand how education systems can navigate this balance, we spoke with Alex Kotran, co-founder and CEO of the AI Education Project (aiEDU), whose work focuses on helping districts define responsible AI readiness for classrooms preparing students for 2035.
Meet the Expert: Alex Kotran, CEO & Co-Founder at aiEDU

Alex Kotran co-founded aiEDU in 2019 after he discovered that Akron Public Schools, where his mom has taught for more than 30 years, did not offer any courses on AI.
Kotran began his career as a community organizer, working on President Obama’s 2012 re-election campaign, and as an appointee under HHS Secretary Sylvia Burwell, where he managed state-level outreach for the Affordable Care Act.
Kotran has a decade of experience in the AI space. In 2015 he joined Opower, one of the first companies to use machine learning to help utility companies reduce energy demand. In 2018, he was hired to launch the social impact arm of H5, a company that pioneered the use of language models in the legal sector. At H5, he built a global judicial training program with the National Judicial College and organized dozens of AI governance summits with organizations including UNESCO, Stanford Law, NYU Law, and the European Commission.
Kotran is a second-generation immigrant from Akron, Ohio. He studied political science and government at the Ohio State University.
AI Readiness as a System Outcome, Not a Product Decision
For many districts, the first impulse is to treat AI as a product problem: identify approved tools, block the rest, and draft rules to manage misuse. Alex Kotran argues that this approach misdiagnoses the challenge. “The most important shift is recognizing that AI readiness is an outcome, not just a tool,” he says. In his view, procurement choices and bans can become a distraction from what schools are ultimately responsible for building.
Kotran frames the core question in terms of student preparation rather than classroom deployment. “The question isn’t ‘how do we use AI in schools?’—it’s ‘how do we ensure students have the knowledge and skills to thrive in a world where AI is everywhere?’” That emphasis shifts attention from tool adoption to curriculum design, teacher capacity, and the competencies students need to navigate AI beyond the school setting.
This is why Kotran resists confining AI instruction to specialized pathways. “AI literacy can’t live in a single elective or computer science track,” he says. “It has to be integrated across core subjects—math, science, English, social studies—so every student encounters it, not just those who opt in.” If AI shapes daily life and work across sectors, he argues, then schools cannot treat it as optional.
Kotran pairs AI literacy with what he calls the “human advantage.” He points to “critical thinking, collaboration, communication, creativity” as capabilities that become more valuable as AI handles more routine cognitive work. “These aren’t soft skills; they’re the skills that will command economic value when AI can handle routine cognitive work,” he says. The implication is that responsible AI integration should strengthen, not dilute, the cognitive and interpersonal demands of schooling.
Building that kind of readiness takes sustained system work. Kotran describes an AI Readiness Framework that defines competencies for “students, educators, schools, and districts,” alongside curriculum designed to integrate into existing instruction. He also emphasizes capacity-building mechanisms that outlast one-off trainings, including formal agreements with districts, cohort-based teacher fellowships, and policy advising aimed at long-term implementation rather than isolated pilots.
This framing clarifies what matters most in the near term. If AI readiness is an outcome, districts cannot measure progress by tool access alone. The more important question is whether teachers and students are building the knowledge and judgment to use AI intentionally. This focus also sharpens the cost of delay, particularly as students adopt AI outside school and districts try to catch up after patterns of use are already entrenched.
When Schools Are Not the Place AI Is Learned
As AI becomes embedded in daily life, a growing share of students’ interaction with these tools happens outside formal learning environments. Alex Kotran argues that this reality has consequences for the formation of norms, skills, and expectations around AI: “AI isn’t waiting for us to figure out our policies,” he says. “Students are already using these tools—at home, on their phones, often without any guidance.”
This shift places schools in a reactive position. Rather than shaping how students learn to use AI, districts risk responding after habits are already established. Kotran points to evidence suggesting that student use is more widespread than many educators assume. “A recent survey found that more students are using AI for schoolwork and companionship than most educators realize,” he notes. In the absence of structured instruction, students are left to infer appropriate use on their own, often without clarity about quality, accuracy, or limits.
The implications extend beyond academic integrity. Kotran situates this dynamic within broader labor-market changes already reshaping expectations for entry-level workers. “We’re seeing the largest gap between youth unemployment and overall unemployment in modern history,” he says, pointing also to layoffs and restructuring driven by AI adoption across industries. For students approaching graduation, the challenge is not simply exposure to AI tools, but understanding how to work alongside them with judgment and adaptability.
This context complicates the idea that waiting protects students. While governance frameworks evolve, students continue to encounter AI in informal, unstructured ways. Kotran stresses that preparation cannot be deferred to a later stage. “The students entering the workforce in five years need preparation now,” he says, “not after we’ve achieved policy perfection.” The risk is not premature exposure, but unmanaged exposure.
At the same time, Kotran is careful to distinguish between engagement and acceleration. He emphasizes that moving forward does not mean abandoning caution. “I’m not advocating for reckless adoption,” he says, underscoring the need for guardrails around data privacy, academic integrity, and age-appropriate use. The difference lies in whether those guardrails are informed by practice or developed in isolation.
Kotran points to districts that integrate AI gradually while refining governance alongside implementation. “You can move forward thoughtfully while policies evolve,” he says. In these cases, early use generates insight into what students and teachers need, allowing policy to be shaped by evidence rather than speculation. The alternative, he suggests, is a system that reacts to external change instead of guiding it.
Strengthening Instruction Without Weakening Thinking
The question of where and how AI belongs in classrooms ultimately turns on the quality of learning. Alex Kotran describes this as the central instructional tension facing schools. “This is the central tension, and I think about it constantly,” he says. The risk is not that AI exists in classrooms, but that it is used in ways that replace cognitive effort rather than deepen it.
Kotran frames the goal succinctly: “AI should be a tool that raises the floor for access while raising the ceiling for rigor—not a shortcut that atrophies the cognitive muscles students need to develop.” In practice, this means distinguishing between uses of AI that support thinking and those that circumvent it. Writing offers a clear example. “There’s a reason we teach writing,” he explains. “It’s not just about producing text, it’s about organizing thought. The act of writing is thinking.”
From this perspective, unrestricted use of generative AI can undermine learning if it replaces the processes students are meant to practice. “If students outsource that process entirely to AI, they lose the metacognitive benefits,” Kotran says. At the same time, he argues that AI can serve as a powerful learning support when used intentionally. Tools that help students brainstorm, receive feedback on drafts, or understand why an argument is not working can strengthen learning when aligned with instructional goals.
Intentionality, in Kotran’s view, is the deciding factor. “The key is intentionality,” he says. Teachers must be explicit about when AI use supports learning and when it bypasses it. That clarity cannot be assumed. It requires professional development that helps educators understand both what AI can do and where its limitations matter most for cognition and skill development.
This shift also has implications for assessment. Kotran argues that traditional formats often fail to capture how students think in an AI-enabled environment. He points to alternatives such as oral defenses of written work, collaborative projects, and process portfolios as ways to surface reasoning rather than output alone. These approaches allow students to use AI as part of learning while preserving spaces where independent understanding must be demonstrated.
Emerging research underscores the urgency of this careful balance. Kotran references recent findings suggesting that heavy use of large language models may reduce cognitive engagement and brain connectivity. “We have to take that seriously,” he says. The response, however, is not prohibition. “The goal isn’t to ban AI from classrooms,” Kotran explains, “it’s to use it in ways that amplify human capability rather than replace human effort.”
By centering cognition rather than convenience, Kotran argues that AI can support more demanding forms of learning rather than dilute them. This framing reinforces a broader theme running through AI integration debates: quality depends less on the presence of technology than on the instructional choices guiding its use.
Equity, Governance, and the Capabilities of a 2035 Graduate
Questions of instructional quality are inseparable from questions of equity and governance. Alex Kotran cautions that access to AI in K–12 education cannot be reduced to devices or software licenses. “Devices and software are necessary but nowhere near sufficient,” he says. For AI integration to be equitable, systems must address how educators are prepared, how curriculum is designed, and how communities are engaged in shaping what students learn.
Teacher preparation remains central. Kotran argues that without sustained support, even well-resourced initiatives falter. “You can put AI tools in every classroom, but if educators don’t understand how to use them effectively—or feel anxious about a technology that changes monthly—those tools will gather dust or be misused,” he explains. One-off trainings are insufficient. What is required instead is ongoing professional learning that builds confidence and instructional judgment over time.
Curriculum relevance is equally important. Kotran emphasizes that AI literacy must connect to students’ lived experiences and local contexts. Drawing on work in rural and Indigenous communities, he notes that “generic AI content doesn’t land the same way in a tribal school in Arizona as it does in a suburban district in California.” Effective integration depends on local educators shaping how AI concepts are taught so they resonate with community needs and values.
Equity also depends on reaching communities that are often left out of large-scale reform efforts. Kotran stresses the importance of proactive engagement rather than passive access. Through partnerships with local organizations, curriculum and training can be adapted and delivered in ways that reflect how communities actually learn. “The digital divide was never just about hardware,” he says. “The AI divide won’t be either.”
Looking ahead to 2035, Kotran identifies a set of capabilities that current classrooms underemphasize. The first is the ability to learn and relearn continuously. As tools evolve, specific technical knowledge becomes less durable than the capacity to adapt. “What matters is their ability to pick up new skills quickly, adapt when tools change, and stay curious when the ground shifts beneath them,” he says.
Judgment about when to use AI, and when not to, is another critical skill. Kotran argues that knowing how to prompt a system is a baseline competency. “The harder skill is knowing when AI output can be trusted, when it needs verification, when a task requires human judgment, and when to set the tools aside entirely,” he explains. Developing that judgment requires domain knowledge, critical thinking, and metacognitive awareness that schools are only beginning to prioritize.
Finally, Kotran points to the growing value of human-to-human collaboration. “AI can’t navigate the friction of real relationships—the miscommunication, the negotiation, the trust-building,” he says. As routine cognitive work becomes automated, interpersonal skills and collaborative problem-solving become sharper differentiators. Yet many classrooms remain organized around individual work and content delivery rather than shared inquiry and productive struggle.
AI’s expanding presence in classrooms brings long-standing questions about educational purpose into sharper focus. As schools look toward 2035, the challenge is not simply to incorporate new tools, but to design learning environments that prioritize judgment, collaboration, and resilience alongside technical fluency. When governance, curriculum, and assessment are aligned with these aims, AI can strengthen learning rather than narrow it. When they are not, even advanced technologies are unlikely to deliver lasting educational improvement.
