Is Duolingo’s AI-First Course Expansion the Future of EdTech or a Warning Sign?

“If you’re optimizing for engagement as opposed to optimizing for student learning… that can lead you to places that maybe are detrimental.”

Michael Trucano, Nonresident Fellow – Global Economy and Development, Center for Universal Education, Brookings Institution

Duolingo launched 148 new courses created using generative AI, marking one of the most aggressive moves yet in AI-driven education. It is a significant pivot for the popular language learning platform, which built its reputation on a playful user experience and gamified lessons. And its “AI-first” development signals more than product expansion. It raises deeper questions about how educational content is created, who creates it, and what may be lost when scale becomes the priority.

The logic is straightforward. AI accelerates content production, enabling platforms to reach more users in multiple languages with reduced human input. But as these tools become central to how lessons are designed and delivered, concerns are growing about quality, cultural relevance, and the role of educators. Can AI match the value and nuance of lessons crafted by human experts? And if not, why are companies so quick to rely on it?

What is happening at Duolingo reflects a broader shift in education technology. Increasingly, platform performance and efficiency drive product decisions, even when those decisions affect how people learn. For Michael Trucano, nonresident fellow at Brookings and longtime advisor to education ministries around the world, this shift brings real consequences. The future of AI in education depends not only on innovation but on thoughtful decisions about how and why it is used.

Meet the Expert: Michael Trucano, Nonresident Fellow – Global Economy and Development, Center for Universal Education at the Brookings Institution

Michael Trucano is a nonresident fellow with the Center for Universal Education at the Brookings Institution, where he explores issues related to effective and ethical uses of new technologies in education. Current areas of inquiry include the use of digital educational credentials, generative artificial intelligence in education, and, more broadly, emerging edtech policies, initiatives, and institutions after the pandemic.

Trucano was previously at the World Bank, where for 26 years he provided policy advice, research, and technical assistance to governments seeking to utilize new technologies in their education systems. In this role, he served as an adviser to, evaluator of, and/or participant in large-scale educational technology initiatives in over 70 countries, including China, India, South Korea, Uruguay, the United States, and numerous countries across Africa, the Middle East, and Eastern Europe. He most recently served for eight years as the World Bank’s global lead for technology and innovation in education, co-founding its edtech team. This team coordinated assistance to governments worldwide as they deployed remote learning programs in response to extended school closures during the Covid-19 pandemic.

A Model Built for Scale, But at What Cost?

Duolingo’s decision to lean into generative AI is driven by scale. The platform’s leadership made this clear when announcing their “AI-first” goal of creating large volumes of content across numerous language pairs.

Trucano sums up the company’s messaging around the launch this way: “Duolingo is all about scale. We need to create a massive amount of content, and doing that manually doesn’t scale.”

This is the logic behind the AI-first shift. Rather than relying on language specialists to painstakingly design every lesson, Duolingo can now rapidly produce course material for combinations like Spanish to Norwegian or German to Hindi. It is an approach that aligns well with platform growth, user acquisition, and investor interest. But this very alignment raises a key question: who is the company really building for?

Trucano points out that the messaging around this rollout seemed primarily directed at staff and shareholders, not the learners themselves. “The audience was his employees. I also assumed that the audience was investors or potential investors… and just the immediate, almost visceral reaction of so many learners to say, you know, this is something we like using… and this just doesn’t seem right to us.”

This mismatch between corporate strategy and user backlash is not unique to Duolingo.

“I have noticed a number of companies in the AI edtech space… looking to speak to investors and their employees. And then their customers slash their learners sort of third.” The pattern reflects a broader shift in how education platforms operate, where pedagogical value can become secondary to growth narratives.

The rollout, arriving amid a broader AI boom, raises the question of whether efficiency-driven design comes at the cost of user trust and long-term learning impact.

AI Slop and the Risk of Quantity Without Context

The promise of generative AI in education is volume paired with personalized, “user-specific” learning. But that volume brings its own risks, including a loss of local context when lessons are tailored for users around the world. As companies race to create vast libraries of lessons, vocabulary drills, and grammar exercises, a core question emerges: who is ensuring that any of it is usable, accurate, or appropriate?

Trucano does not mince words on this point. “The first one is the obvious one,” he says, “there is the potential just to create a lot of AI slop. And I certainly see a lot of that.”

The term captures a growing concern among educators and policymakers: that AI-powered platforms may flood learning systems with material that appears sophisticated on the surface but fails to meet meaningful standards for instructional design.

This concern becomes especially pressing for governments and institutions with limited capacity.

“Talk to people in ministries of education who have startups coming to them and have said, we can make available to you learning activities that touch on every one of your curricular objectives in your language. And ministers say, what? I mean, they just can’t deal with that, and the quality control on that is super difficult.”

Without a mechanism for vetting, institutions are left with a deluge of content and no clear way to assess its fit. Even when the source material is open and legally available, deeper issues for user context and appropriateness remain.

“Let’s say that they’re doing it from open educational resources that were created in one place, in one language, with a whole set of assumptions behind them. And they’re saying, well, we can just translate them and have some sort of mapping to a local curriculum… maybe. Maybe not.”

AI-generated content may replicate grammar rules or vocabulary banks, but it often fails to reflect the lived experience, classroom norms, or pedagogical expectations of different cultures. In other interviews, many experts have suggested that generative AI should serve as a complement to human expertise rather than a replacement, whereas Duolingo’s “AI-first” approach leans toward the latter. The company declined to provide a comment.

Trucano emphasizes that translation is not the same as contextualization. “Actually making sure the materials are contextually relevant and map to learning objectives – that is a real concern.”

What looks like efficiency on paper can collapse in practice if the educational system receiving the material lacks the tools, time, or staff to tailor it to local needs.

“The challenge,” Trucano adds, “is that they don’t have tools on the other side [to support quality control and review].”

In rushing to build at scale, platforms risk overwhelming the very systems they aim to serve, or producing material that misses the mark on learners’ ages, cultures, contexts, and objectives.

When Gamification Isn’t Always Learning

Duolingo’s model is built around engagement. Short lessons, streak rewards, and animated nudges from its owl mascot are all designed to keep users returning. On the surface, this gamification helps reinforce learning habits. But as the company moves deeper into AI-driven content and engagement becomes a central design objective, the line between habit and genuine mastery becomes harder to draw.

Trucano articulates the platform’s rationale: “Luis von Ahn says, and he’s not wrong, for us to have impact at Duolingo, first, we need engagement. If they don’t use our app, they won’t learn from our app.” This framing reflects a broader shift in edtech, where sustained usage is treated as a leading indicator of educational value.

But there are limits to that assumption.

“We’ve had over a decade of experience in promoting user engagement materials in social media that haven’t led to lots of good things,” Trucano notes. “If you’re optimizing for engagement as opposed to optimizing for student learning… that can lead you to places that maybe are detrimental.”

This tension becomes more pressing in an AI environment. Tools trained on engagement data may prioritize content that captures attention, not necessarily what supports cognitive growth. Flashy, fast, and frequent interactions can feel satisfying to the user without resulting in real skill development.

The concern is not that gamified elements are inherently bad, but that they can become the goal rather than the method. When design choices reward speed or repetition over challenge and comprehension, learning risks being flattened into entertainment. For Trucano and others, this is where the core danger lies. Optimizing for engagement may drive usage numbers, but without clear standards for measuring progress, it becomes increasingly difficult to determine whether learners are actually learning.

Policy, Research, and Human Judgment Still Matter

As AI tools begin to reshape how educational content is created, delivered, and scaled, the question is no longer whether change is coming. It is how institutions respond to it and guide it.

“The challenge is coming—like it or not,” Trucano says.

With this in mind, policymakers, educators, and platform designers are being pushed to reconsider what students should learn, how they should be assessed, and where teachers remain essential.

Part of that response will involve infrastructure that does not yet exist. Many education systems still lack structured, localized datasets that can inform responsible AI development.

“We need much more curated, structured data sets of relevance to education,” Trucano explains. Without that foundation, AI models will continue to rely on materials from a narrow set of contexts. Even well-trained systems risk reinforcing foreign assumptions about how learning should look.

This moment also requires patience. “It’s tough if you’re researching something that’s changing so quickly,” Trucano says. Many AI tools are being deployed before their impact can be thoroughly evaluated, making it difficult for schools or ministries to make informed decisions. At the same time, companies are generating enormous volumes of what Trucano calls “digital exhaust”—data from teaching and learning processes that, if analyzed properly, could offer new insights. “There are opportunities perhaps to learn from some of that.”

Ultimately, efficiency and scale alone do not guarantee that learners improve. In some cases, they may simply offer a cheaper way to deliver the same results, or merely the perception of them. The real challenge lies in knowing when speed is useful and when it compromises the deeper values education is meant to protect.

Chelsea Toczauer

Chelsea Toczauer is a journalist with experience managing publications at several global universities and companies related to higher education, logistics, and trade. She holds two BAs in international relations and Asian languages and cultures from the University of Southern California, as well as a double-accredited US-Chinese MA in international studies from the Johns Hopkins University-Nanjing University joint degree program. Toczauer speaks Mandarin and Russian.