Is AI Making Education Better or Worse?

Every generation gets a technology that was supposed to fix education. Television in the 1950s. Personal computers in the 1980s. Tablets in the 2010s. Each time, the promise was enormous. Each time, the children who benefited most were the ones who already had the most.
Now it is artificial intelligence. And the central question of AI in education equity is whether this time will be different.
The honest answer is: it could be. But only if the people building these tools actually think about the kids who have the least, not just the ones who are easiest to reach.
What AI Can Do That a Classroom Cannot
The single most powerful thing AI brings to education is personalisation at scale. A good human tutor adapts in real time, adjusts difficulty, notices confusion, and meets a student exactly where they are. That kind of one-to-one attention has always been a luxury. AI can make it available to anyone with a phone.
The evidence is starting to back this up. A randomised trial in India found that students using adaptive learning software progressed at more than twice the rate of their peers in traditional classrooms. A pilot programme in Nigeria produced learning gains in just six weeks that exceeded 80% of education interventions typically used in developing countries. These are not marginal improvements.
AI also crosses language barriers in a way that printed textbooks and standardised curricula never could. A child whose first language is Swahili or Hausa no longer has to learn through the filter of French or English. The tools can meet them where they are, literally. That matters enormously for the communities we work with.
It is part of why we support tools like Kalulu, a free mobile app developed at the Collège de France that uses research-backed methods to improve children’s reading speed and comprehension. Kalulu is expanding to Swahili, Portuguese, Spanish, and English, because the children who most need better literacy tools are rarely the ones being designed for first.
The Risk to Equity That Nobody Wants to Talk About
Here is the uncomfortable part. Equity in AI education is not automatic — without deliberate effort, AI does not help the most vulnerable children. In many cases, it risks making things worse.
The Brookings Institution puts it plainly: privileged students tend to use AI to enhance their thinking, while disadvantaged students risk using it to replace it. The same tool, two completely different outcomes, depending on the context and support around it. A child with a confident teacher, a stable internet connection, and a device of their own will get something very different from an AI tutor than a child sitting in an overcrowded classroom in a building with intermittent power.
There is also a deeper problem. Almost all major AI systems in education are built in the Global North, trained on data that largely reflects the Global North, and sold into markets where returns are highest. UNICEF warns that without deliberate investment in bridging the digital divide, and without tools that genuinely reflect different languages, cultures, and contexts, AI could entrench educational inequality rather than reduce it.
Some researchers have raised a subtler concern: that passive reliance on AI can gradually erode the critical thinking and problem-solving skills that education is supposed to build. A child who is always handed an answer never quite learns how to find one. The risk is not that AI is unintelligent. It is that it might make us less so.
What Equitable AI in Education Actually Looks Like
The projects that seem to work share a few things in common. They start from the child, not the technology. They involve teachers as partners, not as obstacles. And they are built for the specific context they are trying to serve, not adapted from something designed for a different world.
No Black Boxes, a nonprofit we support, is a useful example of this. Rather than handing students a finished AI system to interact with, their approach sends low-cost kits to students in over 60 countries so they can physically build the technology themselves — microscopes, brain circuits, sensors. The point is not to use the tool. It is to understand it. No technology should feel like magic when you can understand how it actually works.
That distinction matters. Children who understand how AI works are in a very different position to children who simply consume it. One group has agency. The other has a dependency.
The World Bank frames it as a question of enabling conditions: infrastructure, connectivity, devices, and above all, teachers. AI can close the gap between the quality of education a rich child receives and the quality a poor child receives, but only if the human layer around it is strong enough to make it work. Without that layer, the gap widens.
The Question Worth Asking
AI will be part of how the next generation learns. That is already decided. What is not decided is who gets to benefit from it, and on whose terms it is built.
The children we work with in Madagascar, Zimbabwe, Rwanda, and elsewhere are not waiting for a perfect tool. They are sitting in classrooms right now with teachers who are overloaded, with textbooks that are years out of date, and with a genuine hunger to learn. AI, used well and built honestly for their context, could genuinely change what is possible for them.
But “used well” is doing a lot of work in that sentence. It requires investment in the infrastructure most of these communities are still missing. It requires tools that speak their languages, reflect their realities, and treat them as the intended users, not an afterthought. And it requires putting teachers at the centre of any serious attempt to make it work.
Equity in AI education will not happen by default. Technology has never fixed education on its own. People have. The best role AI can play is to give those people more time, more insight, and more reach. Whether that happens depends less on the technology than on the choices we make about who it is actually for.