There is a particular worry that runs beneath all the other worries in this series — beneath the data collection, the algorithms, the predators, the deepfakes, the mental health crisis, the business models. It is the worry that we have already lost something that cannot be recovered. That the generation of children growing up inside AI-saturated environments has been shaped by those environments in ways that we will only understand decades from now, when the developmental consequences have fully arrived.
This worry is not irrational. The previous nine pieces have documented real and serious harms, and we have been honest about the fact that the institutions responsible for protecting children — regulatory, legislative, technological — have responded far more slowly than the systems doing the harm have moved. A child who grew up on recommendation-driven social media between 2012 and 2022 did not wait for the litigation to be filed before experiencing its effects.
But the worry, if held too tightly, becomes a trap. And the evidence — including the evidence we have reviewed in this series — suggests that the story is not simply one of irreversible damage. It is also a story about what children can learn when someone takes the trouble to teach them.
What AI Literacy Actually Means
The OECD and European Commission, in their 2025 AI literacy framework for primary and secondary education, organized the domain around four capacities: engaging with AI, creating with AI, managing AI, and understanding its design. These are not, at their core, technical capacities. They are critical capacities — ways of relating to a technology that give the user some degree of agency rather than leaving them entirely acted upon.
California has gone furthest legislatively, with a law mandating AI literacy integration across mathematics, science, and history-social science. Required topics include how AI works, its applications, its limitations, and — crucially — its ethical dimensions and societal impacts. The National Literacy Trust found that 82 percent of teachers agreed students should be taught to engage critically with generative AI tools. The coverage, however, remains patchy, and teacher preparation is inadequate. The gap between what the evidence recommends and what children are actually being taught is wide.
For parents who cannot wait for the curriculum to catch up, the most important AI literacy concept to convey is not technical but phenomenological: what does it feel like to be acted upon by an algorithm? The recommendation engine feels like preference. The deepfake feels like a familiar voice. The AI chatbot feels like understanding. In each case, the feeling is real and the source is not what it appears to be. Teaching children to notice the gap between feeling and source — to ask, habitually, "is this real, and who made it, and why does it want me to feel this?" — is the beginning of media literacy in an age of intelligent machines.
Edutopia identifies five core moves for critical literacy in the AI age: sourcing information, checking consistency across multiple sources, lateral reading (opening new tabs to verify a claim rather than reading deeply within a single source), examining who benefits from a claim, and practicing productive skepticism rather than blanket distrust. These are habits of mind rather than discrete skills, which means they develop through practice, through modeling, through repeated application across many contexts, rather than through a single lesson.
When to Start
Researchers from Harvard's Graduate School of Education emphasize that the basic habits — source evaluation, questioning, the idea that not everything you see is what it appears to be — can and should begin before kindergarten. Not in a technical way. In the way that all foundational things begin: through story, through conversation, through a parent pointing at something on a screen and saying, "I wonder who made this, and why they made it."
The MIT Media Lab finding that seven-year-olds attribute real feelings and personality to AI agents — and are therefore more susceptible to manipulation by systems presenting themselves as human — suggests that the window for building healthy skepticism opens earlier than most parents assume. A seven-year-old who knows that the character in the educational app does not actually have feelings, and is not actually their friend, is no less charmed by the app. But they are developing a different relationship to it than the child who has been allowed to remain unambiguously trusting.
This is a delicate balance. Parents who create children who are suspicious of everything — who cannot trust any information source, who approach every digital interaction with corrosive cynicism — have not protected their children. They have created a different vulnerability. The goal is discernment, not suspicion. It is the capacity to distinguish, developed over years of guided practice, between the things that deserve trust and the things that are engineered to produce the feeling of trust.
The Hardest Conversation
The research on what families need to navigate this environment points, again and again, to relationship. Not technology, not policy, not software — relationship. The children who disclose when something goes wrong online are children who believe disclosure will be met with support rather than punishment. The children who develop critical habits of mind around AI and media are children in families where adults model those habits and invite them into the process.
This sounds abstract until you consider what it requires in practice. It requires a parent who is willing to sit with a teenager and say: I don't fully understand how TikTok's algorithm works, but I know it is designed to keep you watching. Can we look at this together? It requires a family where the phrase "I saw something online that scared me" does not produce panic or prohibition, but curiosity and conversation. It requires adults who have spent enough time in the environments their children inhabit to understand what those environments feel like — not just what they contain.
This is demanding. It competes with everything else that parenting demands. And it is made harder by the fact that the environments themselves are designed to resist this kind of scrutiny — to be so immediately engaging, so emotionally absorbing, that sustained reflection feels like work in comparison to just letting the feed continue.
But the parents who have found a way to do it — who have built the kind of relationship where their children bring them the hard things — have given their children something that no platform will ever provide and no regulation will ever guarantee: an adult who can be trusted, in a world where the most sophisticated systems ever built are optimized to produce the feeling of trust without its substance.
What This Series Has Tried to Do
We began, in Part 1, with the observation that AI has moved into family life without invitation and without disclosure. We examined, in Parts 2 and 3, what the systems know about our children and what they do with that knowledge. Parts 4, 5, and 6 documented the most serious harms: the mental health crisis, the synthetic media threats, the new architecture of child exploitation. Part 7 turned to education, Part 8 to the business models underneath everything, and Part 9 to what the evidence shows about protection.
What runs through all of it is a single uncomfortable truth: the problem is the gap between the sophistication of the systems acting on our children and the awareness of the families those children belong to. Not the technology itself, which is genuinely neutral in ways that its applications often are not. Not the children, who are doing the best they can inside environments they did not design. The gap — the information asymmetry between what platform companies know about what their systems do and what parents know — is where the harm lives.
Closing that gap is the work. It is parenting work, and advocacy work, and regulatory work, and educational work, and it will take longer than any of us would like. But it is not impossible. The families that navigate this environment most successfully are not the ones waiting for the systems to change. They are the ones that have decided not to wait.
The harder question — the one this series cannot answer, because no one can answer it yet — is whether enough families will decide not to wait before the consequences of having waited become irreversible. That is the question worth carrying forward from everything we have examined here.
This is Part 10 of "Raising Children in the Age of Intelligent Machines," a 10-part series from PeopleSafetyLab on the intersection of AI and family safety.
About This Series
"Raising Children in the Age of Intelligent Machines" is a 10-part investigation into the intersection of artificial intelligence and family safety, published by PeopleSafetyLab. The full series:
- The Uninvited Guest
- The Shadow Your Child Casts Online
- The Machine That Knows Your Child Better Than You Do
- The Anxiety Engine
- The Synthetic World
- Predators in the Machine Age
- When Machines Teach
- What the Tech Giants Won't Tell You
- Building Your Family's Firewall
- Raising Wise Children in an Intelligent World