Alphabet Inc.’s Google faces demands from child development experts to ban videos created with artificial intelligence from being shown or recommended to young viewers on YouTube and YouTube Kids.
More than 200 children’s specialists, advocacy groups and schools sent a letter to Google CEO Sundar Pichai and YouTube CEO Neal Mohan on Wednesday, expressing concern over what they see as a lack of substantive content in many AI-generated YouTube videos presented as educational. In the letter, the advocates also criticized the low quality of children’s content churned out by AI generators, and the rise of creators on Google’s YouTube video service who use artificial intelligence to produce clips intended to profit at the expense of the world’s youngest and most impressionable viewers.
Child safety advocates fear that AI-generated material, sometimes called “AI junk,” will affect children’s attention spans and their ability to distinguish between what’s real and what’s not real. They further argue that screen time is replacing real-world activities that are essential for children’s emotional and social development.
“There is a lot we don’t know about the consequences of AI-generated content for children,” the group wrote. “YouTube participates in this uncontrolled experiment by promoting AI-generated content without research demonstrating its benefits and without taking into account the principles of child development that tell us that it is probably mostly harmful.”
The letter was signed by social psychologist Jonathan Haidt, whose best-selling book “The Anxious Generation” fueled a global movement to combat the harms that social media and smartphones cause to young people, as well as by children’s rights groups such as Fairplay and the National Alliance to Advance Adolescent Health. The American Federation of Teachers and several schools also signed it.
“We have very high standards for YouTube Kids content, including limiting AI-generated content in the app to a small group of high-quality channels,” YouTube spokesperson Boot Bullwinkle said in an email, adding that parents also have the option to block channels. “At YouTube, we prioritize transparency when it comes to AI-generated content, labeling content from our own AI tools and requiring creators to disclose realistic AI-generated content. We are always adapting our approach to stay current as the ecosystem evolves.”
AI-generated videos have become increasingly popular on YouTube, especially those aimed at babies and toddlers. Some creators have found that outsourcing this work to an AI system makes it much easier and cheaper, and have even begun sharing tutorials on how to build a business around producing videos for babies and toddlers. (Bullwinkle stated that “mass producing low-quality content is not a viable business strategy on YouTube, as our monetization systems and policies are designed to penalize this type of spam.”)
In January, Mohan said that “managing AI errors” and “ensuring YouTube remains a place where people feel comfortable spending their time” are top priorities for the company in 2026. However, YouTube has also argued that not all content created with AI is poor quality and that, when done well, creating with AI can even be positive.
YouTube requires creators to label “altered and synthetic content.” The letter’s signatories argued that these labels are “unlikely to be understood by preliterate children, who are the targets of much of this AI-generated garbage.”
In March, Google announced an investment in Animaj, an AI animation studio focused on creating children’s content on YouTube, as part of an effort to improve the quality of its offering for younger users. A Google executive involved called it “a true model for the future,” while child safety advocates criticized Google and Animaj for involving “babies and toddlers who shouldn’t have any screen time.” They urged YouTube to stop “all investment in the creation of AI-generated videos for children.”
Wednesday’s letter came amid other outside efforts to change how YouTube works. In March, a jury in a landmark trial over social media addiction found Google and Meta Platforms Inc. liable for harming a young user with products designed to keep her hooked. Both companies announced that they would appeal the verdict. Meanwhile, plaintiffs, consumer advocates and lawmakers are pressuring the companies to modify some of their most lucrative operational functions, including their content algorithms.