Google has announced plans to roll out its Gemini artificial intelligence (AI) chatbot to children under the age of 13. The launch, set to begin in the coming week, will initially be available in the United States and Canada through Google’s Family Link accounts.
While this expansion increases access to AI tools, it also introduces serious concerns. Even if children are restricted from social media, parents may still find themselves battling new technologies in an ongoing effort to safeguard their kids. An article published on The Conversation website argues that a digital duty of care must urgently be imposed on tech giants like Google to address these emerging challenges.
Family Link allows parents to manage children’s access to content and apps, including YouTube. Creating an account for a child requires inputting personal data, such as their name and birth date, which raises potential privacy issues. Google claims, however, that children’s data won’t be used to train the AI system.
Notably, access to the chatbot is enabled by default. This means parents must manually disable it if they want to restrict use. Young users will be able to interact with Gemini through text prompts and image generation. Google admits that the system may “make mistakes,” which makes evaluating the reliability of its responses essential. Since chatbots can produce incorrect or fabricated information—known as “hallucinations”—children using the tool for homework or research will need to verify content with trusted sources.
What type of information can children expect?
Unlike search engines, which provide original materials for review—such as news articles or academic sources—generative AI works differently. It identifies patterns in its training data and generates new text or images in response to user prompts. For instance, if a child asks it to “draw a cat,” the system will produce an image by combining visual patterns like whiskers, ears, and tails. Understanding this distinction between search engine results and AI-generated content is difficult for children, however. Studies show that even adults and professionals, such as lawyers, have been misled by AI-generated misinformation.
While Google says Gemini includes safeguards to prevent inappropriate or harmful outputs, these filters might create unintended issues. Blocking words like “breasts” to avoid exposure to sexual content could also hinder access to accurate, age-appropriate health information, such as learning about puberty.
Moreover, many children are adept at using technology and often find ways to bypass system restrictions. Parents must go beyond relying on built-in protections. They should review AI-generated outputs and actively help children evaluate accuracy and understand how the technology functions.
What specific risks do AI chatbots pose to children?
In Australia, where the Gemini chatbot is expected later this year, the eSafety Commissioner has issued a safety advisory. It warns that AI systems—especially those simulating personal relationships—can distribute harmful content, blur reality, and offer dangerous advice. Young users, the advisory notes, may lack the critical thinking skills to recognize or respond appropriately to manipulation by a machine.
The author of this article recently studied various AI chatbots, including ChatGPT, Replika, and Tessa. These systems imitate human interaction by mirroring emotional and social norms—what psychologists call “feeling rules.” These are the same unwritten expectations that prompt us to say “thank you” or “I’m sorry.” By replicating such behaviors, chatbots are designed to win user trust.
This human-like interaction can confuse children, who might think they’re speaking to a real person or accept misleading content as truth. The illusion of social connection adds a layer of risk, particularly for young users who aren’t developmentally prepared to differentiate between human and machine responses.
What can be done to protect children when using AI chatbots?
Some countries are moving to ban social media use for children under 16. While such policies may seem protective, they do not account for the risks posed by generative AI tools like Gemini, which are not categorized as social media. Parents and children alike must be equipped with digital literacy skills and taught how to use AI technologies safely and responsibly. Restricting access alone won’t solve the problem—the key lies in education and awareness.
By Nazrin Sadigova
Source: caliber.az