Tuesday, March 17, 2026

‘A child’s companion should be another child, not AI’: Jo Barnard on AI toys

by Carbonmedia
“Children are still developing socially and cognitively.” That is why Jo Barnard, founder of the London-based design company Morrama and a member of the UK Design Council, believes the new category of AI-powered toys is an emerging area of concern.
From conversational companions to emotion-sensing gadgets, these toys promise smarter play for children. However, researchers and designers contend that the technology may be arriving even before we fully comprehend its impact on children. Barnard is a key figure at the forefront of this conversation through her Mindful AI initiative, a framework that prescribes restraint, context, and child-centric design. 
In an interview with indianexpress.com, she explained why one of the biggest misconceptions in designing AI for kids is treating them like scaled-down adults. “They learn by mimicking and can absorb new behaviours very quickly. They’ve got no way of critically thinking about whether something is fact or fiction or socially appropriate.” This, Barnard feels, makes even basic interactions with AI fundamentally different. The designer explained that voice recognition systems continue to struggle to interpret children accurately, and when combined with their developmental stage, the risks could multiply.

AI toys misreading emotions
The inability of AI toys to respond appropriately to children’s emotions has stirred concern. A recent Cambridge University study revealed how some AI systems misread distress or offered dismissive responses. According to Barnard, this is not a minor flaw; rather, it points to a deeper limitation. “Intelligence without context can be dangerous,” she said, warning that such interactions can “confuse their social development”.
At the same time, overly empathetic responses can also backfire. Children don’t need constant validation or emotional deep-dives. “Kids can be sad one minute and happy the next,” she explained, emphasising the need for balance rather than extremes.
Most AI toys today go well beyond pre-programmed responses. These furry AI-powered companions can listen, interpret, and generate new answers in real time, creating something that feels like a real, responsive playmate. This, Barnard feels, is precisely the problem. “It’s hard to understand what the purpose of that is. It’s a companion. But as a child, their companions should be kids of their age.”
However, unlike human relationships, AI companions are designed to be patient, agreeable, and engaging at all times. Barnard revealed that this can lead to unhealthy attachment and distorted expectations of real-world interactions. “You can be horrible to a chatbot and it will still want to love you… That doesn’t work in reality.”  


AI companions are designed to be patient, agreeable, and engaging at all times. (AI-generated image for representation: Gemini)
Beyond emotional risks, there are growing concerns about cognitive development, largely because AI systems are often designed to maximise engagement, pushing users to keep interacting. For children, this may create dependency. “There becomes a dependency… it’s very, very difficult for them to stop,” Barnard said. More worrying, according to Barnard, is the impact on thinking itself. Studies have shown that over-reliance on AI tools can reduce cognitive effort. For children whose brains are still developing, this could mean those abilities never fully form. “If they’re offloading their thinking… those parts of their brain just stop developing,” she explained.
The Context Gap
At the heart of these issues surrounding AI toys is what Barnard calls the ‘context gap’. The real world is complex, nuanced, and unpredictable, and humans learn to navigate it through lived experience. AI, by contrast, operates on limited inputs. “It cannot possibly have access to all of the stimulus around a child,” she said, meaning its responses are often incomplete or inappropriate. The entrepreneur feels that this not only risks poor guidance but also reduces opportunities for creativity and independent problem-solving, both of which are core elements of childhood.
Since AI is here to stay, the solution according to Barnard lies in design. “The way that an object is designed determines how we interact with it,” she said. Most of today’s tech products are built to capture and hold attention. However, that approach is fundamentally at odds with children’s needs, and instead, Barnard advocates for bounded, intentional experiences. “We should offer curated, boundaried experiences,” she said, rather than chasing “endless possibilities” which are difficult to control safely.
Barnard’s Mindful AI concepts reflect this philosophy. These include tools that generate drawings for kids to colour, devices that prompt family conversations, and systems that encourage creativity without constant interaction. “These are deliberately limited,” she explained, adding that AI should “add a sense of magic… but in a limited way.”


Even as the AI toy market expands rapidly, with companies racing to add more features and capabilities, Barnard warned that this race could backfire. “The market is flooded with products competing for attention… risking overstimulation and, perhaps more concerning, attachment,” she said. She argued that if left unchecked, the consequences could mirror those seen with social media: addiction, reduced attention spans, and eventual regulatory crackdowns. “We may well see… bans put in place,” she cautioned.
When asked who should be responsible for ensuring children’s safety in AI products – developers, regulators, or parents – Barnard argued that responsibility cannot fall on parents alone. “It can’t be on the parents because they don’t understand,” she said, adding that developers, who understand the technology best, must take the lead by working alongside regulators to establish clear standards. Transparency is equally critical. “If a toy collects voice data, they need to say so and explain where it goes,” she added.
Barnard is not against AI; in fact, she sees enormous potential if applied thoughtfully. The goal is not to banish AI from children’s lives but to ensure it helps in their development rather than replacing vital human experiences. “Children growing up right now are AI natives. Our job is to ensure this intelligence supports their creativity, calm, and agency… rather than fostering dependency.”

 
