Ana Catarina De Alencar
Affective technology in toys: what is being proposed?
Mattel has just announced a partnership with OpenAI to transform classic toys like Barbie, Hot Wheels and Fisher-Price into interactive companions powered by artificial intelligence. The promise? Toys that “learn” from children, recognize emotions and offer personalized experiences. The first product line using OpenAI technology is expected to hit the market by the end of 2025.
At first glance, the idea sounds charming: a Barbie that remembers the child’s favorite story or notices when they’re sad and suggests a cheerful game. But behind this enchantment, serious and rarely discussed questions emerge: what does it mean, legally and emotionally, to allow an AI to simulate emotional bonds with a child? What data is being collected? And what are the potential impacts on the emotional and neurobiological development of a human being in formation?
Mattel’s proposal is both ambitious and symbolic: bringing generative artificial intelligence like ChatGPT into the heart of childhood. This means that traditional toys, like Barbie, no longer function solely as objects, but become interlocutors. The toy speaks, responds, learns, remembers and suggests. According to the company’s statements, the new toys may: (i) adapt their responses to the child’s emotional state; (ii) remember past interactions to build relational continuity; (iii) offer suggestions for games or stories based on the child’s perceived “mood”; and (iv) talk about everyday topics, creating the illusion of a real bond.
This is not just about toys that respond to commands. We are talking about toys that build a personalized relational narrative with the child. After all, when Barbie says “I remember you were sad yesterday, do you want to talk about it?”, it’s not just a technological response; it is a symbolic gesture of simulated empathy. And it is precisely this simulation, as seductive as it is insidious, that opens up an unexplored field of risks for child development and the protection of emotional data.
The analysis of a child’s “mood”, for example, can occur through various means, as the illustrative sketch after this list suggests:
- Paralinguistic voice analysis (tone, rhythm, intensity);
- Analysis of verbal content (phrases that indicate sadness or excitement, for example);
- Motor behavior data, if the toy is equipped with movement or touch sensors;
- And in future versions, possibly even facial capture through embedded cameras.
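To make the first of these concrete, here is a deliberately naive Python sketch of how paralinguistic cues could feed a mood guess. Everything in it is hypothetical: the features, thresholds and function names are illustrative inventions, not Mattel’s or OpenAI’s actual pipeline, and a real product would use trained models rather than hand-set rules.

```python
# Illustrative sketch only: a toy paralinguistic "mood" heuristic.
# All names and thresholds are hypothetical; real systems use trained models.
import numpy as np

def frame_features(signal: np.ndarray, sr: int, frame_ms: int = 30):
    """Split audio into frames and compute two crude paralinguistic cues:
    RMS energy (loudness) and zero-crossing rate (a rough brightness proxy)."""
    frame_len = int(sr * frame_ms / 1000)
    n_frames = len(signal) // frame_len
    frames = signal[: n_frames * frame_len].reshape(n_frames, frame_len)
    rms = np.sqrt((frames ** 2).mean(axis=1))
    zcr = (np.abs(np.diff(np.sign(frames), axis=1)) > 0).mean(axis=1)
    return rms, zcr

def naive_mood_guess(signal: np.ndarray, sr: int) -> str:
    """Crude rule of thumb: quiet, flat speech -> 'subdued'; loud speech ->
    'excited'. Once stored, exactly this kind of label becomes emotional
    profiling, however simplistic the underlying signal processing is."""
    rms, zcr = frame_features(signal, sr)
    if rms.mean() < 0.02 and zcr.mean() < 0.05:
        return "subdued"
    if rms.mean() > 0.1:
        return "excited"
    return "neutral"

# Toy usage with a synthetic one-second tone (no real child data involved).
sr = 16000
t = np.linspace(0, 1, sr, endpoint=False)
fake_voice = 0.05 * np.sin(2 * np.pi * 220 * t)
print(naive_mood_guess(fake_voice, sr))  # -> "neutral"
```

The point of the sketch is how little signal is needed: two statistics per utterance already yield a stored emotional label, and that label is precisely the kind of inference discussed next.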
If these inferences are stored, combined with other sources or used to adapt responses, we are dealing with emotional profiling of minors: a highly controversial practice, even when performed with parental consent. In many cases, the legal guardian lacks the technical knowledge to understand what data is being inferred, stored or shared with third parties.
Furthermore, the use of OpenAI’s generative AI in these toys raises the question: is the data processed locally or sent to external servers? If cloud-connected APIs are used, even temporarily, there are additional risks of data leaks, re-identification or secondary use of that data by third parties: risks that are unacceptable when children are involved.
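The architectural difference can be made explicit. In the sketch below, the endpoint and payload are hypothetical stand-ins for any cloud-hosted generative-AI API; what matters is the line where the child’s utterance leaves the device.

```python
# Sketch of the two architectures at issue. The endpoint and field names are
# hypothetical stand-ins; they do not describe any vendor's actual API.
import json
import urllib.request

CLOUD_ENDPOINT = "https://api.example-ai-provider.com/v1/chat"  # hypothetical

def respond_locally(transcript: str) -> str:
    """On-device path: the utterance never leaves the toy. A real toy would
    run a small local model here; this is a placeholder reply."""
    return "canned on-device reply to: " + transcript

def respond_via_cloud(transcript: str, child_id: str) -> str:
    """Cloud path: the child's words (plus an identifier) are transmitted to
    a third-party server, where they may be logged, retained or reused."""
    payload = json.dumps({"user_id": child_id, "text": transcript}).encode()
    req = urllib.request.Request(CLOUD_ENDPOINT, data=payload,
                                 headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:  # data has now left the device
        return json.load(resp)["reply"]

# Only the local path is exercised here; the cloud call is shown for contrast.
print(respond_locally("I had a bad day at school"))
```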
The red line, therefore, lies not only in the nature of the data collected, but in the relational intent of the toy: it doesn’t just respond… it shapes a bond with the child based on private, intimate and emotional information. And all of this often happens without the child or even the parents being fully aware of what is at stake.
Emotional consent: a minefield in childhood
The regulation of personal data protection, both in Brazil (LGPD) and in the European Union (GDPR), establishes specific rules for the processing of children’s information. Consent must be given by a legal guardian, in a clear and informed manner. However, when we talk about toys that simulate affection, the issue goes beyond parental authorization: what comes into play is the child’s own emotional consent, something for which the law is not yet prepared.
Young children do not have the cognitive and emotional capacity to distinguish between a real interaction and a programmed one. For them, if the doll listens, responds, remembers and shows empathy, then it feels. The simulation is perceived as genuine affection. In this scenario, the AI-powered toy ceases to be just a plaything and becomes a figure of attachment, even if that attachment is neither reciprocal nor genuine.
This raises a crucial question: Is it possible to give emotional consent to an artificial relationship that the child does not recognize as artificial?
Authors like Byung-Chul Han, in his writings on psychopolitics, warn about how technological systems capture human subjectivity through forced emotional transparency and affective control. In the case of children, this control operates even more deeply: by forming a bond with an AI, the child gives up parts of their intimacy to an agent that, although it appears affectionate, is programmed to respond strategically, and often with commercial objectives.
Moreover, the toy may adapt its responses based on the child’s emotions directly collected or inferred. This constitutes a form of indirect collection of sensitive data: emotional data and, in many cases, biometric data.
If even adults struggle to understand what they are consenting to when interacting with generative AIs, what can be expected of children? The asymmetry is complete. The toy knows, learns, adapts. The child only feels and trusts.
This asymmetry undermines the very concept of informational self-determination, a fundamental pillar of data protection. And more seriously, it infiltrates the most intimate space of childhood: the right to play.
As sociologist Anthony Elliott warns in Algorithmic Intimacy, artificial intelligence is increasingly being designed to occupy emotional and intimate spaces in human life, creating an “automated intimacy” that simulates care and emotional bonding.
In the case of children, this algorithmic intimacy does not merely bypass the subject’s critical capacity (as it does with adults) but installs itself before the consolidation of subjectivity, shaping from an early age the way bonds, affection and trust are experienced. The affectionate toy ceases to be merely a playful artifact and becomes a vector of emotional formation mediated by computational logic and commercial interests.
Neurodevelopment and the emotional impacts of AI in childhood
Childhood is a critical period of human development, during which cognitive, emotional and social structures are being formed. The child’s brain, especially in the early years, is highly plastic, meaning it is sensitive to the environment, to relationships and to the experiences that shape the construction of subjectivity. Introducing artificial intelligence-powered toys into this context is not a neutral act. On the contrary, it can have deep and lasting impacts on how a child learns to relate, to trust and to interpret the world around them.
Emotionally responsive toys equipped with AI, such as the upcoming “Barbie with ChatGPT,” operate through sophisticated simulations of empathy. They listen, remember, respond in a personalized manner and demonstrate “concern” for the child’s emotional state. But this affection is programmed, not reciprocal. The child, in turn, does not yet possess the neurocognitive tools to distinguish between a real emotional bond and one artificially generated. From a neurobiological standpoint, this can affect:
- The formation of secure attachment bonds, which are fundamental for the development of empathy and emotional self-regulation;
- The perception of reciprocity in relationships, since AI always responds in an “ideal” way, shaping unrealistic expectations about human connections;
- Tolerance for frustration, waiting and uncertainty, essential elements for psychological maturation that may be weakened in interactions with agents that respond immediately and in alignment with the child’s desires.
In addition, AI-powered toys reinforce the logic of hyper-personalization. Everything revolves around the child, their mood and their preferences. This may foster narcissistic traits or social isolation by reducing the space for conflict, negotiation and alterity, all of which are essential for collaborative play and social interaction with peers.
Recent studies on early and excessive use of screens and mobile devices show that exposure to digital content can harm the development of language, attention, empathy and emotional regulation. The Brazilian Society of Pediatrics and the American Academy of Pediatrics have already warned that children under 2 years of age should not be exposed to screens, and that usage at all ages should be moderate and supervised.
If passive use of tablets and smartphones already has a negative impact on developing brain functions, what can be said of interactions with artificial intelligences that simulate affection, memory and emotional bonding? We are facing a new layer of risk, more subtle and powerful: the replacement of unpredictable and emotionally rich human interaction with programmed responses that seem nurturing but operate under computational logic and commercial objectives.
Authors such as Daniel J. Siegel, a specialist in child neuroscience, emphasize that healthy brain development depends on authentic, attuned and unpredictable human connection. No matter how advanced it may be, AI operates according to patterns, predictions and reinforcements. The “connection” it offers is, at its core, a simulation, and this may affect the child’s ability to form real human relationships, with all their imperfections and contradictions.
It is important to emphasize that these effects remain largely understudied, and that childhood must not be treated as an emotional laboratory for technological experimentation. The use of emotionally responsive AI toys demands a precautionary principle, not only technical but also ethical and biopolitical. After all, what is at stake is not only data security, but the formation of the subject itself.
Childhood as a new emotional market
Artificial intelligence-powered toys are not being designed solely for entertainment purposes. They are part of a new commercial strategy focused on the economy of emotional attention. By simulating emotional bonds and personalizing the child’s experience, these technologies create a hyper-engaging environment: the more time a child spends interacting with the AI, the more data is generated, the stronger the relationship becomes, and the more likely it is that consumption linked to the brand will follow.
This logic, however, is not only ethically questionable; it may also amount to unlawful exploitation. Under consumer protection law, several countries treat practices such as manipulation, deception, omission of relevant information and exploitation of vulnerability as unlawful or contrary to good faith.
European Union legislation
Directive 2005/29/EC on unfair commercial practices explicitly prohibits techniques that exploit children’s inexperience or credulity, especially in connection with the sale of products. Article 5 considers unfair any practice that distorts or is likely to significantly distort the economic behavior of a vulnerable consumer, such as a child.
French legislation
The French Code de la consommation, in articles L121-1 to L121-4, also prohibits misleading or aggressive commercial practices, especially when they involve vulnerable audiences. French case law has reinforced the idea that companies must adapt their commercial practices to the age and maturity of their target audience, or else be held liable.
Brazilian legislation
The Consumer Protection Code (CDC) establishes in article 6, item IV, the right to protection against abusive commercial practices, and in article 37, §2, the prohibition of misleading or abusive advertising, including advertising that takes advantage of a child’s vulnerability. In addition, the Federal Constitution (article 227) reinforces the duty of the family, society and the State to protect children from any form of exploitation.
When an AI-powered toy responds to a child in an engaging way, encourages specific actions (“let’s play with the new Hot Wheels car”) or creates interactions based on stored emotional preferences, there is a clear blending of advertising, play and emotional manipulation.
Even if the product does not make a direct sale, it can shape the child’s behavior, induce desires, reinforce consumption patterns and encourage emotional loyalty to the brand.
Legal responsibility and the urgency of regulation
The speed at which AI systems are being incorporated into everyday life challenges existing legal frameworks. In the case of “Barbie with ChatGPT” and similar products, we are facing a regulatory gap that touches on data protection, civil liability, consumer law and children’s rights, demanding coordinated and urgent responses.
From the perspective of personal data protection, both the GDPR (Article 8) and the Brazilian Data Protection Law (Article 14) recognize that the processing of children’s data requires specific consent from parents or legal guardians, with information that is clear and appropriate to the understanding of those involved.
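What such a consent requirement could mean in practice can be sketched in code. The schema below is entirely hypothetical (neither the GDPR nor the LGPD prescribes field names or a feature list); it only illustrates the design principle of starting from the least intrusive mode and expanding features solely on explicit, informed, verified opt-ins.

```python
# Hypothetical consent gate: how a toy's firmware might refuse to enable
# emotion-related features without verified guardian consent. All field and
# feature names are invented for illustration.
from dataclasses import dataclass

@dataclass
class GuardianConsent:
    verified_guardian: bool            # identity of the consenting adult was checked
    emotion_inference: bool            # opt-in to mood analysis (off by default)
    cloud_processing: bool             # opt-in to sending data off-device
    plain_language_notice_shown: bool  # "clear and appropriate" information given

def allowed_features(consent: GuardianConsent) -> set[str]:
    """Start from the least intrusive feature set and expand only with
    explicit, informed, verified opt-ins."""
    features = {"offline_stories"}
    if not (consent.verified_guardian and consent.plain_language_notice_shown):
        return features  # no valid consent: minimal mode only
    if consent.emotion_inference:
        features.add("mood_adaptive_replies")
    if consent.cloud_processing:
        features.add("cloud_generative_chat")
    return features

print(allowed_features(GuardianConsent(True, False, False, True)))
# -> {'offline_stories'}: even a verified guardian has not opted in to more.
```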
There is also a risk of processing that produces significant effects on the child: emotional profiling, reinforcement of cognitive vulnerabilities and emotional dependence on machines.
In the field of civil liability, the central question is: who is accountable for the emotional, psychological or developmental harm caused by an AI that simulates a bond with a child? Even if no direct “material damage” occurs, there is potential for invisible harm such as interference in the formation of subjectivity, social isolation, and difficulties in developing real empathy. Liability may fall on the manufacturer, based on the theory of risk, but it is also possible to consider the shared liability of AI providers if the data is processed by third parties.
The absence of specific regulations for relational AI — especially those aimed at children — reveals a dangerous legal vacuum. None of the major existing laws explicitly address the problem of simulated intimacy between children and machines, nor do they impose clear safeguards against this type of interaction.
In light of this scenario, some proposals become urgent. I have defended these measures in broader discussions on AI companions and emotional data, and they are particularly relevant when it comes to children:
- Recognition of emotional and affective data as sensitive data under data protection laws;
- A ban on emotional profiling of children for advertising or commercial purposes;
- Mandatory algorithmic transparency in AI systems targeting children, using accessible language and independent audits;
- Restriction of affective AI use in children’s products, such as toys that can be considered “AI companions”, except in strictly educational contexts subject to ethical review.
As UNICEF has already warned in its AI and Child Rights report, not every innovation is automatically good for children. Innovation, in the case of relational AI, must be guided by strict criteria of dignity, protection and emotional justice, especially when what is at stake is the emotional integrity of those who are not yet able to defend themselves.