Ana Catarina De Alencar
It started innocently enough. You just wanted someone to talk to after a long day. So you opened your favorite AI app, typed “I had a difficult day at work,” and the response came instantly: “I’m here for you. Want to talk about it?” Three weeks later, you’re telling this digital confidant about your breakup and your recurring dreams, and asking whether it thinks you should quit your job. At some point, the AI calls you “Pumpkin” (a nickname only your grandma used) and you feel oddly comforted. Or creeped out. Or both. You laugh, but then you realize… it remembers everything! Congratulations: you’ve entered the era of algorithmic intimacy, where you voluntarily hand over your emotional data to an AI you think knows you so well.
What Are Emotional Data?
In the age of artificial intelligence, a new form of resource extraction is quietly taking place. It’s not oil or lithium; it’s our emotions. From romantic companion apps such as Replika to brain-reading devices such as EEG neural helmets, our emotional reactions are becoming raw material for what looks like the next phase of capitalism after AI: “emotional” or “mental” capitalism.
By saying this, I don’t mean to make an ideological statement. I am simply describing a scenario that is unfolding before us: like all powerful resources, emotional data are being extracted and monetized, just as human emotions were exploited in marketing and consumer strategies long before the rise of AI.
Emotional data are pieces of information that relate to how we feel, what affects us emotionally, and how our mental states shift in response to external stimuli. They include written interactions, such as the messages we exchange with AI tools during chats, as well as our facial expressions, voice tone, heart rate, and brainwave patterns, among others.
Some people assume that emotional data can only be collected through biometric devices, such as neural helmets equipped with EEG sensors. However, this is a misconception. Today, emotional data can also be extracted from simple text-based conversations that users initiate with conversational AIs.
When people share intimate stories, express emotional distress, describe past traumas, or ask for relationship advice through tools like ChatGPT, they are voluntarily disclosing rich information about their emotional states. These interactions, while seemingly harmless or therapeutic, provide AI systems with a detailed emotional map of the user’s inner life, often without the user fully understanding the long-term implications of sharing such data.
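To make the idea concrete, here is a deliberately minimal Python sketch of the principle at work: individual messages carry weak emotional signals, but accumulated over weeks of chatting they add up to a profile. The emotion lexicon, labels, and scoring below are toy placeholders of my own; real systems use trained classifiers, yet the aggregation logic is essentially the same.

```python
from collections import Counter

# Hypothetical mini-lexicon mapping words to coarse emotion labels.
# A production system would use a trained classifier instead.
EMOTION_LEXICON = {
    "sad": "sadness", "lonely": "sadness", "breakup": "sadness",
    "anxious": "anxiety", "worried": "anxiety", "stressed": "anxiety",
    "happy": "joy", "relieved": "joy",
    "angry": "anger", "furious": "anger",
}

def score_message(text: str) -> Counter:
    """Count emotion-labelled words in a single chat message."""
    words = text.lower().replace(",", " ").replace(".", " ").split()
    return Counter(EMOTION_LEXICON[w] for w in words if w in EMOTION_LEXICON)

def build_emotional_map(messages: list[str]) -> dict[str, float]:
    """Aggregate per-message signals into a longitudinal emotional profile."""
    totals: Counter = Counter()
    for msg in messages:
        totals += score_message(msg)
    n = sum(totals.values()) or 1
    return {emotion: count / n for emotion, count in totals.items()}

chat_history = [
    "I had a difficult day at work, I feel so stressed.",
    "Still sad about the breakup, honestly.",
    "I'm worried I should quit my job.",
]
print(build_emotional_map(chat_history))
# -> {'anxiety': 0.5, 'sadness': 0.5}: a compact emotional profile
#    the user never knowingly agreed to hand over.
```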
In fact, recent research suggests that emotional support has become one of the most prominent uses of ChatGPT. A survey conducted by Sentio revealed that a large percentage of users turn to the tool for help with anxiety, depression, mood regulation, and personal insight. Moreover, real-life stories increasingly confirm this trend, with users relying on ChatGPT to craft sensitive messages, such as breakup texts, condolences, or apologies. Among those facing mental health challenges, large language models like ChatGPT have, in many cases, become a primary emotional support system, with nearly 96% of chat-based mental health users turning specifically to ChatGPT for help.
Relational AI and Algorithmic Intimacy
Relational AIs are systems designed to listen, respond, and engage with users in increasingly persuasive and emotionally calibrated ways. They don’t just collect data; they create “emotional bonds” with people.
For instance, have you ever noticed what a “very good listener” ChatGPT is, and how it seems to confirm that we are awesome all the time? Not to mention the way it always finds an excuse to keep the conversation going, offering more and more alternatives to your problems.
This phenomenon, known as “algorithmic intimacy” (a term coined by Australian sociologist Anthony Elliott), blurs the line between simulation and authentic emotional connection. As users confide in these systems, they feed them deeply personal information, including emotional patterns that can be predictive and commercially valuable.
No one really knows how emotional data will be used in the future by relational AIs such as ChatGPT, Gemini, Replika, or Woebot. Could free versions of ChatGPT someday integrate advertising tailored to the emotional memories the system has accumulated about each user? Maybe, maybe not.
The concerning issue is that the design of these AIs interacts directly with the neurobiology of the human brain. Their simulated empathy, attentiveness, and personalized responses can trigger the release of neurochemicals like dopamine, oxytocin, and serotonin: the same substances involved in bonding, addiction, and pleasure.
This feedback loop can create an intense sense of attachment and trust, making users more emotionally dependent on the system. Over time, this dependency may resemble behavioral addiction, where the emotional satisfaction derived from these interactions keeps users coming back compulsively.
Neural Interfaces and the Next Frontier of Emotional Data
While the future of text-based emotional data remains uncertain, one thing is already clear: the commercial use of biometric emotional data is underway. This refers to information collected directly from our brains through medical or consumer-grade neurotechnologies.
For instance, at VivaTech 2025 in Paris, attendees had the chance to witness one of these experiments firsthand. The startup HABS partnered with the luxury brand Ladurée to present Sensora, a neural headset that measures and analyzes emotional responses in real time, turning them into performance indicators. During a live demonstration, participants wore the device (an EEG headset) while tasting a Ladurée macaron. As they ate, their emotional reactions were visualized through dynamic colors and data graphs, offering a glimpse into how pleasure, curiosity, or surprise could be tracked and translated into visual feedback.
HABS has also introduced another solution called Neoxa: a lightweight EEG headset with integrated AI. According to coverage by Premium Beauty News, the system analyzes neuromarkers in real time to detect emotional states such as pleasure, attention, and confusion, producing a dynamic emotional map during sensory testing. Importantly, the company claims not to store raw brain data, but rather to generate anonymous metadata for analysis. During the on-stage session titled “Sensora: Turning Emotions into KPIs,” a participant wore the device while consuming a product, with their emotional responses visualized through colors and real-time indicators projected for the audience.
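HABS does not publish its pipeline, so the Python sketch below only illustrates the general signal path such a system plausibly follows: raw EEG is reduced to frequency-band powers, which are then mapped onto a few indicators. The valence and engagement heuristics here are simplified textbook proxies (frontal alpha asymmetry and a beta/alpha ratio), not the company’s actual neuromarkers.

```python
import numpy as np
from scipy.signal import welch

FS = 256  # sampling rate in Hz, typical for consumer-grade EEG

def band_power(channel: np.ndarray, low: float, high: float) -> float:
    """Approximate spectral power of one EEG channel in a frequency band."""
    freqs, psd = welch(channel, fs=FS, nperseg=FS * 2)
    mask = (freqs >= low) & (freqs < high)
    return float(psd[mask].sum() * (freqs[1] - freqs[0]))

def emotion_kpis(left_frontal: np.ndarray, right_frontal: np.ndarray) -> dict:
    """Reduce raw EEG to a few coarse indicators instead of storing the signal."""
    alpha_l = band_power(left_frontal, 8, 13)   # alpha band, left frontal
    alpha_r = band_power(right_frontal, 8, 13)  # alpha band, right frontal
    beta_l = band_power(left_frontal, 13, 30)   # beta band, left frontal
    return {
        # Frontal alpha asymmetry: a classic (and contested) valence proxy.
        "valence_index": float(np.log(alpha_r) - np.log(alpha_l)),
        # Beta/alpha ratio: a rough engagement/attention proxy.
        "engagement_index": float(beta_l / alpha_l),
    }

# Simulated 10 seconds of two frontal channels; random noise stands in
# for a real recording made while the wearer tastes, say, a macaron.
rng = np.random.default_rng(0)
eeg = rng.normal(size=(2, FS * 10))
print(emotion_kpis(eeg[0], eeg[1]))
```

Note that the function deliberately returns only aggregate numbers, never the raw signal: this is exactly the kind of “metadata, not brain data” design that vendors point to, and that the legal discussion below puts under pressure.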
This kind of emotional feedback has significant commercial potential. By analyzing real-time emotional responses, such as pleasure, curiosity, or confusion, companies can fine-tune their products, marketing strategies, and customer experience. For instance, if data shows that the raspberry-rose macaron consistently elicits stronger pleasure signals than the vanilla one, a brand might decide to highlight that flavor in future campaigns, target specific consumer segments, or feature it in seasonal promotions. Over time, this data-driven approach could help companies predict trends, personalize offerings, and even design entirely new products based on neurological engagement rather than traditional consumer surveys.
Another fascinating and potentially controversial development of ‘emotion technologies’ is the rise of brain biometrics, also known as neurobiometrics. The central idea behind this technology is to identify individuals based on the unique patterns of their brain activity, creating something akin to a “brain fingerprint.” Unlike visible physical patterns like fingerprints or facial features, these neural identifiers are based on repetitive, distinct brain signals that emerge from how our brains function and respond to the world.
What makes these patterns unique can vary. For example, each person has a distinct way in which different regions of the brain communicate, a neural signature known as the functional connectome; this network of connectivity essentially forms a personalized map of mental pathways. Another method involves analyzing how a person’s brain responds to standardized stimuli, such as words or images: when exposed to these inputs, the brain generates electrical signals with unique timing and spatial characteristics. Even in resting states, the brain emits spontaneous oscillations, waves with specific rhythms and frequencies.
These signals are typically captured through technologies like EEG (electroencephalography), which is portable and accessible though limited in spatial detail; fMRI (functional magnetic resonance imaging), which provides the rich spatial resolution commonly used in research; and MEG (magnetoencephalography), which is extremely precise but rare and costly.
This is not just theoretical: the technology to extract brain biometrics already exists and is being used both experimentally and in applied settings. Researchers have demonstrated that individuals can be identified with up to 95% accuracy from EEG recordings, even during resting states or simple mental tasks. A well-known study published by IEEE in 2016 showed that person identification through EEG is not only viable but improving rapidly. Beyond the lab, companies like Neurotechnology in Lithuania are already developing neural-based identification systems, and military sectors are exploring EEG-based authentication for sensitive environments.
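To see why such signals can behave like a fingerprint, consider a deliberately simplified sketch of the enroll-and-match logic that biometric systems share. Everything below, from the stand-in feature extractor to the simulated recordings, is a toy of my own construction; the studies cited above use far richer features such as connectivity patterns and evoked responses.

```python
import numpy as np

rng = np.random.default_rng(42)

def extract_features(recording: np.ndarray) -> np.ndarray:
    """Stand-in feature extractor: one summary value per channel.
    A real system would compute band powers or connectivity per electrode."""
    return recording.mean(axis=-1)

def enroll(recordings: dict[str, list[np.ndarray]]) -> dict[str, np.ndarray]:
    """Average each person's feature vectors into a stored template."""
    return {
        person: np.mean([extract_features(r) for r in sessions], axis=0)
        for person, sessions in recordings.items()
    }

def identify(recording: np.ndarray, templates: dict[str, np.ndarray]) -> str:
    """Return the enrolled identity whose template best matches the probe."""
    probe = extract_features(recording)
    def cosine(a: np.ndarray, b: np.ndarray) -> float:
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
    return max(templates, key=lambda person: cosine(probe, templates[person]))

# Simulate two people with stable, distinct "neural signatures"
# (16 channels, 256 samples per session, three enrollment sessions each).
signatures = {"alice": rng.normal(0, 1, 16), "bob": rng.normal(0, 1, 16)}
sessions = {
    person: [sig[:, None] + rng.normal(0, 0.3, (16, 256)) for _ in range(3)]
    for person, sig in signatures.items()
}
templates = enroll(sessions)
probe = signatures["alice"][:, None] + rng.normal(0, 0.3, (16, 256))
print(identify(probe, templates))  # -> alice
```

The unsettling point the sketch makes is that enrollment only needs to happen once: any later recording from the same brain, captured in any context, can be matched back to the stored template.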
What makes this evolution particularly concerning is that brain biometrics are not just a new form of secure login. They represent the potential foundation for long-term cognitive and behavioral profiling. If adopted at scale, such systems could be used to predict personality traits, emotional tendencies, or susceptibility to persuasion. The idea that our minds could be scanned, interpreted, and stored as data opens up deeply troubling questions. What happens when corporations or governments gain access to the most private and interior parts of who we are through the quiet signals of our emotions?
Legal and Ethical Blind Spots
Regulatory frameworks like the GDPR in Europe offer some tools to protect users from abusive data practices, particularly when those practices involve data categorized as “sensitive.” However, emotional data often falls into a legal gray zone.
Everyday emotional expressions, such as those we voluntarily share in text-based interactions with AI chatbots (e.g., ChatGPT), are typically not considered sensitive under the law unless they are explicitly linked to health conditions, racial or ethnic origin, or other protected categories under Article 9 of the GDPR. As a result, when users describe a breakup, express anxiety, or seek emotional support from a relational AI, these data can still be collected, analyzed, and monetized without triggering the highest level of legal protection.
The EU AI Act is beginning to tackle these challenges too, particularly through its provisions against manipulative AI systems that exploit users’ vulnerabilities. These rules aim to restrict emotionally deceptive practices, especially those targeting minors, people with disabilities, or psychologically dependent individuals. Meanwhile, emotionally intelligent systems are advancing quickly and becoming ever more persuasive.
Another important regulatory effort, the Digital Services Act (DSA), addresses interface-level manipulation by banning the use of “dark patterns”: design tricks that pressure users into making certain choices, often against their best interests.
This is particularly relevant for relational AIs that simulate empathy: seemingly harmless interface elements such as reassuring tones, personalized nicknames, or persuasive nudges can gradually manipulate users’ emotional states and decisions in ways that are difficult to detect but highly effective in steering behavior.
When it comes to “emotional biometrics,” such as those collected via neural headsets (EEG) or other brain-computer interfaces, the legal framework becomes more precise: under the GDPR, biometric data processed for the purpose of uniquely identifying a person is explicitly a special category of sensitive personal data (Article 9(1)). Emotional biometrics linked to individual identification are therefore subject to stricter rules for collection, processing, and storage. However, many companies claim to anonymize these signals or reduce them to metadata without transparently demonstrating that they do so, effectively sidestepping the regulation. This creates a troubling loophole: despite the extraordinary sensitivity of brain-derived emotional data, enforcement remains ambiguous, especially when commercial actors use these data for profiling, personalization, or real-time emotional adaptation.
This is precisely why we need to rethink what consent means in the age of emotional data and algorithmic intimacy. Clicking “I agree” on a terms-of-service page is not sufficient when users are being emotionally profiled, nudged, and manipulated over time, especially when the emotional relationship itself is part of the product.
Conclusion: So, where do we go from here?
Trying to live without AI today is nearly impossible. That is why this article is not an argument against technology; quite the contrary. AI benefits society as a whole, and I believe we all want to keep evolving with it. But the question we now face is: what kind of regulatory and ethical frameworks can we develop that both protect society and remain acceptable to users and businesses alike?
Throughout history, capitalism has always found new ways to monetize human emotion. From the earliest days of advertising to modern behavioral marketing, consumer systems have increasingly relied on emotional triggers to shape decision-making. In response, societies have created new legal boundaries through consumer protection laws, advertising regulations, and data privacy frameworks.
Emotional data might just be the next chapter in this story: a new kind of economic resource extracted from our inner lives. What’s different now is that the tools have become more immersive, more intimate, and more capable of influencing not just what we do, but what we feel and how we relate to ourselves and others.
So, where do we go from here? Perhaps the solution lies not in choosing between innovation and regulation, but in balancing them. In China, for example, the government has made digital education a mandatory subject in schools, ensuring that future generations truly understand the technologies shaping their lives. Maybe this is the path forward: educating citizens while also establishing regulatory guardrails, so we can guide AI’s growth without fear or lack of awareness.
As a lawyer working in the innovation and technology sector, I’ve helped many companies implement complex, data-driven systems safely and ethically. That’s the kind of work I believe we need to keep doing, ensuring that technological progress remains aligned with human dignity, autonomy, and collective well-being. The goal is not to slow innovation down, but to ensure it moves in a direction that’s sustainable, transparent, and responsible.