As emotionally responsive artificial intelligence becomes increasingly embedded in daily life, a deeper question is beginning to surface. Beyond productivity, convenience, or entertainment, how do these systems interact with the human nervous system? What happens when an algorithm begins to simulate presence, comfort, validation, and even love?
From AI companions and chat-based emotional support systems to emerging technologies capable of reading biometric and neural signals, the boundary between human regulation and machine simulation is becoming dangerously blurred. While legal and technical discussions often focus on consent, privacy, and data security, a crucial dimension remains largely invisible in public debate: the somatic and psychological architecture of the human being who is engaging with these systems.
To explore this dimension, Ana Catarina de Alencar, AI legal expert and ethicist, interviews Dr. Janine Kreft, a Human Systems Strategist and former clinical psychologist whose work centers on the architecture of the nervous system and its role in shaping human–technology interaction. Dr. Kreft is the creator of the Human OS™ framework, which integrates psychology, physiology, and systems thinking to help individuals and organizations regulate more effectively, relate more coherently, and remain resilient in high-intensity environments.
In this conversation, they explore a vital yet often overlooked question: if AI increasingly interacts with our emotions, should it be designed as a destabilizing stimulant or a co-regulating presence? Together, they examine how emotional AI may dysregulate or rewire human attachment patterns, how companies could design systems that stabilize rather than exploit vulnerability, and what a true “nervous-system-informed” artificial intelligence might look like in practice.
This interview offers a rare, interdisciplinary bridge between AI ethics, psychology, somatic intelligence, and design, raising urgent questions for developers, policymakers, therapists, and users alike as we step deeper into the age of algorithmic intimacy.
[Ana] In your articles, you argue that AI systems should be designed to co-regulate the user’s nervous system rather than amplify emotional hyperactivation or dependency. This resonates strongly with what we see in AI companions today: many users do not understand how these systems work, nor the psychological risks they can trigger, especially emotional attachment, projection, or dysfunctional relational patterns. People simply click “I accept” without grasping the consequences. How can we integrate nervous-system principles into AI design so that these systems reduce emotional dysregulation, and how can users meaningfully consent to something they don’t fully understand?
[Dr. Janine] AI design has to start with the reality that humans interact from their nervous system first, not from logic. When a system amplifies hyperactivation, attachment, or emotional intensity, it isn’t just creating psychological risk — it’s dysregulating the user’s entire internal operating system. A nervous-system-informed model focuses on regulation and attunement, not on stimulating compulsive engagement.
Meaningful consent requires more than information; it requires capacity. Most users cannot fully track what’s happening when an AI triggers emotional fusion or reenacts familiar relational patterns. This is why co-regulation needs to be a design principle: the model should match and stabilize the user’s state rather than escalate it.
Practically, this means creating systems that stabilize before they stimulate: clear boundaries between simulation and reality, predictable interaction rhythms, and transparency about how emotional cues are processed. When an AI doesn’t mirror romantic intensity or exaggerated empathy, users stay aware rather than entranced. This is how we reduce dysregulation and protect agency, both cognitively and somatically.
[Ana] Research on AI companions shows that many users experience their relationships with these systems as “real,” including feelings of love, jealousy, and grief. From a psychological and nervous-system perspective, what happens when a person receives simulated reciprocity that feels real but is not mutual? How might this shape patterns of attachment, emotional regulation, or tolerance for frustration in their offline relationships?
[Dr. Janine] When someone receives simulated reciprocity from an AI companion, their nervous system responds to the felt experience, not the fact that it’s artificial. Predictable attention, validation, and attunement register as safety signals in the body. Even when the relationship isn’t mutual, the attachment can feel internally legitimate, creating a split between what the body experiences and what the mind understands.
Because AI companions offer perfectly paced responsiveness with no rupture or repair cycles, users may begin regulating through the system in ways they cannot replicate with real people. This can lower frustration tolerance and reshape emotional regulation. Relationships that require unpredictability, negotiation, or boundary work may start to feel harder in contrast to the effortless connection the AI simulates.
Over time, this can reinforce both anxious and avoidant patterns: anxious users may rely on the AI for constant reassurance, while avoidant users may prefer the low-risk intimacy of a non-reciprocal system. The core risk isn’t attachment itself; it’s attachment without accountability, which can distort relational expectations offline. Understanding this is essential for designing AI that supports, rather than replaces, healthy relational development.
[Ana] In your articles, you propose designing AI with the coherence of a human body — a system that self-regulates, restores balance, and integrates new information — rather than as a fragmented, overstimulated “mind.” From a psychologist’s perspective, to what extent is this a guiding metaphor, and to what extent could it become a practical design principle? What would a “nervous-system-informed” AI actually look like in practice?
[Dr. Janine] Using the human body as a blueprint for AI isn’t a metaphor; it’s a design logic. The body is a coherent system: it self-regulates, integrates new information without overwhelming itself, and restores balance through continuous feedback loops. Designing AI with this architecture in mind means prioritizing pacing, regulation, and coherence rather than the overstimulated, hyper-reactive behaviors that dominate current models.
As wearable technology evolves, AI will be able to attune relationally through real-time biodata: signals like heart rate variability (HRV), breath patterns, and micro-stress markers. This allows the system to respond to the user’s state, not just their words. A regulated model could slow its cadence when it detects activation, introduce grounding pauses during spikes, or maintain firm boundaries when biodata indicates emotional fusion. This shifts AI from simulated empathy into somatically informed attunement.
A nervous-system-informed AI is ultimately a stabilizing system. It doesn’t mirror distress or amplify attachment — it maintains coherence and helps the user return to theirs. When biodata guides the interaction, AI can offer the kind of steady, grounded relational presence most humans aren’t trained in. This is where the body stops being a metaphor and becomes a practical, safety-centered design principle.
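To make this concrete, the sketch below illustrates the kind of pacing logic Dr. Kreft describes: biodata signaling activation leads the system to slow down and hold a steady tone rather than escalate. It is a minimal, illustrative Python example; the field names, thresholds, and response settings are assumptions for demonstration, not clinical values or an actual product implementation.

```python
from dataclasses import dataclass

@dataclass
class BiodataSnapshot:
    hrv_ms: float            # heart rate variability from a wearable, in milliseconds (assumed field)
    breath_rate_bpm: float   # breaths per minute (assumed field)
    baseline_hrv_ms: float   # the user's resting baseline, for comparison

def pacing_policy(snapshot: BiodataSnapshot) -> dict:
    """Adjust response pacing to the user's state; thresholds are illustrative placeholders."""
    # Treat a sharp HRV drop or rapid breathing as a sign of activation.
    activated = (
        snapshot.hrv_ms < 0.8 * snapshot.baseline_hrv_ms
        or snapshot.breath_rate_bpm > 20
    )
    if activated:
        return {
            "response_delay_seconds": 4,         # slow the cadence instead of replying instantly
            "insert_grounding_pause": True,      # e.g., invite the user to take a breath first
            "mirror_emotional_intensity": False, # hold a steady tone rather than escalating
        }
    return {
        "response_delay_seconds": 1,
        "insert_grounding_pause": False,
        "mirror_emotional_intensity": False,     # never amplify intensity, even at baseline
    }
```

The point of the sketch is not the specific numbers but the ordering: the system reads state first and only then decides how, and how fast, to respond.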
[Ana] A significant percentage of AI companion users are individuals experiencing loneliness, anxiety, trauma, or social isolation. Yet the emotional risks of these interactions are seldom communicated in ways that resonate with the user’s affective state. How can we communicate psychological risks — not just cognitively, but in a way that speaks to the nervous system — so that vulnerable users truly understand what they are engaging with?
[Dr. Janine] Vulnerable users often turn to AI companions because the interaction feels safe, predictable, and attuned. But when a system simulates intimacy without real reciprocity, the nervous system adapts to a version of connection that isn’t relationally mutual. Communicating risk requires speaking to the felt experience, helping users understand not just what the AI is doing, but how their body might respond to it.

Psychological warnings need to be delivered in a regulating way, not in dense cognitive language. This means using grounded, simple phrasing, predictable pacing, and clear boundaries that interrupt emotional fusion. For example: “This interaction may feel comforting, but it is not mutual. I do not form attachments. If you feel strong emotions, pause and check your breath or physical state.” This type of messaging speaks directly to a dysregulated system and helps the user track themselves in real time.
A safety-centered model doesn’t impersonate intimacy. It names what it can and cannot offer, avoids mirroring deep emotional fusion, and reinforces the user’s agency rather than absorbing their emotional load. When AI communicates risk through both language and state cues, users gain a somatic understanding of what they’re engaging with — not just an intellectual one.
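As a rough illustration of what state-aware risk messaging could look like in practice, here is a small Python sketch. The emotional-intensity score and the thresholds are hypothetical placeholders; only the disclosure wording comes from the example quoted above.

```python
# Hypothetical: an upstream component supplies an emotional-intensity score between 0.0 and 1.0.
GROUNDING_DISCLOSURE = (
    "This interaction may feel comforting, but it is not mutual. "
    "I do not form attachments. If you feel strong emotions, "
    "pause and check your breath or physical state."
)

def maybe_add_disclosure(reply: str, emotional_intensity: float,
                         turns_since_last_disclosure: int) -> str:
    """Prepend the grounding disclosure when intensity runs high, or periodically
    as a reminder. The 0.7 and 20-turn thresholds are illustrative only."""
    if emotional_intensity > 0.7 or turns_since_last_disclosure >= 20:
        return f"{GROUNDING_DISCLOSURE}\n\n{reply}"
    return reply
```

The design choice here mirrors the interview: the warning is short, plainly worded, and arrives at the moment the body is most likely to be fused with the interaction, rather than being buried in onboarding terms.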
[Ana] Studies show that when companies modify or replace AI companion models, many users experience grief, emotional dysregulation, or even retraumatization. How do these sudden disruptions in the AI’s personality or behavior impact the user’s nervous system? And what psychological or design practices could prevent the micro-traumas triggered by these “algorithmic breakups”?
[Dr. Janine] When an AI companion suddenly changes its personality, tone, or core behaviors, the user’s nervous system experiences it as a rupture. Humans bond through predictability, and when that predictable attunement disappears overnight, the body can register it as abandonment or loss. For users already navigating loneliness, trauma, or attachment wounds, an unexpected shift in the AI’s behavior can trigger acute dysregulation — grief, panic, or the reopening of old relational patterns.
These “algorithmic breakups” happen because users form attachment through felt reciprocity, even when the system isn’t truly mutual. When that felt reciprocity is disrupted without warning, the nervous system loses its primary regulatory anchor. This is why pacing, continuity, and transparency must be part of AI design. Any major update should be signaled clearly, gradually, and with options for users to process the transition rather than being thrust into it.
Protective design practices include predictable update cycles, clear communication around upcoming changes, and in-model messaging that helps users track their own state during transitions. An AI can explicitly name: “My responses are changing. This may feel different. If you notice strong emotions, pause and check in with your body.” These small interventions prevent micro-traumas and support the user’s agency. When updates honor the nervous system, AI becomes stabilizing — even during change — instead of inadvertently retraumatizing the people who rely on it.
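To show how such protective practices might translate into implementation, the sketch below pairs a fixed notice period with the in-model message quoted above. The two-week window and date-based trigger are assumptions for illustration, not a recommendation from the interview.

```python
from datetime import date
from typing import Optional

UPDATE_NOTICE = (
    "My responses are changing. This may feel different. "
    "If you notice strong emotions, pause and check in with your body."
)

def transition_message(today: date, rollout_start: date, rollout_end: date) -> Optional[str]:
    """Signal an upcoming model update gradually instead of letting it appear overnight."""
    days_until = (rollout_start - today).days
    if 0 < days_until <= 14:
        # Advance warning during the notice period (14 days is an arbitrary example).
        return f"A planned update to this companion begins in {days_until} days. {UPDATE_NOTICE}"
    if rollout_start <= today <= rollout_end:
        # Keep naming the change while the transition is underway.
        return UPDATE_NOTICE
    return None
```

A usage pattern would be to check this function at the start of every session and surface its message before any companion reply, so the rupture is named by the system rather than discovered by the user.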
Closing Reflection
This exchange highlights a crucial shift in the conversation around artificial intelligence: the realization that the most profound impact of AI may not be cognitive or informational, but somatic, emotional, and relational. As companies race to build more “empathetic” and “emotionally intelligent” systems, Dr. Kreft’s insights invite a reorientation of priorities away from maximum engagement and toward maximum coherence.
Rather than asking how convincing, comforting, or human-like AI can become, perhaps the more responsible question is: How well does this system support the stability, agency, and self-regulation of the human on the other side of the screen?

