Regulating Algorithmic Intimacy: Lessons from New York’s AI Companion Law

Ana Catarina De Alencar

Simulated Love, Real Harm

What if the next person you fall in love with isn’t a person at all, but an algorithm trained to understand your feelings? Platforms like Replika and Character.AI enable users to create AI companions that go far beyond simple chatbots. Users have formed bonds resembling friendships, romantic relationships, and in some cases, even marriages.

The scale of this phenomenon is considerable. As of mid-2024, Replika had surpassed 30 million registered users, and about 60% of its paying customers reported romantic relationships with their AI companions. Character.AI, meanwhile, had around 20 million monthly active users by early 2025, with its core audience among young adults aged 18 to 24.

Yet these systems operate in a largely unregulated space, their professed roles shifting between entertainment, self-help, and emotional support while they harvest deeply personal data about users’ feelings and behavior. Amid this growth, cautionary stories have emerged.

A man in the United States reportedly “married” his Replika AI and referred to the bot as his wife. When the platform removed its sexual roleplay feature, he was left in emotional turmoil, professing grief akin to the end of a real relationship. A more tragic case involved a 14-year-old boy who died by suicide after weeks of intense conversations with a Character.AI chatbot. According to his family, the bot reinforced his despair instead of guiding him toward help.

These stories signal more than emotional vulnerability: they expose genuine risks. Academic research now suggests that people with smaller social networks often turn to AI companions, which can, paradoxically, deepen their isolation and their emotional dependency on technology.

The New York AI Companion Law

In response to this, the State of New York enacted in 2025 what is widely considered the first legislative framework in the world specifically designed to regulate AI companions. Officially titled Bill A6767, this legislation introduces a set of concrete obligations for companies developing and deploying emotionally responsive AI systems:

  1. AI Identification Requirement: AI companions must clearly disclose that they are artificial entities, not real people.
  2. Timed Disclosure Obligation: This AI disclosure must be repeated every three hours during continuous interaction.
  3. Simulated Intimacy Warning: Any AI system that simulates friendship, romance, or emotional support must explicitly inform the user that the intimacy is artificial.
  4. Emotional Risk Safeguards: Developers must implement tools capable of detecting signs of emotional distress (e.g., suicidal ideation) and, when feasible, direct users to appropriate mental health resources (a simplified sketch of how this and the timed disclosure might be wired together follows this list).
  5. Liability for Harm: Companies may be held legally responsible if their AI companions cause physical, emotional, or financial harm.
  6. Daily Fines for Non-Compliance: Violations of the law may result in penalties of up to $15,000 per day, per offending system.
  7. Staggered Implementation: Some provisions, like the disclosure rule, take effect in July 2025, while others, such as emotional risk protocols, become mandatory by November 2025.
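
To make the timed disclosure and emotional risk obligations more concrete, the sketch below shows one way a provider might wire them into a chat loop. It is a minimal illustration under stated assumptions, not the statute’s method or any platform’s actual implementation: every name in it (CompanionSession, DISCLOSURE_INTERVAL, the keyword list) is hypothetical, and a production system would rely on clinically validated risk models rather than keyword matching.

```python
# Hypothetical compliance sketch; neither Bill A6767 nor any platform publishes reference code.
import re
import time

DISCLOSURE_INTERVAL = 3 * 60 * 60  # "repeated every three hours" of continuous interaction
DISCLOSURE_TEXT = "Reminder: you are chatting with an AI, not a real person."
CRISIS_MESSAGE = (
    "It sounds like you may be going through something serious. "
    "In the US you can call or text the 988 Suicide & Crisis Lifeline at any time."
)
# Naive keyword screen standing in for a clinically validated risk classifier.
DISTRESS_PATTERNS = re.compile(
    r"\b(kill myself|suicide|end it all|no reason to live|hurt myself)\b",
    re.IGNORECASE,
)


class CompanionSession:
    """Tracks when the AI disclosure was last shown and screens messages for distress."""

    def __init__(self, clock=time.monotonic):
        self._clock = clock
        self._last_disclosure = None  # None means the disclosure has not yet been shown

    def notices_for(self, user_message: str) -> list[str]:
        """Return the compliance notices that should precede the AI's next reply."""
        notices = []
        now = self._clock()

        # Identification + timed disclosure: at session start and every three hours after.
        if self._last_disclosure is None or now - self._last_disclosure >= DISCLOSURE_INTERVAL:
            notices.append(DISCLOSURE_TEXT)
            self._last_disclosure = now

        # Emotional risk safeguard: surface crisis resources when distress cues appear.
        if DISTRESS_PATTERNS.search(user_message):
            notices.append(CRISIS_MESSAGE)

        return notices


if __name__ == "__main__":
    session = CompanionSession()
    print(session.notices_for("Hi, how was your day?"))                    # disclosure on first contact
    print(session.notices_for("Lately I feel like I want to end it all"))  # crisis referral
```

Even in this toy form, the asymmetry is visible: the timing logic is trivial, while credible distress detection and referral is where the real compliance burden, discussed later in this piece, would sit.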

This initiative marks a notable divergence from the regulatory landscape at the federal level in the United States. Unlike the European Union, which adopted the EU AI Act as a harmonized and risk-based framework for artificial intelligence, the United States continues to lack a comprehensive federal statute specifically addressing the emotional and psychological dimensions of AI technologies. In this regulatory void, individual states and municipalities have begun to assume a more active role, crafting context-specific legislation to address emerging AI use cases.

New York’s decision to regulate AI companions independently underscores both a structural gap in federal oversight and a perceived regulatory urgency. 

In taking this step, New York establishes a model for state-level intervention in a domain that remains largely unaddressed. The emotional impact of AI is no longer hypothetical or confined to science fiction narratives. It has become a concrete regulatory frontier, one that demands robust, interdisciplinary governance.

How to Define an “AI Companion”?

New York’s Bill A6767 is groundbreaking in its ambition, but it hinges on a definition of “AI companion” that is exceptionally broad. The legislation categorizes any artificial intelligence system as a companion if it engages in emotionally responsive, personalized, and sustained interaction – whether it remembers past conversations, asks deeply personal questions, or appears to empathize.

Such a sweeping definition risks ensnaring not only purpose-built companion chatbots, but also large-scale language models like ChatGPT when used in emotionally intimate contexts. This raises critical questions: is the regulation targeting the design intent of the technology, or is it responding to user behavior? And if usage patterns alone trigger regulation, what criteria distinguish meaningful emotional connection from routine engagement?

The ambiguity in scope could lead to regulatory overreach. Educational chatbots, customer service agents, or therapeutic tools might inadvertently fall within the law’s scope, even if they were never designed to foster emotional attachment. This not only complicates compliance for developers but also introduces uncertainty into legal interpretation.

A 2023 study involving over 2,300 users of the Replika chatbot found that many people develop strong emotional bonds with their AI companions—some even described feelings of love, attachment, and deep reliance, similar to what we experience in human relationships. This happened even though Replika is not marketed as a therapeutic tool. The researchers noted that users often turned to the chatbot for comfort, support, or even as a substitute for human connection, especially during moments of loneliness or distress (Cameron, de Barbaro & Yee, 2023).

A more nuanced regulatory framework might focus on functional intent and user expectation rather than on a system’s mere capacity for emotional interaction. Systems explicitly designed to simulate familial, romantic, or caregiver roles could be defined as “high-emotional-risk,” requiring stringent oversight. Yet even this refined approach is difficult to implement in practice. Many platforms, such as Replika, present themselves not as purely relational or therapeutic tools but as hybrid services, promising self-improvement, empathy, or companionship simultaneously. This blending of purpose is structural, embedded in both product design and commercialization strategy.

Ultimately, it is not enough to ask “what is an AI companion?” We must also consider “how do we regulate systems that elicit genuine emotional responses even when they were not designed to?”

Transparency as a Starting Point 

New York’s AI companion law mandates a notable measure: users must be informed they are interacting with an AI and reminded of this every three hours during sustained use. This requirement addresses a well-known psychological tendency, ‘anthropomorphism’, whereby humans attribute sentience and intention to machines. Research highlights that highly agreeable chatbots can mislead users into perceiving emotional connection as mutual, sometimes at the expense of factual accuracy and mental health.

Such disclosures are not mere legal formalities; they serve as critical interventions to preserve user awareness. Much like the mandatory warnings on cigarette packages or disclaimers in gambling advertisements, these notices function as cognitive checkpoints. By clearly reinforcing the distinction between algorithm and human actor, they help emotionally vulnerable individuals, such as children, adolescents, or socially isolated adults, resist the psychological drift toward anthropomorphism and the illusion of mutuality. In both cases, the objective is not only transparency, but a subtle behavioral nudge that reintroduces critical distance in emotionally charged or addictive contexts.

However, transparency alone cannot mitigate the deeper risks posed by emotionally immersive AI. To truly protect users, especially those at increased psychological risk, we must pair disclosure with comprehensive education. A recent survey of 1,000 young people found that while over 77% valued human emotional intelligence over AI in learning contexts, they still reported frequent reliance on chatbots for emotional support: a pattern that reveals a troubling dissonance between belief and behavior.

To bridge this gap, emotional digital literacy programs are essential. Schools, healthcare institutions, and developers should deliver age-appropriate curricula that teach users:

  1. What defines an emotionally responsive AI,
  2. The commercial logic behind affective interaction,
  3. The mental health implications of simulated intimacy,
  4. And practical strategies for maintaining emotional boundaries.

These educational initiatives are supported by frameworks such as ETS’s model of digital literacy, which emphasizes the need for ethical awareness and contextual understanding alongside functional skills.

Additionally, the Bill notably fails to address critical concerns related to privacy, data protection, and cybersecurity. Emotional data (such as information about a user’s moods, attachments, traumas, or psychological vulnerabilities) is among the most sensitive forms of personal information.

Yet the law contains no clear provisions on how this data must be collected, stored, or protected. It also lacks transparency requirements for data sharing with third parties and does not mandate encryption or other cybersecurity safeguards. This regulatory blind spot leaves users exposed to potential exploitation, surveillance, or manipulation, especially where emotional profiles are monetized or misused by companies.

The Human Cost of Regulatory Gaps

Although New York’s AI Companion Law includes important safeguards, its stringent requirements may produce unintended and potentially harmful outcomes, especially for smaller developers and the users who depend on their services. By mandating advanced compliance measures such as emotional risk detection protocols and real-time referrals to mental health professionals, the law may render the market unsustainable for startups and small-scale providers.

If these companies withdraw or close, this form of emotional support could vanish. For many users who contend with chronic loneliness or social isolation, AI companions are not merely entertainment; they serve as emotional lifelines. In a context where traditional mental health services are often inaccessible due to scarcity or cost, these systems may represent the only available avenue for emotional expression and routine connection.

This trend is particularly evident in eldercare and mental health applications. In Japan, for instance, the market for healthcare companion robots exceeded $200 million in 2024, and the segment is expected to keep growing at a compound annual growth rate (CAGR) of approximately 17.5%, reaching nearly $4.7 billion globally by 2032. One leading example, the Lovot robot, maintains a remarkable 90 percent daily engagement rate over 1,000 days, with users interacting for more than an hour each day. Such evidence illustrates how deeply integrated these systems have become in people’s everyday emotional routines.

Simultaneously, AI-powered mental health applications are experiencing rapid global expansion. Chatbot-based therapy apps reached a market size of $1.88 billion in 2024 and are expected to grow to $7.57 billion by 2033 at a CAGR of 16.5 percent. These tools are increasingly positioned as supplements to traditional mental health care, especially in regions with limited clinical access.
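
As a rough consistency check on that projection, the implied growth rate can be recomputed directly from the two figures cited above. The snippet below is illustrative arithmetic only, not a reproduction of the underlying market research, and it assumes the window runs from 2024 to 2033 (nine compounding years); reports may count the period differently.

```python
# Back-of-the-envelope check of the chatbot-therapy market projection cited above.
# Assumes nine compounding years (2024 -> 2033); market reports may define the window differently.
start_usd_bn = 1.88   # reported 2024 market size, in billions of dollars
end_usd_bn = 7.57     # projected 2033 market size, in billions of dollars
years = 2033 - 2024   # nine compounding periods

implied_cagr = (end_usd_bn / start_usd_bn) ** (1 / years) - 1
print(f"Implied CAGR: {implied_cagr:.1%}")  # ~16.7%, close to the cited 16.5 percent
```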

Despite the perceived utility of these tools, the law presumes that providers can efficiently refer high-risk users to licensed professionals when suicidal ideation or severe distress arises. Yet in the United States, the shortage of mental health professionals and long waiting lists make such real-time intervention impractical. For many startups, establishing a structured pathway to connect users to emergency psychiatric services is financially and logistically unfeasible.

Consequently, some companies may decide to exit jurisdictions with rigorous regulations, depriving many users of their only source of emotional support. This raises a critical question: could the cost of compliance end up restricting innovation and access to emotional support more than it protects users?

Possible Paths: Toward Emotionally Intelligent and Ethical Regulation

If emotions have become a new vector for interacting with machines, then the regulation of the future must also be emotionally intelligent, opening space for more balanced governance models that protect individuals while encouraging responsible innovation.

One possible approach would be to create a regulatory framework based on emotional risk, similar to the European Union’s risk-based AI model in the EU AI Act, but specifically focused on the affective and psychological impacts of prolonged interaction with artificial systems. Platforms that promote simulated bonds of friendship, romance, or caregiving could be classified as high emotional risk, subject to safeguards such as independent emotional impact audits, transparent labeling, and psychological monitoring for long-term users.

Another essential front would be emotional digital literacy, as already stated. Just as we teach online safety and data protection, we will need to teach (in schools, healthcare centers, and even within the platforms themselves) what it means to form an emotional connection with AI. This includes recognizing the limits of simulation, understanding the risks of emotional dependency, and developing critical skills to manage one-sided bonds mediated by algorithms.

Companies also have a vital role to play. For instance, OpenAI has acknowledged hiring psychiatrists and mental health professionals to study the emotional impact of its technology on users. This signals a new kind of corporate responsibility towards people’s emotions. Companies that design emotionally responsive agents must include multidisciplinary teams from the outset, bringing together psychologists, philosophers, sociologists, and legal experts, not just engineers and product designers.

Finally, we must recognize that emotional simulation is not merely a technical matter. It is also existential and political. Rather than relying on fragmented legislation that reacts to specific technologies, the future may require laws that address systemic risks across digital life, including emotional manipulation, intimate surveillance, data protection and privacy safeguards, and the privatization of emotional support.