Inside the world of AI dating simulations and what they mean for human freedom, consent, and our emotional lives
This article explores emerging technologies in the growing field of algorithmic intimacy — systems that simulate, mediate, or even preempt human relationships through artificial intelligence. What once seemed speculative fiction is now entering the market. From startups like BlinDate, developing AI clones to simulate romantic relationships before users ever meet in real life, to broader social transformations, we are witnessing the quiet redesign of how affection, connection, and choice are structured.
Some of these developments mirror, or were perhaps inspired by, the narrative logic of Hang the DJ, a widely discussed Black Mirror episode that imagines a matchmaking system where digital versions of individuals are tested across thousands of simulated scenarios to determine long-term compatibility. While fiction tends to dramatize, the underlying questions it raises are increasingly urgent: What happens when love is optimized? What does consent mean in simulated emotional experiences? And how is human autonomy redefined when AI systems begin to shape our most intimate decisions?
In this article, we examine how these technologies are reshaping not only the landscape of dating, but also the legal, ethical, and psychological frameworks we use to understand freedom, responsibility, and ourselves.
The illusion of choice: determinism vs digital destiny
Technology is very convenient. It tells us when to get up for work, suggests a new podcast to listen to, reminds us to recharge our toothbrush, and even proposes a great match for our love life. But at what point does that convenience shift from making our lives easier to fostering a blind trust that technology really has our wellbeing as its end goal? And at what point do we start to question that blind trust?
In the Black Mirror episode Hang the DJ, the two main characters, Amy and Frank, use an AI matchmaking app called The System to help them find their ultimate match. From a psychoanalytic perspective, the episode invites a deeper reflection on the illusion of agency in an era dominated by algorithmic decision-making. Even before the big plot twist near the end, Amy and Frank repeatedly defer to The System, even though each love match comes with an expiration date, ranging from a few years down to a mere couple of hours. Sometimes they comply with resignation, other times with a tinge of anxiety. But beneath this sea of emotions and considerations, their choices are already set, framed by an external authority that presents itself as neutral, benevolent, and rational.
In psychoanalytic terms, this resembles what Lacan would call the “Big Other”: the symbolic authority that structures our desires and beliefs. The System in Hang the DJ is not just a dating tool; it becomes the superegoic voice shaping Amy and Frank’s sense of what is allowed, possible, and real. Their eventual “rebellion” is less an act of free will and more an act of jouissance, a transgressive escape from the scripted logic imposed upon them.
Ironically, the twist, which reveals that their rebellion was itself part of a simulation, closes the loop. Even resistance was anticipated. What the characters perceived as “choice” was merely one outcome within a controlled probabilistic system.
Relationship simulations: outsourcing our emotional lives
In the plot twist, the writers reveal that Amy and Frank were not real people at all, but digital clones: simulations running inside a system designed to test their compatibility. The versions of them that we as viewers became emotionally invested in were actually avatars within a probabilistic model, iterating across 1,000 scenarios until a statistical confidence threshold was reached. The real Amy and Frank, who lived outside the simulation, only saw the matchmaker’s outcome: a 99.8% compatibility score. From their perspective, the emotional drama and rebellion we witnessed never occurred.
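The episode never explains how The System arrives at that number, but the mechanics it sketches (repeated paired simulations, a stopping rule, an aggregate score) read like a Monte Carlo estimate. The toy sketch below illustrates the idea under invented assumptions: the single “openness” trait, the randomized scenario outcome, and the stopping rule are all placeholders, not a description of any real matchmaking system.

```python
import random

def simulate_scenario(a: dict, b: dict) -> bool:
    """One simulated 'relationship': returns True if it ends in a
    stable match. Here the outcome is a coin flip weighted by a toy
    affinity measure; a real system would run a full behavioural
    simulation instead."""
    affinity = 1.0 - abs(a["openness"] - b["openness"])
    return random.random() < affinity

def compatibility_score(a: dict, b: dict,
                        max_runs: int = 1000, margin: float = 0.01) -> float:
    """Estimate compatibility as the fraction of simulated scenarios
    that succeed, stopping early once the estimate has stabilised."""
    successes = 0
    for run in range(1, max_runs + 1):
        successes += simulate_scenario(a, b)
        estimate = successes / run
        # Stop once the standard error of the running estimate drops
        # below the requested margin: a crude confidence threshold,
        # capped at max_runs iterations.
        std_err = (estimate * (1 - estimate) / run) ** 0.5
        if run >= 30 and std_err < margin:
            break
    return estimate

amy, frank = {"openness": 0.82}, {"openness": 0.80}
print(f"Compatibility: {compatibility_score(amy, frank):.1%}")
```

The point of the sketch is structural: the real users would only ever see the final percentage, never the thousand simulated lives that produced it.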
And yet, the emotional stakes for the simulated versions were real to them. They loved, they hurt, they questioned authority, they resisted. This raises an unsettling philosophical and legal question: what happens when beings that feel real, or even are real in some meaningful way, are treated as disposable instruments for someone else’s benefit?
What’s perhaps more disturbing is that this is not even near-future science fiction. A few weeks ago, the AI legal expert Ana Catarina De Alencar interviewed the founders of BlinDate, a startup currently developing a dating app in which users delegate the initial stages of romantic interaction to their AI clone: a digital persona trained on personal data, communication style, preferences, and inferred psychological traits. These clones engage in thousands of simulated relationship scenarios with others, generating a compatibility assessment intended to guide the user’s decisions in the real world. The project seeks to minimize the emotional labor and uncertainty traditionally associated with dating by offering what the founders describe as a “pre-filtered” emotional landscape. (More about this here.)
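BlinDate has not published its architecture, so any concrete picture of such a clone is necessarily speculative. Still, the description above implies a structured bundle of observed data and inferred traits. A hypothetical sketch, with every field name invented for illustration:

```python
from dataclasses import dataclass, field

@dataclass
class CloneProfile:
    """Hypothetical schema for a dating-app 'digital persona'.
    The real BlinDate data model is not public; every field here
    is an illustrative guess, not the startup's actual design."""
    user_id: str
    communication_style: dict[str, float]  # e.g. {"humour": 0.7, "directness": 0.4}
    stated_preferences: list[str]          # explicit likes and dislikes from onboarding
    inferred_traits: dict[str, float]      # personality scores a model estimates
    interaction_log: list[str] = field(default_factory=list)  # grows as the clone "dates"

clone = CloneProfile(
    user_id="u-123",
    communication_style={"humour": 0.7, "directness": 0.4},
    stated_preferences=["long walks", "no smokers"],
    inferred_traits={"attachment_anxiety": 0.3},
)
```

Note what such a schema makes explicit: the clone acts on inferred traits the user may never have articulated, which is exactly where the questions of epistemic and emotional authority below begin.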
While innovative, this model invites a series of complex and unresolved questions. It shifts the locus of romantic choice from the lived body to the datafied self, a digital proxy that learns, adapts, and makes affective decisions in the user’s absence. In doing so, it raises questions not of technological capacity, but of epistemic and emotional authority. Who is the true subject of the relationship: the user or the clone? Can desire be predicted without being performed? And to what extent does the user remain the author of their own emotional narrative, once part of that narrative is modeled and optimized through algorithmic proxies?
These “simulations” can include emotionally challenging or asymmetrical interactions to test resilience and compatibility over time. This approach suggests that digital selves may undergo affective experiences (rejection, attachment, conflict) that are never directly felt by the user, but nonetheless shape the system’s recommendation. This raises novel concerns regarding proxy consent and psychological integrity. Is it possible to meaningfully consent to emotional simulations one will never consciously experience, but which may influence one’s relational future?
Crucially, rather than rendering romantic life more rational, such systems introduce new layers of opacity. They offer the appearance of clarity and efficiency, yet they relocate intimacy into black-box processes that resist interpretation. While the GDPR and the EU AI Act emphasize the importance of transparency and explainability in automated decision-making, these principles become difficult to apply in emotionally immersive simulations where meaning is not only computed, but enacted.
Technologies such as these do not simply automate dating. They reconfigure the architecture of emotional life. They invite us to reflect on what happens when emotional labor is outsourced, not to another human being, but to a digital twin acting on our behalf. This does not necessarily entail dystopia, but it demands new frameworks (ethical, legal, and psychological) to account for experiences that unfold between oneself and a simulated self. As AI enters the realm of the intimate, we are no longer just designing systems. We are designing “selves”.
AI-powered dating apps and emotional simulations may seem cutting-edge, but they are part of a broader cultural shift that has been unfolding for years. In Algorithmic Intimacy, sociologist Anthony Elliott traces the rise of digital technologies that increasingly mediate romantic and emotional life, from algorithmic matchmaking to chatbots, virtual companions, and affective computing. Elliott highlights how platforms like Tinder and OkCupid already frame intimacy through data-driven prediction, while newer technologies go further, offering emotional interaction with AI-powered entities that simulate care, desire, and attention.
He argues that these developments signal the emergence of a new relational paradigm, in which love becomes deeply entangled with surveillance, prediction, and automation. Rather than replacing intimacy, these systems reconfigure it, producing what Elliott calls a form of “machine-mediated desire”: a condition where emotional experiences are no longer fully authored by individuals but are co-produced with, and through, digital infrastructures. This, he suggests, marks not the end of love, but a transformation in its logic: from the spontaneous and unpredictable to the algorithmically anticipated and engineered.
The evolution of consent, agency, liability, and transparency
This shift raises critical ethical and legal concerns around informed consent, liability, and transparency. If your AI clone enters into a simulated relationship, do you consent to its experiences? What if the simulation leads your digital self to trauma, exploitation, or emotional abuse, and that training data is later used to guide your real-life romantic decisions? Who is responsible when things go wrong? The user, the company, the algorithm?
Furthermore, how informed is our consent in these systems? Do users truly understand how their data is being used, what their AI clones are doing, or the implications of these processes on their real-world choices and autonomy? The opacity of many AI systems, particularly those involving emotional and behavioral predictions, runs counter to the principles of meaningful consent. As legal scholars and ethicists warn, consent is not just a checkbox. It is a relational, ongoing process, one that requires clarity, agency, and the ability to say “no.”
In highly immersive contexts mediated by artificial intelligence, such as those portrayed in Hang the DJ and now explored by startups developing digital clones to “test” relationships, the traditional notion of informed consent begins to fall apart. On paper, the user may click “I agree,” but in practice, there is a profound asymmetry between what is consented to and what actually takes place.
According to the General Data Protection Regulation (GDPR), consent must be freely given, specific, informed, and unambiguous. This requires that the data subject clearly understands what is being authorized, for what purposes, and with what consequences. However, when dealing with AI that simulates human relationships, processes emotional data, and reproduces digital personality traits, that clarity dissolves into multiple layers of algorithmic opacity.
More recently, the EU AI Act has reinforced the need for transparency and explainability in high-risk systems, especially those involving behavioral manipulation or assessment of personal traits. Even so, legal mechanisms remain far from providing meaningful safeguards in emotionally and cognitively immersive contexts, where the user’s own perception is shaped by the experience itself.
In practice, people interact with systems that appear neutral and functional but operate according to commercial and technical logics that lie beyond common understanding. In an app that uses your digital clone to simulate thousands of possible love stories, consent is not just about data usage. It also involves a transfer of agency. The user not only authorizes the collection of information but also delegates to an algorithmic structure the power to test, evaluate, and decide on their emotional and affective preferences.
Does this emerging dynamic not challenge the Kantian ideal that human beings must be treated as ends in themselves? The subject becomes a means: a computational input for models driven by statistics, prediction, and compatibility scores. The clone cannot consent, and the individual who does consent often lacks full understanding of what is truly being authorized.
Because the immersive quality of AI experiences directly shapes self-perception, it blurs the line between what is lived and what is simulated. The result is a form of illusory consent: formally valid, but materially hollow. Transparency, as required by European legislation, becomes an abstract term when applied to systems that cannot be explained in ways most users would understand.
Ultimately, by embracing this new model of interaction, we edge closer to a dangerous terrain: a governance regime in which consent is captured, understanding is superficial, and autonomy, although celebrated, is functionally dismantled.
The promise of AI to assist in complex decisions, such as love or emotional compatibility, cannot be separated from its ethical responsibilities. That’s why there’s an urgency not only to rethink how consent is structured but also to define what boundaries must be imposed on systems that operate at the level of subjectivity and affect. What is at stake here is not only the protection of personal data but the preservation of psychic freedom and emotional integrity in a world increasingly shaped by systems that may know us better than we know ourselves.
Conclusion: toward a new politics of intimacy and design
Hang the DJ offers a strangely hopeful ending: the real-world Amy and Frank smile at each other in a bar, perhaps emboldened by the invisible struggle of their digital doubles. But the episode also warns us of the seductive comfort of surrendering our agency in exchange for certainty. As AI systems become ever more capable of mimicking and predicting human behavior, we must ask: how much autonomy are we willing to trade for convenience, and who gets to write the code that defines what matters?
Designing AI for intimacy, emotion, or identity is not a neutral act. It encodes values, hierarchies, and assumptions. To preserve human freedom, not just as a legal right but as an existential capacity, we need to build technologies that leave room for ambiguity, resistance, and the unpredictable spark of genuine connection.
Because love, like democracy, is not something we outsource. It is something we practice in all its messy, imperfect, human complexity.
This article was written by Ana Catarina De Alencar and Lindsay Langenhoven, two enthusiasts of technophilosophy, law, and AI ethics.

