This article explores how intimate AI platforms reflect a transhumanist vision of technology as more than a tool: a “savior” meant to help humans face everything from loneliness to mortality. We’ll also examine what this cultural shift means for the future of regulation and governance in an era where humanity increasingly merges with the machine.
1. The Rise of AI Companions: More Than Just Apps
What does it mean when people marry their AI companions or treat ChatGPT as a daily confidant, therapist, and even spiritual guide? Across forums and social media, stories abound: a young man throws a wedding for his Replika chatbot, another writes that his AI friend “saved his life” during depression, while countless others describe consulting AI like a digital tarot, seeking comfort, advice, and meaning.
These aren’t fringe cases anymore. AI companions are downloaded by millions and are becoming a cultural phenomenon, especially among younger generations. They are marketed as “safe spaces”: always available, endlessly supportive. In a world where loneliness is rising as a “global epidemic,” these technologies don’t just meet demand; they shape desire.
One connection I had never fully grasped is how intimacy AIs (chatbots, virtual lovers, and emotionally responsive agents) fit within a much larger transhumanist worldview. This became clear after reading philosopher–theologian Michael Baggot, who argues that transhumanism acts as a kind of secular eschatology: a narrative that mimics religious visions of salvation, but relocates redemption into this world through technology.
2. Transhumanism: A Secular Vision of Salvation
In his essay The Daring and Disappointing Dreams of Transhumanism’s Secular Eschatology (available at https://archive.stpaulcenter.com/06-nv-22-3-baggot/), Baggot shows that, like Christianity’s promise of a New Heaven and New Earth, transhumanism offers a vision of overcoming suffering, inequality, and even mortality. This would happen not through divine grace, but through science and engineering. In this light, intimacy AIs are not merely consumer products; they are pieces of a redemptive technological arc: tools meant to soothe loneliness and anxiety today, while symbolizing a broader hope that machines can ultimately deliver us from the constraints of our biology.
Baggot describes transhumanism as a form of secular eschatology, meaning a belief in an ultimate end and a kind of technological redemption. This philosophy assumes that many aspects of the human condition, such as suffering, death, and limitations, are merely technical problems that can be solved. This belief makes it possible to imagine that emotions or identities could be remodeled or enhanced through technology.
He critiques the ambitious promises of transhumanism, which include claims of total healing and salvation. He argues that certain aspects of being human, such as suffering, limitation, interior witness, and being for others, may ultimately remain beyond technological control. He also points out that even among transhumanist thinkers, not all believe that everything about human life can be reduced to algorithms or quantified.
Baggot also explores how this philosophy risks undermining a deeper sense of moral responsibility. If emotional design and affective interaction are merely machine outputs, then questions arise about moral agency, authenticity, and the soul, and about what is truly genuine in human life.
In his view, this way of thinking overlooks the fact that human experience goes beyond cognition and sensory perception. There is a personal core, or soul, that cannot be fully captured, measured, or manipulated by technology. The transhumanist mindset, with its promise to overcome physical limitations and even death through technology, reveals more about the spiritual longings of contemporary culture than about an achievable reality. It reflects a kind of secular faith in machines (and the science behind them) as a ‘redeemer’ or a type of ‘inexorable progress’.
Sam Altman’s recent reflections on the future of artificial intelligence echo many of the themes central to transhumanist philosophy. In his blog posts, Altman frames AI not merely as a tool but as a transformative force capable of reshaping society and even redefining what it means to be human. This narrative resonates strongly with the idea of technology as a secular path to redemption, offering solutions not only to technical challenges but also to existential problems. As he himself wrote in his blog: “Successful people create companies. More successful people create countries. The most successful people create religions… The most successful founders do not set out to create companies. They are on a mission to create something closer to a religion, and at some point it turns out that forming a company is the easiest way to do so.”
Altman’s perspective also illustrates how transhumanist ideals move beyond academic debate and become embedded in the discourse of industry leaders. Whereas thinkers like Kurzweil and Bostrom articulate visions of radical human enhancement, Altman translates this worldview into the language of innovation, investment, and cultural aspiration. This could exemplify how Silicon Valley actors adopt a quasi-eschatological narrative, positioning AI as both a driver of economic growth and a vehicle of human transcendence. This type of vision serves as a bridge between transhumanist philosophy and the concrete development of intimate AI technologies, which are marketed not only as products of convenience but as instruments of psychological and even spiritual fulfillment.
3. Transhumanism’s Key Thinkers and Their Vision
For instance, Ray Kurzweil, perhaps the most famous philosopher of transhumanism, predicts that by 2045 we will reach a technological singularity: an era in which machines outpace human intelligence so dramatically that biological humans will need to merge with AI to stay relevant. Kurzweil argues for “longevity escape velocity”: the point where medical and biotechnological advances increase life expectancy faster than we age, essentially stopping aging. He envisions mind uploading, where consciousness is transferred into non-biological substrates, as the ultimate route to immortality.
Similarly, Nick Bostrom, in his seminal Transhumanist FAQ, articulates a vision of humanity not as a fixed species but as a project: one that should be enhanced, transcended, and redesigned. Bostrom’s focus on existential risk frames AI not only as a tool but as a civilization-shaping force: we must develop it carefully or risk annihilation. Meanwhile, Max More’s philosophy of extropy emphasizes open-ended growth, perpetual improvement, and overcoming all natural limitations, presenting technological transformation as humanity’s ‘destiny’.
If we think about AI companions through the lens of these philosophers, they are no longer neutral gadgets; they embody a worldview where technology promises not just convenience but emotional fulfillment, psychological healing, and ultimate liberation from human finitude. The result is that technologies marketed as harmless tools of self-care become carriers of an eschatological dream, in which technology is ‘messianic’ and intimacy with machines signals a new future for society.
4. The Fulfilled Promises of Transhumanism
It is important to highlight that transhumanism should not be seen as a negative philosophy; it offers meaningful ideas and innovations that can benefit society.
In fact, it has inspired some of the most significant technological and scientific advances of our time. By envisioning a future where disease, disability, and even aging might be overcome, transhumanist thinkers have catalyzed innovation in areas such as biotechnology, AI-driven healthcare, brain–computer interfaces, and regenerative medicine. Technologies first imagined in this movement already improve lives today. Even its more speculative goals, like extending life expectancy or enhancing cognitive abilities, encourage scientists and policymakers to imagine what a fair and inclusive technological future could look like. In this sense, transhumanism provides a bold and optimistic vision, pushing humanity to expand the boundaries of what is possible.
At the same time, this focus on solving “technical” problems risks overlooking deeper dimensions of what it means to be human. Suffering, mortality, and emotional complexity are not merely malfunctions to be engineered away; they are integral to the human experience and often carry meaning beyond physical limitations.
While transhumanism excels at proposing solutions for measurable challenges, it struggles to address metaphysical questions about identity, authenticity, and the soul, which cannot be fully captured by algorithms or neural scans. The danger is not in pursuing progress but in believing that technology alone can redeem humanity, flattening our complexity into data points and engineering goals.
5. Why This Matters for Policy and Regulation
Thus, a balanced approach to regulation and policy would celebrate the transformative potential of these innovations while recognizing the limits of a purely technical vision of human flourishing.
For this reason, it is crucial for regulators and policymakers to remain attentive to the cultural and philosophical currents that shape a given historical moment. Technology does not emerge in a vacuum. It reflects and reinforces worldviews, aspirations, and even unspoken beliefs about what it means to be human. Addressing AI companions and transhumanist technologies, therefore, requires more than technical expertise or risk assessments; it demands thoughtful engagement with the stories and values that guide society’s expectations of technology.
This is why law and governance frameworks must evolve to provide not only safety and accountability but also safeguards for the deeper dimensions of personhood that machines (at least for now) cannot capture. Crafting effective regulation is not about rejecting innovation but about giving individuals the freedom to coexist with intelligent systems while ensuring protection of emotional integrity, dignity, and autonomy.
Striking this balance will require policies that are as visionary as the technologies they govern, integrating ethical reflection, cultural understanding, and philosophical depth into the very structure of regulation.
This is not simply a technical task but an ethical responsibility, as lawmakers inevitably shape the moral boundaries of society through the norms they establish. Law is never neutral; it encodes and advances values, whether implicitly or explicitly. In regulating AI intimacy and transhumanist aspirations, legislators must therefore confront profound questions about what humanity chooses to preserve, transform, or transcend.
In a context where technology is increasingly perceived as an architect of human improvement, this debate extends beyond conventional regulatory considerations. It raises fundamental questions about the future of personhood, autonomy, and the ethical foundations of a society that seeks to integrate humans and machines.

