What happens to human bonds when artificial intelligence stops being merely a tool and starts occupying an intimate place in our emotional, educational, and professional lives? This is the underlying question running through the interview you are about to read, a deep, unsettling, and urgently timely conversation with Rachida Bouaiss.
Rachida is a strategist, storyteller, and an outspoken advocate for responsible AI. Known for her ability to connect technology, culture, subjectivity, and power, she approaches AI not through abstractions or techno-utopian promises, but through lived, embodied experience, including her role as a mother.
In this interview, Rachida shares reflections that also stem from an intimate experiment: closely observing her six-year-old son’s relationship with ChatGPT. Raising a child in a world where AI is already part of everyday life is no longer a thought experiment, but a reality that raises uncomfortable questions about attachment, identity, autonomy, friction, and dependency.
From the erosion of human bonds to children’s emotional development, teenage identity formation, the future workforce, and the survival of brands in an AI-mediated world, this interview offers no easy answers. Instead, it invites friction. And perhaps that is precisely what we need in an age increasingly seduced by mirrors that do nothing but reassure us.
1. On the erosion of human bonds in the age of AI
[Question] If AI is increasingly present in our emotional lives, offering companionship, advice, and understanding, what happens to our relationships with real people? At what point does digital convenience begin to weaken the bonds that traditionally hold families, friendships, and communities together?
[Answer] I do not think that AI is responsible for a deterioration of human bonds. Humans are quite capable, on their own, of damaging their relationships and mistreating the bond, whether it is friendly, familial, romantic, or professional. AI is, in my view, more a symptom of the difficulty of creating bonds, of confronting a frictional otherness without taking the risk of conflict. I believe there is in human beings a desire for fusion, for absoluteness, for harmony. But this desire often collides with the roughness of the other, who is not me. There is therefore an inherent mismatch. Wanting fusion means denying the other, reifying them, dominating them. It is difficult to form a bond with someone who is not me, and thus the strangeness of the “non-me” resists me.
This is why AI is so convenient for us. It frees us from the necessity of friction and leads us fairly quickly toward a zone of harmony, a space where one can feel fully oneself, authorized to express oneself, to say who one thinks one is or would like to be, but does not dare to say or assume in front of another… or else as a refuge after having experienced the refusal to be accepted for who one is.
In a way, the echo chambers of social media are a form of prefiguration of this tendency to gaze into a mirror that does not diffract but reassures. Even when I engage in friction with AI (which I do quite willingly), I always come out reinforced in my convictions. It is as if the mirror never ceases to confirm to me that, like the witch in the fairy tale, “I am the most beautiful.” As in the tale, the mirror that reveals that another is more beautiful becomes unbearable, to the point of triggering a death drive in the witch. I believe that AI frees us from this confrontation with our limits and our imperfections.
At this stage, I do not know whether this is a good or a bad thing. But if I extend the parallel with the tale of Snow White, then perhaps this mirror that always satisfies me can free me from the impulse to dominate or destroy the other who would be unbearably different from me (more of this or less of that…).
2. On children and emotional formation
[Question] What might change in a child’s emotional development when comfort, reassurance, and guidance increasingly come from a machine rather than from another human being? Are we helping children become more independent, or are we weakening their capacity for empathy, patience, and real connection?
[Answer] That is a very good question. It is a question that should be put to children themselves. I can only rely on my own experience. I grew up in front of the television. It was a real window onto the world and a space of escape. Without television, I think I would have been different. An entire part of my personality and my “pop culture” references comes from there. They are part of who I am, and I am proud of them, because they say something about who I am and who I like to be. What matters is that television culture is a shared culture. We all watched 21 Jump Street or The A-Team, The Fresh Prince of Bel-Air or Who’s the Boss?
What happens when the universe built through interaction with a machine is one that proceeds from the self, and encounters only a shared ground already “digested” by the machine, a machine that does not spontaneously disclose its references and gives the impression that it is the AI’s own voice responding to me, rather than a mix of Tony Micelli and B.A. Baracus?
This raises the question of the status of AI within a child’s cultural framework. Is it a kind of psycho-emotional ping-pong box? A cultural receptacle that one must learn how to make speak? An extension of the self that allows ideas and desires to be materialized and given form through words, images, and even applications?
I am certain that AI connects, but to what? To whom? And with what level of lucidity and awareness, in a child, that this “thing” to which AI connects is not the AI itself? Does it connect to the self? To an undifferentiated collective?
We will have to ask those most directly concerned, the children themselves, because only they have the answer.
3. On teenagers and identity formation
[Question] Teenage years are a time of confusion, exploration, and self-discovery. If AI becomes a constant companion, advisor, and mirror, how might this shape the way young people see themselves? Could their sense of identity be influenced more by algorithms than by real experiences, relationships, and challenges?
[Answer] According to experts in adolescents’ socio-emotional and identity development, a great deal is at stake during this decisive phase in an individual’s development. First and foremost, there is the process of individuation, which actually begins slightly before adolescence, during the so-called “tween” phase, around the ages of 9 or 10. This is the moment when one starts to distance oneself from one’s parents, and this distancing is both healthy and necessary: it is the famous “umbilical cord” that one truly begins to cut. This process unfolds against a backdrop of major bodily and physical changes, with hormonal surges to boot. If parents, and the other adults in contact with the child, starting with teachers, accompany it in a well-attuned way, they can guide this emancipation without stifling it. Such support might even prevent the famous “adolescent crisis,” which may simply be the result of an overly repressive response to this drive for autonomy.
Traditionally, the teenager “migrates” away from the family sphere (to which they nevertheless remain emotionally attached and which ideally continues to serve as a secure base of attachment, like a home port).
What happens when a third party enters the equation, and that third party is non-human? This is not at all the same as escaping into films or books, because here there is interaction, and therefore necessarily influence in both directions.
I do think that interaction with AI inevitably impacts this process of identity construction, acting as an amplifying or distorting mirror, as an influence whose benevolence will depend on the very conditions of its programming. I have the impression that ChatGPT aims to maximize in-app retention and capture attention, in continuity with the logics already at work on social media. There may be a structural incompatibility here with the adolescent’s need for emancipation. Emancipation does not mean taking refuge in, or becoming captive to, a given space, whether human or non-human. Teenagers must be able to explore without falling into a form of dependency, whether affective or cognitive.
There are also different temperaments. Some adolescents are taciturn, shy, discreet. Does intensive interaction with AI risk accentuating these character traits? But if they are indeed character traits, should they not be respected, rather than assuming that one must necessarily make the effort to go outside oneself toward others?
What AI ultimately raises is the question of dependence on a third party for self-construction, whether that third party is human or non-human. Do I simply need an otherness, whatever it may be? Or does individuation, the development of my sovereign self, necessarily pass through a human Other? It is dizzying, as ChatGPT would say…!
Joking aside, it makes me want to summon the ghost of Pascal, perhaps via an AI no less, to tell him: we are there now, at that moment when humanity can overcome its old drama, the inability to sit quietly alone in a room. It seems we can now do so… alone with our AI?
4. On the generation entering the workforce
[Question] As young people who have grown up with AI enter the job market, how might their ability to collaborate, lead, resolve conflicts, and build trust be affected? Are companies ready to support and integrate a generation that may be digitally skilled, but emotionally and socially different from previous ones?
[Answer] I do not have the impression that companies are approaching this issue with that perspective in mind. They seem very focused on the short term: how can AI increase competitiveness? How can it help boost performance, improve efficiency, and so on? We are very much in the here and now of “human resources” being optimized or reconfigured, or even rationalized, in the era of generalized AI.
I believe this is the most complex question among those raised so far. Are companies even aware of the magnitude of the revolution underway? I do not think so. Most of them still view AI as a “technology,” not as an agent capable of shaping the mindset and relational model of a future employee.
5. On brands, platforms and human bonding in an AI-mediated world
[Question] Today, many of us spend hours inside AI platforms, constantly generating prompts, searching for answers, support and creativity in the same digital space. If this becomes the new centre of our daily attention, how will companies and brands reach people, build meaningful connections and create a sense of belonging? In a world where the primary relationship is no longer between consumer and brand, but between human and AI, what will “bonding” even mean for businesses?
[Answer] AI is an interface, a gateway to other worlds, other resources. A remarkably obliging sorting station. But contrary to what I thought this summer, when I was passionately exploring GEO (Generative Engine Optimisation) and we even forged with ChatGPT the concept of GIO (Generative Influence Optimisation), I no longer believe that AI opens up a radically different space for influence.
The more I dug into this question, the more I was struck by the poverty of this potential referencing. The rules for existing within it remain improved SEO rules… with paid placement coming faster than we think. What could have passed for a promise of existence through the singularity of a thought turns out to be nothing more than next-generation referencing.
But, and this is where it gets interesting, AI still radically changes the game for brands.
It acts like a chemical developer in photography. It makes visible what was already true but could be masked by marketing: 90% of brands have no singular reason to exist. In an AI-mediated world, there’s no room left for simulacrum.
Paying to be referenced doesn’t exempt you from having a story to tell, values and a “raison d’être” to convey. But for the first time, an advertising space is capable of responding to a brand with “total statistical objectivity”: your positioning is either brilliant or shit. AI will filter out hollow stories. The vampire doesn’t drink Fanta thinking it’s blood.
The power dynamic inverts brutally.
Before, brands paid to pollute people’s attention space. Now, they must be worthy of being summoned by AI. People no longer go to brand websites, no longer see their Instagram ads. AI becomes the sole point of contact. So either the brand has something intrinsically different, singular, necessary, or it disappears from the radar.
AI needs authentic singularity like a vampire needs fresh blood. And paying a vampire isn’t enough if you don’t feed it what it needs.
The brands that will survive in the AI economy are those with real substance, not just consulting “purpose washing.” Those that embrace friction rather than smooth it away. Those that dare to be incarnated rather than algorithmic.
Brands must serve quality nectar.

