According to a groundbreaking study by researchers from Stanford and Google DeepMind, a two-hour interview with an AI is enough for it to create a virtual replica of you, one that embodies your values and preferences with stunning accuracy.
In the study, researchers interviewed 1,000 participants from diverse social, gender, and educational backgrounds. They claim to have successfully created AI replicas of the participants' personalities, achieving 85% similarity in follow-up personality and behavioral tests.
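To make the idea concrete, here is a minimal sketch, assuming an OpenAI-style chat API: the interview transcript conditions a language model, which then answers survey items "as" the participant, and agreement with the person's real answers is scored. The model name, prompt wording, and plain-agreement scoring below are my assumptions for illustration, not the researchers' actual pipeline.

```python
# Illustrative sketch only: condition an LLM on an interview transcript so it
# answers survey questions "as" the interviewee, then score agreement with the
# person's real answers. Model, prompts, and scoring are assumptions.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment


def replica_answer(transcript: str, question: str, options: list[str]) -> str:
    """Ask the 'replica' (LLM + transcript) to answer one survey item."""
    system = (
        "You are simulating the person whose interview transcript follows. "
        "Answer survey questions exactly as they would, picking one option.\n\n"
        f"Interview transcript:\n{transcript}"
    )
    user = f"{question}\nOptions: {', '.join(options)}\nReply with one option only."
    resp = client.chat.completions.create(
        model="gpt-4o",  # assumed model choice, not the study's setup
        messages=[
            {"role": "system", "content": system},
            {"role": "user", "content": user},
        ],
    )
    return resp.choices[0].message.content.strip()


def agreement(replica_answers: list[str], real_answers: list[str]) -> float:
    """Fraction of survey items where the replica matches the real person."""
    matches = sum(r == p for r, p in zip(replica_answers, real_answers))
    return matches / len(real_answers)
```

Note that the study reportedly normalizes accuracy against how consistently participants reproduce their own answers when retested later, so the raw agreement score above is a simplification.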
One of the researchers, Joon Sung Park, made a bold statement:
“If you can have a bunch of small ‘yous’ running around and actually making the decisions that you would have made—that, I think, is ultimately the future.”
And interviews aren’t the only way to create digital twins.
Companies like Tavus are exploring AI models that replicate users' personalities by analyzing data such as emails, though this approach usually requires much larger datasets.
The vision of these researchers and AI companies?
A future where AI clones, or digital twins, could handle everyday tasks for you, saving time and effort. Your AI clone might do your grocery shopping, respond to friends, schedule meetings, and even accept invitations, all while reflecting your personality and preferences. Perhaps even your "subjectivity"?
This could revolutionize how we interact with technology, essentially giving everyone their own personal chatbot or digital twin.
But here’s the question: is it ethical to clone real humans with AI and let these clones represent us?
As an AI lawyer, I can’t help but think about the risks.
Could these agents be weaponized to create harmful deepfakes or manipulate public discourse? How do we ensure informed consent and protect against misuse of such powerful technology?
Do you want to know more about AI cloning and digital twins? Check out the article I wrote recently on the topic: https://lnkd.in/e6rJcV99
More details about the study can be found here: https://lnkd.in/eb-7Z2TM
Images generated by DALL·E based on the author's prompt.