

When the latest version of ChatGPT was released in May, it included emotionally expressive voices that made the chatbot sound more human than ever before.

Listeners called the voices “flirtatious,” “convincingly human,” and “sexy.” Social media users said they “fell in love” with them.

But on Thursday, ChatGPT creator OpenAI released a report warning that ChatGPT's human-like upgrades could lead to emotional dependence.

“Users could build social relationships with the AI, reducing their need for human interaction – potentially benefiting lonely people, but potentially impacting healthy relationships as well,” the report said.

Related: Only 3 of OpenAI's original 11 co-founders remain at the company after another leader leaves

ChatGPT can now answer questions voice-to-voice and remember key details, using them to personalize the conversation, OpenAI noted. The result? Conversing with ChatGPT now feels almost like talking to a person – one who never judges you, never interrupts you, and never holds you accountable for what you say.

Growing accustomed to that standard of interaction with an AI could change the way people interact with each other and “influence social norms,” the report says.

Say hello to GPT-4o, our new flagship model that can reason across audio, image and text in real time: https://t.co/MYHZB79UqN

Text and image input are rolling out to API and ChatGPT today, voice and video in the coming weeks. pic.twitter.com/uuthKZyzYx

— OpenAI (@OpenAI) May 13, 2024

OpenAI said that early testers spoke to the new ChatGPT in ways suggesting they could form an emotional connection with it. Testers said things like “This is our last day together,” which OpenAI said expressed “shared bonds.”

Experts are now wondering whether it is time to rethink the realism of these voices.

“Is it time to stop and think about how this technology impacts human interaction and relationships?” Alon Yamin, co-founder and CEO of AI plagiarism checker Copyleaks, told Entrepreneur.

“[AI] should never be a substitute for actual human interaction,” Yamin added.

To better understand this risk, further testing over longer periods of time and independent research could be helpful, according to OpenAI.

Another risk OpenAI pointed out in the report is AI hallucination, or inaccurate output. A human-like voice could inspire more trust among listeners, leading to less fact-checking and more misinformation.

Related: Google's new AI search results are already hallucinating

OpenAI is not the first company to speak out about AI's impact on social interaction. Last week, Meta CEO Mark Zuckerberg said that many Meta users turn to AI for emotional support. The company is also reportedly trying to pay celebrities millions of dollars to clone their voices for its AI products.

OpenAI's release of GPT-4o reignited debate over AI safety, following the high-profile departures of leading researchers, including former chief scientist Ilya Sutskever.

Scarlett Johansson also accused the company of developing an AI voice that sounds “eerily similar” to her own.

