A study proves that AI can fool humans: the future is already here and you might not like it

In February, a man made waves online by developing a program, based on ChatGPT technology, to chat on Tinder. Using this AI, he exchanged messages with 5,200 women, and it was through this method that he eventually found a wife. But is AI now effective enough to deceive humans? According to a study, it is already almost impossible to tell the difference between a conversation with a chatbot like ChatGPT and one with a real person.

A revealing test

Researchers from the Department of Cognitive Sciences at the University of California San Diego ran a series of Turing tests (which assess whether an AI can pass as human in conversation) on ELIZA, a chatbot from the 1960s, as well as on the GPT-3.5 and GPT-4 models. Study participants held five-minute conversations, either with a human or with one of these chatbots, and were then asked whether they had been chatting with a human or an artificial intelligence. The results were striking: while only 22% of people who chatted with ELIZA thought they were talking to a human, that figure rose to 54% among people who chatted with GPT-4. By comparison, the rate was 67% among people who chatted with real humans.

Note that to make GPT-4 convincing, the researchers tailored the model’s responses with a custom prompt that, for example, defined the tone to adopt. In any case, beyond showing how advanced GPT-4 is, the study also warns of the dangers of this technological progress. “The findings have implications for debates around artificial intelligence and, more urgently, suggest that the deception of current AI systems may be going unnoticed,” the researchers write in their publication, which is hosted on arXiv.

Even smarter AI is already in the pipeline

The study highlights GPT-4’s ability to deceive humans even as OpenAI is already working on a more advanced version of the model. In an interview, Mira Murati, CTO of OpenAI, discussed the next evolution of ChatGPT’s technology. She compared the intelligence level of GPT-3 to that of a child, while GPT-4, in her view, is at the level of a high school student. The next model, she suggests, would have the intelligence of someone with a doctorate. “Things are changing and improving quite quickly,” she explains. While OpenAI has not given a precise date for the moment, Mira Murati estimates that this doctorate-level model could arrive within a year and a half.

In a press release announcing the creation of an internal safety committee, OpenAI also discussed the development of its next AI model: “OpenAI recently began training its next frontier model and we anticipate that the resulting systems will take us to the next level of capabilities on our path to AGI. While we are proud to build and bring to market models that are industry-leading in capabilities and safety, we look forward to a robust debate at this important time.”

  • It’s increasingly difficult to know whether you’re chatting with a human or an artificial intelligence
  • In a Turing test, 54% of participants who chatted with GPT-4 thought they were talking to a real person
  • However, OpenAI is already developing an even more advanced version of its artificial intelligence model
