Both humans and AI hallucinate, but not in the same way

The rollout of increasingly capable large language models (LLMs) like GPT-3.5 has generated a lot of interest over the past six months. However, trust in these models has declined as users have found they can make mistakes and that, just like us, they aren’t perfect.

An LLM that generates misinformation is said to be “hallucinating”, and there is now a growing research effort to minimize this effect. But as we tackle this task, it’s worth reflecting on our own capacity for bias and hallucination, and how this affects the accuracy of the LLMs we create.

By understanding the link between the hallucinatory potential of AI and our own, we can begin to create smarter AI systems that will ultimately help reduce human error.

How people hallucinate

It’s no secret that people invent information. Sometimes we do it intentionally and sometimes unintentionally. The latter is the result of cognitive biases, or heuristics: mental shortcuts that we develop through past experiences.

These shortcuts often arise out of necessity. At any given time, we can only process a limited amount of the information flooding our senses and remember only a fraction of all the information we’ve ever been exposed to.

Therefore, our brains must use learned associations to fill in the gaps and respond quickly to any question or dilemma that arises before us. In other words, our brain guesses what the correct answer might be based on limited knowledge. This is called confabulation and is an example of human bias.

Our biases can lead to poor judgment. Take the automation bias, which is our tendency to favor information generated by automated systems (like ChatGPT) over information from non-automated sources. This bias can lead us to miss mistakes and even act on false information.

Another relevant heuristic is the halo effect, where our initial impression of something influences our subsequent interactions with it. Then there’s the fluency bias, which describes how we prefer information presented in an easy-to-read manner.

The bottom line is that human thinking is often colored by its own cognitive biases and distortions, and these hallucinatory tendencies occur largely outside of our awareness.

How AI hallucinates

In an LLM context, hallucinations are different. An LLM is not trying to conserve limited mental resources to make efficient sense of the world. Hallucination in this context only describes a failed attempt to predict an adequate response to an input.

However, there is still some similarity between how humans and LLMs hallucinate, as LLMs also do it to fill in the gaps.

LLMs generate a response by predicting which word is most likely to appear next in a sequence, based on what happened before and the associations the system has learned through training.

Like humans, LLMs try to predict the most likely response. Unlike humans, they do so without comprehending what they are saying. This is how they can end up producing nonsense.
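To make that mechanism concrete, here is a minimal sketch of next-token prediction using the small, openly available gpt2 checkpoint via the Hugging Face transformers library. The model, the prompt and the top-5 display are illustrative assumptions, not the specific systems discussed in this article.

```python
# Minimal sketch: an LLM scores every token in its vocabulary as a candidate
# "next word", then picks or samples from that ranking. The gpt2 model and
# the prompt below are illustrative choices, not GPT-3.5.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

prompt = "The capital of Australia is"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # shape: (batch, sequence_length, vocab_size)

next_token_logits = logits[0, -1]                 # scores for whatever comes next
probs = torch.softmax(next_token_logits, dim=-1)  # turn scores into probabilities
top = torch.topk(probs, k=5)

for prob, token_id in zip(top.values, top.indices):
    print(f"{tokenizer.decode(int(token_id))!r}: {prob.item():.3f}")
```

The point of the sketch is that the model always returns a ranking over possible next words, whether or not the underlying facts support any of them, which is why a fluent but false continuation is always possible.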

As for why LLMs hallucinate, there are a number of factors. One of the main ones is being trained on flawed or insufficient data. Other factors include how the system is programmed to learn from this data and how this programming is reinforced through further training with humans.

Doing better together

So if both humans and LLMs are susceptible to hallucinations (albeit for different reasons), which is easier to fix?

Fixing the data and training processes underpinning LLMs may seem easier than fixing ourselves. But this fails to account for the human factors that influence AI systems (and is an example of another human bias known as the fundamental attribution error).

The reality is that our failures and the failures of our technologies are inextricably intertwined, so fixing one will help fix the other. Here are some ways we can do this.

  • Responsible data management. Biases in AI often arise from biased or limited training data. Ways to address this include ensuring that training data is diverse and representative, creating bias-sensitive algorithms, and applying techniques such as data balancing to remove biased or discriminatory patterns from the data (a simple sketch of this idea follows the list).
  • Transparency and explainable AI. Despite the above actions, however, biases in AI can remain and can be difficult to detect. By studying how biases can enter and propagate within a system, we can better explain the presence of bias in outputs. This is the basis of explainable AI, which aims to make the decision-making processes of AI systems more transparent.
  • Put public interests front and center. Recognizing, managing and learning from biases in an AI requires human responsibility and the integration of human values into AI systems. Achieving this means ensuring that stakeholders are representative of people from different backgrounds, cultures and perspectives.
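As a concrete illustration of the data-balancing idea in the first point, the sketch below oversamples an under-represented group so that a model trained on the data sees each group equally often. The records and group labels are invented for the example, and real pipelines would typically use a dedicated library and audit the result rather than treat balancing as a complete fix.

```python
# Toy sketch of data balancing by random oversampling.
# The records and group labels are made up for illustration only.
import random
from collections import Counter

random.seed(0)

# Imagine a training set where one group is heavily under-represented.
records = [{"group": "A", "text": f"example A{i}"} for i in range(90)] + \
          [{"group": "B", "text": f"example B{i}"} for i in range(10)]

def oversample_to_balance(data, key):
    """Duplicate minority-group records until every group is the same size."""
    by_group = {}
    for row in data:
        by_group.setdefault(row[key], []).append(row)
    target = max(len(rows) for rows in by_group.values())
    balanced = []
    for rows in by_group.values():
        balanced.extend(rows)
        balanced.extend(random.choices(rows, k=target - len(rows)))
    random.shuffle(balanced)
    return balanced

print("before:", Counter(r["group"] for r in records))
balanced = oversample_to_balance(records, "group")
print("after: ", Counter(r["group"] for r in balanced))
# before: Counter({'A': 90, 'B': 10})
# after:  both groups appear 90 times
```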

By working together in this way, it is possible for us to build smarter AI systems that can help keep all of our hallucinations under control.

For example, AI is being used in the healthcare industry to analyze human decisions. These machine learning systems detect inconsistencies in human data and bring them to the attention of doctors, so diagnostic decisions can be improved while maintaining human responsibility.

In a social media context, AI is being used to help train human moderators to identify abuse, for example through the Troll Patrol project aimed at tackling online violence against women.

In another example, the combination of AI and satellite imagery can help researchers analyze differences in nighttime illumination across regions and use that as a proxy for an area’s relative poverty (where more illumination correlates with less poverty).

Importantly, as we do the essential work to improve the accuracy of LLMs, we should not ignore how their current fallibility mirrors our own.

This article is republished from The Conversation under a Creative Commons license. Read the original article.
