Both humans and AI hallucinate, but not in the same way
The rollout of increasingly capable large language models (LLMs) such as GPT-3.5 has generated a lot of interest over the past six months. However, trust in these models has declined as users have discovered they can make mistakes, and that, just like us, they aren't perfect. An LLM that outputs misinformation is said to be "hallucinating" … Read more