AI Hallucinations – Don’t Trust GenAI, Yet

AI hallucinations are a growing concern.

Generative AI models such as ChatGPT and Bard are known to produce inaccurate or misleading outputs because they can “hallucinate”: generate text that is not supported by their input or training data.

What is an AI hallucination?

In the world of AI, a hallucination is a nonsensical or unfaithful output generated by an AI model. This can happen when the model learns spurious correlations from its training data, suffers from knowledge or exposure biases, or attends to the wrong parts of its input.
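
To make “unfaithful to the input” concrete, here is a toy Python check that flags output sentences whose words barely overlap with a trusted source passage. It is only an illustration: real hallucination detection relies on stronger tools such as entailment models, and the function names and the 0.5 threshold here are invented for this sketch.

```python
# Toy faithfulness check: flag output sentences with little word overlap
# against the source passage. Illustrative only; production systems use
# entailment models or retrieval-based verification instead.

def word_overlap(sentence: str, source: str) -> float:
    """Fraction of content words in the sentence that also appear in the source."""
    stop = {"the", "a", "an", "is", "are", "of", "in", "to", "and", "it"}
    sent_words = {w.lower().strip(".,") for w in sentence.split()} - stop
    src_words = {w.lower().strip(".,") for w in source.split()} - stop
    if not sent_words:
        return 1.0
    return len(sent_words & src_words) / len(sent_words)

def flag_unsupported(output: str, source: str, threshold: float = 0.5) -> list[str]:
    """Return output sentences that look unsupported by the source."""
    sentences = [s.strip() for s in output.split(".") if s.strip()]
    return [s for s in sentences if word_overlap(s, source) < threshold]

source = "The report was published in 2021 and covers renewable energy in Spain."
output = "The report covers renewable energy in Spain. It won a Pulitzer Prize in 2019."
print(flag_unsupported(output, source))
# ['It won a Pulitzer Prize in 2019']
```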

Examples of AI hallucinations

In one case, ChatGPT invented court decisions and quotations that lawyers then cited in a lawsuit against an airline. In another, it falsely claimed that an Australian mayor had been involved in a bribery scandal.

How to prevent AI hallucinations

Hallucinations cannot be eliminated entirely, but several techniques reduce them: training on better-curated data, reducing knowledge and exposure biases, and improving the model’s attention mechanism.
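
Those fixes sit with the model’s developers. For users, a common complementary mitigation is to ground the prompt in trusted text and instruct the model to answer only from it. The sketch below assumes a hypothetical `generate()` function standing in for whatever chat-completion call you use; the prompt wording is only an example.

```python
# Grounded prompting: supply trusted context and instruct the model to
# answer only from it. `generate` is a hypothetical stand-in for your
# LLM provider's chat-completion call.

def generate(prompt: str) -> str:
    raise NotImplementedError("wire this to your LLM provider")

def grounded_answer(question: str, context: str) -> str:
    prompt = (
        "Answer the question using ONLY the context below. "
        "If the context does not contain the answer, say \"I don't know.\"\n\n"
        f"Context:\n{context}\n\nQuestion: {question}"
    )
    return generate(prompt)
```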

What to do if you encounter an AI hallucination

If you encounter an AI hallucination, treat the output as unreliable and verify the information against independent sources before acting on it.
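
One lightweight way to automate part of that verification is to ask the model the same question several times and check whether the answers agree; frequent disagreement is a warning sign. This is only a heuristic (agreement does not prove correctness), and `generate` is the same hypothetical wrapper assumed above.

```python
# Consistency check: sample several answers to the same question.
# Disagreement between samples hints at hallucination; agreement is
# not proof of correctness, so verification is still needed.
from collections import Counter

def consistency_check(question: str, generate, n: int = 5) -> tuple[str, float]:
    """Return the most common answer and the fraction of samples that agree."""
    answers = [generate(question).strip().lower() for _ in range(n)]
    best, count = Counter(answers).most_common(1)[0]
    return best, count / n

# Example policy: anything below, say, 80% agreement gets manual fact-checking.
```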
