
The Risks of Generative AI: Fake News

Wednesday, 24 July 2024

Generative AI products are trained on large datasets, which may not always be accurate and in some cases may contain biases. Some AI tools are also known to "hallucinate", meaning they generate inaccurate or fabricated information.

In 2023, a New York law firm was fined for using ChatGPT to create court pleadings in support of its client's claim for damages. When the proceedings came to court, it became apparent that none of the cases cited in the pleadings were real. The AI program could not find any cases that backed the claim, so it simply invented them.

So if you, or, unknown to you, your employees, are using generative AI to produce information about your company's products or finances, or to generate proposals for customers, there is a risk that this information could be inaccurate. You should therefore consider building in some form of human review.

For more information, contact Victor Timon or other members of our Technology Team.