
Source
Deutsche Welle (DW)
Summary
A landmark study by 22 international public broadcasters, including DW, the BBC, and NPR, found that leading AI chatbots (ChatGPT, Copilot, Gemini, and Perplexity) misrepresented or distorted news content in 45 per cent of their responses. The investigation, which reviewed some 3,000 AI-generated answers, identified widespread problems with sourcing, factual accuracy, and the ability to distinguish fact from opinion. Gemini performed worst, with significant sourcing errors in 72 per cent of its responses. Researchers warn that the systematic nature of these inaccuracies threatens public trust and democratic discourse. The European Broadcasting Union (EBU), which coordinated the study, has urged governments to strengthen media integrity laws and called on AI companies to be accountable for how their systems handle journalistic content.
Key Points
- AI chatbots distorted or misrepresented news 45 per cent of the time.
- 31 per cent of responses had sourcing issues; 20 per cent contained factual errors.
- Gemini and Copilot were the least accurate, though all four models underperformed.
- Errors included outdated information, misattributed quotes, and false facts.
- The EBU and partner broadcasters launched the “Facts In: Facts Out” campaign for AI accountability.
- Researchers call for independent monitoring and regulatory enforcement of AI-generated news.
Keywords
URL
Summary generated by ChatGPT 5