Wednesday, October 22, 2025

AI assistants caught misrepresenting news in nearly half their responses, 22 public broadcasters expose a massive trust crisis, Google’s Gemini shows significant issues in 76% of its replies, and the media stays largely silent.

Must read

Public broadcasters just dropped a bombshell: nearly half of all automated news responses are wrong, and the companies behind them are still pushing these systems into classrooms, search engines, and government portals without safeguards, disclaimers, or oversight. This isn’t a tech hiccup. It’s a systemic failure, consistent across platforms and languages, where fabricated quotes, misdated events, and missing sources are being served as fact to millions of users daily. And the media? Mostly silent. The institutions that should be sounding the alarm are instead burying the findings under tech verticals and press releases.

“Leading assistants misrepresent news content in nearly half of their responses… 45% of analyzed responses contained significant issues, with 81% having some form of error” …“Sourcing errors were found in a third of responses… particularly affecting answers about current events” — Devdiscourse, October 21, 2025 https://www.devdiscourse.com/article/technology/3670017-ai-assistants-mislead-in-almost-half-of-news-responses-ebu-bbc-study

“Poor sourcing was the biggest problem… Google’s Gemini model performed the worst, showing significant issues in 76% of its replies” …“These failures endanger public trust, as news organizations are often incorrectly associated with false claims” — WinBuzzer, October 22, 2025 https://winbuzzer.com/2025/10/22/ai-assistants-get-news-wrong-in-45-of-cases-landmark-bbc-ebu-study-finds-xcxwbn/

“One out of every five answers contained major accuracy issues… including hallucinated details and outdated information” …“Gemini performed worst… more than double the error rate of other systems” — MSN, October 22, 2025 https://www.msn.com/en-in/news/other/ai-not-a-reliable-source-of-news-eu-media-study-says/ar-AA1OWuu0

The study spanned 22 broadcasters in 18 countries, testing four platforms in 14 languages. Gemini failed in roughly three out of four responses, often fabricating citations or mistaking satire for fact. These systems don’t just mislead; they mimic authority so convincingly that users mistake confidence for truth. The companies behind them have issued no warnings, no fixes, no public response. If nearly half of responses are wrong, fiction is being wired into public infrastructure. That’s not innovation. It’s institutional betrayal.
