A BBC study published yesterday (PDF) found that AI news summarization tools frequently generate inaccurate or misleading summaries, with 51% of responses containing significant issues. The Register reports: The research focused on OpenAI’s ChatGPT, Microsoft’s Copilot, Google’s Gemini, and Perplexity assistants, assessing their ability to provide “accurate responses to questions about the news; and if their answers faithfully represented BBC news stories used as sources.” The assistants were granted access to the BBC website for the duration of the research and asked 100 questions about the news, being prompted to draw from BBC News articles as sources where possible. Normally, these models are “blocked” from accessing the broadcaster’s websites, the BBC said. Responses were reviewed by BBC journalists, “all experts in the question topics,” on their accuracy, impartiality, and how well they represented BBC content. Overall:
– 51 percent of all AI answers to questions about the news were judged to contain significant issues of some form

Link to original post https://news.slashdot.org/story/25/02/12/2139233/ai-summaries-turn-real-news-into-nonsense-bbc-finds?utm_source=rss1.0mainlinkanon&utm_medium=feed from Teknoids News