
12 June 2023

How human interaction will save news. The Overtone Media Futures project

“What can you, as a person or even as a publisher, know about an article online? When you have a URL in front of you, there is little to tell you what is inside”, says Christopher Brennan, a former journalist and now one of the founders of Overtone AI, a company that uses artificial intelligence to fight fake news. “Misinformation can lurk in the shadows”, adds Brennan.

The project focuses on evaluating articles based on their content, on the qualities of the text itself, rather than relying only on metrics like likes and shares. Overtone teaches artificial intelligence programs to read texts the way humans do. The company already uses this kind of qualitative analysis to create data that is useful for creative, analytics and business teams at media companies and beyond. Now it wants to use it to address the problem of misinformation.

They have created small task forces of experts in journalism and communication who analyzed large amounts of text and built a large language model that can accurately and automatically label whether a paragraph is factual, opinion, journalism, or potentially toxic. These labels can appear when a reader clicks on an article, so that the reader knows how much false or imprecise information may be hiding in a text before reading it.
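To make the idea concrete, here is a minimal, illustrative sketch of what paragraph-level labeling can look like. It is not Overtone's proprietary, expert-trained model: it assumes an off-the-shelf zero-shot classifier (facebook/bart-large-mnli, via the Hugging Face transformers pipeline) and simply reuses the four labels mentioned above as candidate categories.

```python
# Illustrative sketch only: Overtone's actual model is proprietary and was
# trained on expert-labeled data. This uses a generic zero-shot classifier
# to show the shape of paragraph-level labeling.
from transformers import pipeline

LABELS = ["factual", "opinion", "journalism", "potentially toxic"]

classifier = pipeline("zero-shot-classification",
                      model="facebook/bart-large-mnli")

def label_paragraphs(article_text: str) -> list[dict]:
    """Split an article into paragraphs and score each one against LABELS."""
    paragraphs = [p.strip() for p in article_text.split("\n\n") if p.strip()]
    results = []
    for paragraph in paragraphs:
        scored = classifier(paragraph, candidate_labels=LABELS)
        # scored["labels"] comes back sorted by descending confidence
        results.append({
            "text": paragraph,
            "label": scored["labels"][0],
            "scores": dict(zip(scored["labels"], scored["scores"])),
        })
    return results

article = ("The council voted 7-2 to approve the budget.\n\n"
           "Frankly, this decision is a disaster for the city.")
for item in label_paragraphs(article):
    print(item["label"], "->", item["text"][:45])
```

Note that, as in the approach described in the article, this operates on the text alone: nothing about the publisher enters the classification.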

They believe in the role of humans in understanding a text. “If we had used ordinary large language models, rather than an algorithm trained by media professionals, we would have gotten all sorts of wrong answers. For us it's important to underline that our algorithm has been trained by professionals, not just by feeding it large amounts of text to read.”

“For Media Futures we brought in analysts and ordinary people to label the content. The human role is very important to us. The model lets us determine, as soon as a text exists, whether it is potentially toxic or harmful, even before knowing which news outlet published it. In fact, every newspaper is vulnerable to fake news and misinformation, even the important ones that we normally consider ‘reliable’. Our algorithm is not influenced by the name of the publisher; it is based only on the content it finds in the text. This allows us to find misleading content even in articles from papers and websites that are normally considered trustworthy.”

“Our plan is to sell our data to publishers and help them grow their audience, understand which content works best for a specific type of audience, and also fight disinformation.”

“I come from news, so I’m familiar with this kind of work, and I understand how incredibly useful it is for publishers. We help them search through their articles and get data that comes directly from the text itself. Our data can live in the CMS of various news outlets. We’re immediate, and we can do the same thing for every article.”

“The rise of large language models makes the problem we’ve already been working on more urgent and serious. We asked ourselves a simple question: with ChatGPT, everyone will be able to write a large number of texts at a really fast pace, but who will be able to read and classify them? Who is going to read and apply filters to that content? Our model will do the job, using the same technology, so that people can actually choose what to do with that content.”

By Beatrice Offidani