Fact-checkers need an AI tool for ‘claim-matching’

The spread of misinformation since the rise of social networks poses an increasingly complex and dangerous challenge for today's societies. The ease and speed with which false content is generated clashes head-on with the difficulties fact-checkers face in countering information manipulation in the digital environment, since the processes of detection, evaluation, verification and publication are far more demanding. As Brandolini's Law, or the Law of Asymmetry, explains, dismantling a hoax costs far more time and effort than spreading and viralising it.
With the advent of artificial intelligence, the universe of fake digital content is expected to grow much larger. However, AI itself can also be a tool to speed up the verification process.
This is the aim of the MuseAI tool, which seeks to speed up the detection of false claims through 'claim matching', so that fact-checkers can immediately compare the hoaxes they find on the web against claims that have already been verified and debunked. In addition, content in other formats, such as audio or video, poses an extra challenge for fact-checkers when extracting information. MuseAI will therefore also work as a multilingual, multi-format application, processing transcripts of video and audio in multiple languages and comparing them against disinformation databases to reduce the analysis time fact-checkers must spend in their daily work.
For fact-checkers, it is essential that applications designed to speed up their work, such as MuseAI, allow for easy input of suspicious keywords or phrases and provide quick results with clear visualizations of the matches found.
The results section should highlight the source, a quick and visible explanation of why the message match was detected, and context related to the data, geolocation, and language.
Disinformation is often recycled, and similar falsehoods reappear over time; MuseAI therefore focuses on identifying messages that have already been debunked and scoring how closely the messages under analysis match them.
Sometimes a message is not a perfect match but shows many signs of being related to claims that have already been debunked, so the tool must estimate how likely the message is to be false by comparing it with previous verifications.
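The matching step described above can be sketched very simply: score an incoming message against a list of previously debunked claims and report the best match when it clears a threshold. This is a minimal illustration using word-count cosine similarity in plain Python; the sample claims, the threshold and the function names are illustrative assumptions, not MuseAI's actual method.

```python
# Minimal sketch of 'claim matching': compare a new message against
# previously debunked claims using cosine similarity over word counts.
import math
import re
from collections import Counter

def tokenize(text: str) -> Counter:
    """Lowercase the text and count its words."""
    return Counter(re.findall(r"\w+", text.lower()))

def cosine_similarity(a: Counter, b: Counter) -> float:
    """Cosine similarity between two word-count vectors (0.0 to 1.0)."""
    dot = sum(a[w] * b[w] for w in set(a) & set(b))
    norm_a = math.sqrt(sum(v * v for v in a.values()))
    norm_b = math.sqrt(sum(v * v for v in b.values()))
    if norm_a == 0 or norm_b == 0:
        return 0.0
    return dot / (norm_a * norm_b)

def match_claim(message: str, debunked: list[str], threshold: float = 0.5):
    """Return (best_claim, score) if any debunked claim clears the threshold."""
    msg_vec = tokenize(message)
    scored = [(claim, cosine_similarity(msg_vec, tokenize(claim)))
              for claim in debunked]
    best = max(scored, key=lambda pair: pair[1], default=(None, 0.0))
    return best if best[1] >= threshold else (None, best[1])
```

A production system would use semantic embeddings rather than raw word counts, precisely so that reworded versions of the same hoax still match; the threshold logic, however, captures the "likely but not perfect match" idea described above.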
Input
The text box, into which fact-checkers can copy and paste the message they wish to analyze, should be prominent and support long text entries, with clear instructions at the top. The pasted text can then be parsed for keywords that may identify a matching claim, as well as claims that are very similar to the original, with scores indicating how likely each one is to be worth verifying.
Fact-checkers also need a URL input field where they can paste links to the websites or social media platforms where the posts were shared. The results should then be displayed below the input fields in a clean, organized manner, highlighting matching claims and linking to the detailed rebuttals found in the databases.
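One way to think about the results list is as a small record per match, carrying the fields fact-checkers ask for: the source, a short explanation of why the match was detected, and context such as date, geolocation and language. The field names and sample rendering below are illustrative assumptions about how such a record might look.

```python
# Sketch of one match result and how it might be rendered in the results
# list below the input fields. Field names are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class MatchResult:
    claim: str          # the previously debunked claim that matched
    score: float        # similarity score between 0.0 and 1.0
    source_url: str     # link to the detailed rebuttal in the database
    explanation: str    # quick, visible reason the match was detected
    date: str           # when the claim was originally debunked
    geolocation: str    # region where the hoax circulated
    language: str       # language of the debunked claim

def render(result: MatchResult) -> str:
    """Format one result for display, most important information first."""
    return (f"[{result.score:.0%} match] {result.claim}\n"
            f"  Why: {result.explanation}\n"
            f"  Context: {result.date} | {result.geolocation} | {result.language}\n"
            f"  Rebuttal: {result.source_url}")
```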
From video to text
MuseAI aims to analyze multilingual videos on social networks to facilitate their verification. To this end, its interface will be designed to accept video files and video URLs, allowing the files to be uploaded and the transcripts and corresponding translations to be downloaded.
After processing, the transcribed text needs to be presented in a clean, readable format, displaying the original text alongside its translation for reference and including features such as highlighting, timestamps and keywords to help users navigate the content.
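The presentation step above could be sketched as follows, assuming the transcription stage produces timestamped segments with an original and a translated text. The segment structure is an assumption for illustration; the actual pipeline output may differ.

```python
# Sketch of rendering a processed transcript: each segment carries a
# timestamp, the original text, and its translation, so fact-checkers can
# navigate the video and compare both versions. The segment dict layout
# is an assumed shape, not MuseAI's actual data model.

def format_timestamp(seconds: float) -> str:
    """Convert a position in seconds into an HH:MM:SS label."""
    s = int(seconds)
    return f"{s // 3600:02d}:{(s % 3600) // 60:02d}:{s % 60:02d}"

def format_transcript(segments: list[dict]) -> str:
    """Render segments as timestamped original/translation line pairs."""
    lines = []
    for seg in segments:
        stamp = format_timestamp(seg["start"])
        lines.append(f"[{stamp}] {seg['original']}")
        lines.append(f"           {seg['translation']}")
    return "\n".join(lines)
```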
Narrative clustering
In addition, fact-checkers need to be able to analyze the narratives being developed across targeted networks, so analysis of the messages' language is also critical for grouping the false messages under verification by narrative.
As such, MuseAI will identify and group related hoaxes by analyzing thematic similarities and recurring patterns, providing detailed examples and context, and allowing fact-checkers to understand the scope and variation of misinformation.
By providing both specific matches and related narratives, the application will enhance the fact-checking process, offering deeper insight into the disinformation landscape and supporting more comprehensive rebuttal efforts.
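The narrative grouping described above can be illustrated with a very simple greedy clustering: claims whose word overlap (Jaccard similarity) exceeds a threshold join the same group. A production system would cluster on semantic embeddings; this sketch, with its made-up threshold, only shows the grouping idea.

```python
# Sketch of grouping related hoaxes into narratives via word overlap.
# The 0.3 threshold is an illustrative assumption.
import re

def words(text: str) -> set:
    """Lowercased word set of a message."""
    return set(re.findall(r"\w+", text.lower()))

def jaccard(a: set, b: set) -> float:
    """Word-overlap similarity: |intersection| / |union|."""
    return len(a & b) / len(a | b) if a | b else 0.0

def cluster_narratives(claims: list[str], threshold: float = 0.3) -> list[list[str]]:
    """Greedily assign each claim to the first cluster it resembles."""
    clusters: list[list[str]] = []
    for claim in claims:
        for cluster in clusters:
            # Compare against the cluster's first claim as its representative.
            if jaccard(words(claim), words(cluster[0])) >= threshold:
                cluster.append(claim)
                break
        else:
            clusters.append([claim])  # no cluster matched: start a new narrative
    return clusters
```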
Enriching the database
Finally, fact-checkers must be able to enrich the database, and with it the application's capacity to support future searches. An intuitive, easily accessible feature is therefore needed that lets fact-checkers gradually improve the content used by the AI tool. This would ensure that data from existing verifications is not lost, broadening the scope of 'claim matching' and making MuseAI a living application.
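At its simplest, this enrichment step means appending each newly published verification to the claim store so future searches can match against it. The JSON file layout and field names below are illustrative assumptions about what such a store might look like.

```python
# Sketch of enriching the claim database: append a new verification record
# to a JSON store. The file layout is an assumed, illustrative format.
import json
from pathlib import Path

def add_verification(db_path: str, claim: str,
                     rebuttal_url: str, language: str) -> int:
    """Append a verified claim to the database file; return the new total."""
    path = Path(db_path)
    records = json.loads(path.read_text(encoding="utf-8")) if path.exists() else []
    records.append({"claim": claim,
                    "rebuttal_url": rebuttal_url,
                    "language": language})
    path.write_text(json.dumps(records, ensure_ascii=False, indent=2),
                    encoding="utf-8")
    return len(records)
```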
Post by EFE Verifica