How War Led to AI Fighting Fake News

War is the worst human invention: it ruins lives and leaves wounds that stay unhealed for generations. The wars of past centuries had at least some honesty in them: they were declared, and it was clear who stood against whom. Modern-day warfare is different. Information warfare is a huge part of the ongoing war between Ukraine and Russia, with a flood of fake news that manipulates public opinion. The Russian government has built entire networks of TV channels that target Western audiences, and it is now using them heavily to change the world’s opinion of Ukraine, spread false claims about it, and convince people that Russia is not killing anyone but “saving them.”

Why Does It Matter To Me?

The war has affected me personally. I am a co-founder of First Bridge – an IT R&D company in Ukraine with over 100 Ukrainian employees. The war brought death and mass destruction to our country, but we did not give up, either as a society or as a company. We kept working to support our team and our nation.

As the war unfolded on the battlefronts, we saw how Russian propaganda was influencing the post-Soviet countries, and many Ukrainian and foreign journalists began fighting Russian fakes in those countries. However, we often overlook the propaganda we see less of – Russian propaganda aimed at the Western English-speaking world. It matters greatly because it reaches far more of the people who influence the political decisions of Western countries. That’s when we decided we had to do something to help fight fake news.

How We Decided to Use AI To Fight Against Fakes

One of our in-house development projects, built for our partner BrandVision, was the best candidate, with technology that could be incorporated into a new tool against fake news. The project helps brands receive and interact with feedback from their target audience effectively. It uses neural networks to analyze video, audio, and text content on YouTube, combined with semantic analysis of the audio and text. At the time, it was an early-stage startup halfway to an MVP.

We wanted to quickly create a product that met market demand. At the same time, we understood that an information war was underway in which analytical technologies play an important role. Seeing a gap in efforts to counter Western-oriented, English-language disinformation resources, we chose one element of our technology to address this problem.

A Perfect Pivot

And that’s when we made a pivot and turned BrandVision into Fake Vision – an online tool that fights military propaganda in Western English-language media by checking news objectivity with a trainable neural network. The pivot let us build the actual product in 2 months instead of 6 and gave us the opportunity to help our country. As a bonus, the situation made accelerated user acquisition possible, which helps both us and the country.

The ability to work with audio, text, and video content, as well as to gauge emotional response, comes from the variety of neural networks underlying the BrandVision project. This expertise became the basis of Fake Vision, a non-profit initiative formed in response to the invasion of Ukraine. The project’s main objective is to provide a simple tool that lets English-speaking users evaluate news about events in Ukraine, form an objective picture of those events, and counter propaganda in general.

Fake Vision

Fake Vision is a product that analyzes textual information for truth and falsehood, assigning a final score from 0 to 100%, where 0% means false and 100% means true. News sites serve as the sources of information; for each site, an individual data-collection approach is built before the neural analysis runs.

Fake Vision - check the trustworthiness of news about the war between Ukraine and russia

The analysis of a news article covers two parts: the title and the body. A separate neural network analyzes the headline, while the body is segmented into paragraphs for a more precise assessment, so the user can see exactly which parts of the content are manipulative. Today, Fake Vision works article by article: one article is analyzed per user request. The project also includes a mechanism for continuous training of the neural network: users can rate the objectivity of the news themselves, and we take those ratings into account in each round of AI training.

Fake Vision - view already checked articles and their trustworthiness score
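The segmentation-and-scoring flow described above can be sketched as follows. This is a minimal illustration, not Fake Vision’s actual code: `score_paragraph` stands in for the trained neural scorer, and the aggregation into a 0–100% score is an assumed simple average.

```python
def split_article(text: str) -> list[str]:
    """Segment the article body into paragraphs on blank lines."""
    return [p.strip() for p in text.split("\n\n") if p.strip()]


def score_paragraph(paragraph: str) -> float:
    """Placeholder for the neural scorer: returns P(true) in [0, 1].
    A constant is used here only so the flow is runnable."""
    return 0.8


def analyze(title: str, body: str) -> dict:
    """Score the headline and each body paragraph separately, then
    aggregate into the 0-100% trustworthiness score shown to the user."""
    title_score = score_paragraph(title)
    para_scores = [score_paragraph(p) for p in split_article(body)]
    overall = sum([title_score, *para_scores]) / (1 + len(para_scores))
    return {
        "title": round(title_score * 100),
        "paragraphs": [round(s * 100) for s in para_scores],
        "overall": round(overall * 100),
    }
```

Scoring the headline and each paragraph independently is what lets the tool highlight which specific part of an article looks manipulative, rather than giving one opaque verdict.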

As the main classifier for Fake Vision, a recurrent neural network of the bidirectional long short-term memory (bi-LSTM) type was chosen, an architecture well suited to building text classifiers. The general outline of the LSTM network is shown in Figure 1.

Figure 1 - the general outline of the LSTM network
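A bi-LSTM text classifier of the kind described above can be sketched in PyTorch. This is an illustrative architecture, not Fake Vision’s actual model: the vocabulary size, embedding dimension, and hidden size are assumed values.

```python
import torch
import torch.nn as nn


class BiLSTMClassifier(nn.Module):
    """Minimal bi-LSTM text classifier (illustrative layer sizes)."""

    def __init__(self, vocab_size=10_000, embed_dim=128, hidden=64):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.lstm = nn.LSTM(embed_dim, hidden, batch_first=True,
                            bidirectional=True)
        self.out = nn.Linear(2 * hidden, 1)  # both directions concatenated

    def forward(self, token_ids):
        x = self.embed(token_ids)           # (batch, seq, embed_dim)
        _, (h, _) = self.lstm(x)            # h: (2, batch, hidden)
        h = torch.cat([h[0], h[1]], dim=1)  # final fwd + bwd hidden states
        return torch.sigmoid(self.out(h)).squeeze(1)  # P(true) per text


model = BiLSTMClassifier()
probs = model(torch.randint(0, 10_000, (4, 32)))  # 4 texts, 32 tokens each
```

Reading the sequence in both directions lets the classifier use context on either side of a word, which is why bi-LSTMs are a common choice for text classification.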

A key step in the classification process is training the classifier. It is trained on a database of matrices and their labels, where each label records whether the matrix belongs to fake or true data. The labels are loaded from the database, in which every article is already marked as true or false. The classifier is trained with the supervised learning method, which requires a complete set of labeled data at every stage of model construction.

The figure shows the general principle of training neural networks. Articles are fed to the network, its output y is calculated and compared with the initially given “truth” z, the network error is computed from the difference, and the network’s parameters – its weight coefficients – are adjusted according to that error (see Figure 2).

Figure 2 - the principle of calculation of the output of the neural network, given the weighted parameters
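The compare-and-adjust principle can be shown with a tiny supervised training loop. This NumPy sketch uses a simple logistic model on toy data rather than the real bi-LSTM, but the loop is the same idea: compute the output y, compare it with the truth z, and adjust the weight coefficients from the error.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy labeled data: feature matrices X and labels z (1 = true, 0 = fake),
# standing in for the vectorized articles in the training database.
X = rng.normal(size=(200, 5))
w_true = np.array([1.5, -2.0, 0.5, 0.0, 1.0])
z = (X @ w_true > 0).astype(float)

w = np.zeros(5)                        # weight coefficients to adjust
lr = 0.5
for _ in range(500):
    y = 1 / (1 + np.exp(-(X @ w)))     # network output y
    error = y - z                      # compare with the given "truth" z
    w -= lr * X.T @ error / len(z)     # adjust weights from the error

accuracy = float(((y > 0.5) == z).mean())
```

The real network has far more parameters and a more complex forward pass, but each training round still reduces to this output-versus-label error signal.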

At the output of the neural network, the data is classified using the sigmoidal activation function, whose form is shown in Figure 3.

Figure 3 - an illustration of how the information is classified
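The sigmoid squashes any real-valued score into the interval (0, 1), which maps directly onto the 0–100% trustworthiness scale. A minimal sketch:

```python
import numpy as np


def sigmoid(s: np.ndarray) -> np.ndarray:
    """Sigmoidal activation: maps any real score into (0, 1)."""
    return 1 / (1 + np.exp(-s))


scores = np.array([-4.0, 0.0, 4.0])   # raw network outputs
probs = sigmoid(scores)               # ~ [0.018, 0.5, 0.982]
percent = np.round(probs * 100)       # trustworthiness as a percentage
```

A score of 0 maps to exactly 50%, i.e. maximal uncertainty, while strongly negative or positive scores approach 0% (fake) and 100% (true).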

Current Stage

At the moment, Fake Vision’s neural network is being calibrated (trained to be as wise as King Solomon), and the team needs the community’s support to train it faster. You can become a fake-fighting hero too: go to the Fake Vision website and add a link to a news article that interests you, and the neural network will check it and return a trustworthiness score. You can then agree or disagree with Fake Vision’s trustworthiness grade for each paragraph and help improve the algorithm.

Fake Vision - take part in improving the algorithm to improve its accuracy


After the neural network is trained to near-perfect accuracy, the team plans to expand Fake Vision to check the credibility of any news. AI won’t fix all the world’s problems, but good journalism, paired with AI that can identify fake information, will surely help Ukraine win the war and make the world a better place.

From the company’s point of view, war brings challenges, and we believe some of them can be addressed. This is a chance for us to be the best versions of ourselves and to help ourselves, our country, and our companies.
