
The Rise of Disinformation in OSINT

Due diligence investigators often rely on information available online to make an assessment of their target’s reputation. This means that the ability to distinguish between authentic and inauthentic sources is of critical importance. With the advancement of social media and AI-produced online content, and a fast-growing industry emerging around the creation and dissemination of disinformation, this task is proving increasingly challenging. However, with a heightened awareness, it is possible to distinguish authentic or verified information from stories planted on the internet by malicious actors.

Online disinformation has widely been recognised to endanger democratic processes by spreading false narratives perpetuated by authoritarian regimes. However, it can also hinder the ability of financial crime compliance teams to conduct enhanced due diligence and make accurate judgements about the risks posed by the subjects of investigations.

The picture is complicated by the emergence of an industry around online disinformation, with vendors selling fake news articles, social media posts and marketing content aimed at enhancing or besmirching reputations. In an environment where online reputations can be made or destroyed for a couple of hundred dollars, it is fundamental that due diligence professionals stay up to date with the latest developments in the field of disinformation proliferation.

Disinformation – Misinformation – Malinformation

The term fake news – famously popularised by former US president Donald Trump – has become ubiquitous. But the phrase is often carelessly applied to encompass a variety of slightly different online phenomena, which can be loosely defined under the categories of misinformation, disinformation and malinformation. Germany’s Heinrich Böll Foundation, in an August 2020 report, noted that “misinformation, disinformation and malinformation are of course not a new phenomenon, but the proliferation of social media has made this issue more urgent”.[1]

But how do we define these phenomena, and which poses the most serious threat?

The Heinrich Böll Foundation highlighted definitions provided by researchers Wardle and Derakhshan in a 2017 report for the Council of Europe. Wardle and Derakhshan clearly distinguish the term “disinformation” from the related concepts of misinformation and malinformation.

  • They describe disinformation as information “that is false and deliberately created to harm a person, social group, organisation or country”. Disinformation can come in the form of fabricated, manipulated or imposter content, or simply information presented in a false context.
  • By comparison, misinformation is not created with the intent to cause harm. It can be information that makes a false connection or unintentionally misleading content.
  • The third related concept, malinformation, is typically “based on reality” and includes categories such as leaks, online harassment and hate speech.[2]

Disinformation, in particular, has proliferated in the public domain and poses the most significant challenge to due diligence investigators. Possibly the most widely reported large-scale disinformation campaign of recent years related to the 2016 US presidential election, in which Republican Party candidate Donald Trump beat Democratic Party candidate Hillary Clinton. In 2018, the US Department of Homeland Security concluded that the Russian government had used a blend of traditional and social media, as well as cyber activity, to deliberately “undermine public faith in the US democratic process, denigrate Secretary Clinton, and harm her electability and potential presidency”.[3]

Since then, organisations such as EU Disinfo Lab[4] and the Atlantic Council’s Digital Forensic Research Lab[5] have exposed hundreds of cases of state-sponsored disinformation all around the world.

Disinformation for hire

Inevitably, the very same techniques used by state actors to influence public opinion domestically or abroad have spread to the private sector. Disinformation campaigns for private or commercial purposes, often referred to as “disinformation as a service” or “disinformation for hire”, are typically much smaller in scale and marketed by vendors for as little as a couple of hundred dollars.

As far back as early 2020, Facebook’s head of cybersecurity policy Nathaniel Gleicher was cited by BuzzFeed as saying: “The broader notion of deception and influence operations has been around for some time, but over the past several years, we have seen […] companies grow up that basically build their business model around deception”.[6]

Research by darknet data provider Darkowl has highlighted the existence of a well-established digital economy that trades in social media accounts and even the services of the influencers behind them, some of whom use their profiles to spread messages in return for monetary compensation.

In 2020, Darkowl exposed several disinformation-as-a-service providers on Russian- and Ukrainian-speaking internet forums. One such darknet vendor offered both reputation promotion and reputation destruction services. Reputation promotion included positive news articles, websites and YouTube videos, as well as the deletion of negative comments, while “anti-reputation” services included the creation of negative web content or reviews of companies and individuals.[7]

Another darknet vendor identified by Darkowl as an active member of Russian- and Ukrainian-speaking darknet forums offered, for as little as USD 500, a “blockchain-based botnet” for conducting disinformation campaigns capable of bypassing platforms’ built-in security features that would otherwise delete content identified as fake.

Notably, all the vendors identified by Darkowl appeared to be based in post-Soviet countries in Eastern Europe, a region that has developed into a hub for such activities. This is likely because many private operatives were able to hone their disinformation skills on large-scale projects commissioned by the Russian government. Moreover, the use of negative stories to damage rivals – known as “black PR” – has a long-standing tradition in commercial marketing across the Russian-speaking world.

Not only are vendors offering such services on darknet forums, but the same services can also be found on the regular internet. For example, it takes only a brief search on Yandex (the Russian equivalent of Google) to find websites such as Zakazatcherniypiar.rf (orderblackpr.rf). The website offers “black PR” services as an “effective method to eliminate competitors”.[8]

Quick and easy online campaigns

Besides the apparent rise in demand, the growth of the private disinformation-for-sale market can also be explained by the proliferation of online tools that make disinformation campaigns cheap and easy to get off the ground.

The preferred method of spreading disinformation online consists of a combination of publishing custom-made, genuine-looking “news articles” and then promoting them via social media channels.

The online marketplace Flippa lists, for as little as USD 250, ready-made news websites and blogs that automatically “scrape” content from other websites.[9] Although the primary customers of platforms such as Flippa likely remain individuals looking to legitimately increase their online revenue by selling advertisements on well-visited websites, there is nothing to prevent malicious actors from purchasing ready-made news websites and using them to disseminate disinformation.

Setting up profiles on social media sites such as Facebook or Twitter is even easier and completely free of charge. According to the online database Statista, in the second quarter of 2022 alone, Facebook removed 1.4 billion fake accounts. Statista notes that Facebook considers fake accounts to be those created with malicious intent, or created to represent a business, organisation, or non-human entity.[10] According to an analysis conducted by social media research companies SparkToro and Followerwonk in May 2022, more than 19 percent of all Twitter accounts were assessed to be either fake or spam.[11]

When it comes to producing the written content that underpins disinformation campaigns (such as marketing, “fake” news and reviews), disinformation agents no longer need to rely on humans, who are both comparatively expensive and fallible. With the help of a new form of artificial intelligence, known as generative AI, they can create realistic text with just a few sentences of guidance. The most well-known example of generative AI technology is ChatGPT, created by San Francisco-based research firm OpenAI.[12] Recent media reports discussing the risks attached to ChatGPT have warned that its ability to generate authentic-seeming text and content within seconds could facilitate the rapid dissemination of personalised propaganda at little cost.[13]

In September 2019, the cybersecurity company Recorded Future published a detailed report on disinformation in the private sector, describing exactly how disinformation vendors use news sites and social media profiles to spread their narratives online.

For research purposes, the company commissioned two disinformation agents operating in Russian-speaking underground forums to create a disinformation campaign targeting a fictitious company named Tyrell Corporation (created by the researchers) located in a Western country. One of the actors was hired to market Tyrell Corporation by disseminating positive PR, while the other was tasked with doing the opposite.

Both actors proceeded in a similar manner – creating accounts on several social media platforms and gathering followers, before publishing articles in a variety of news media and sharing them via these social media accounts. Both provided their “client” with a list of available “news sites” where their articles could be published, ranging from obscure-sounding ones such as lovebelfast.co.uk to more legitimate-sounding ones such as eveningtimes.co.uk.

Reputation enhancement cost the fictitious Tyrell Corporation USD 1,850, while reputation destruction cost more than double, at USD 4,200.[14] However, the disinformation agent commissioned to damage Tyrell Corporation’s reputation offered several additional “services”, including filing false accusations against the target with law enforcement agencies and spreading rumours of these allegations online.

Spotting disinformation

In the above-described scenario, a compliance officer doing due diligence on “Tyrell Corporation” would, in the course of their research, likely come across the false allegations deliberately spread by the disinformation agent. Although disinformation tactics are growing in sophistication, a trained eye can, in most cases, distinguish authentic from inauthentic content.

Telltale signs include:

  • low-quality content produced in poorly written or incomprehensible English (or any other language);
  • information published on websites with obscure domain names;
  • identically worded content that appears across multiple sites;
  • allegations that are not substantiated or reproduced by any mainstream media outlet; and
  • content posted by social media profiles that appear suspiciously anonymous.
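Some of these checks lend themselves to simple automation. The following minimal Python sketch flags identically or near-identically worded content across sites using only the standard library; the domain names and article snippets are invented for illustration, and the similarity threshold is an assumption to be tuned, not an established standard.

```python
# A minimal sketch, not production tooling: flag near-duplicate "news" text
# across sites, one of the telltale signs listed above. Standard library only;
# the domains and article snippets are invented for illustration.
from difflib import SequenceMatcher
from itertools import combinations

def normalise(text: str) -> str:
    """Lower-case and collapse whitespace so trivial edits don't hide a match."""
    return " ".join(text.lower().split())

def similarity(a: str, b: str) -> float:
    """Return a 0..1 ratio; 1.0 means identical after normalisation."""
    return SequenceMatcher(None, normalise(a), normalise(b)).ratio()

# Hypothetical article bodies collected from three different sites.
articles = {
    "obscure-news-site.example": "Tyrell Corp faces fraud allegations, sources say.",
    "another-blog.example":      "Tyrell Corp faces fraud allegations, sources say!",
    "unrelated-outlet.example":  "Local bakery wins regional award for sourdough.",
}

THRESHOLD = 0.9  # an assumed cut-off, to be tuned against known-good data
for (site_a, text_a), (site_b, text_b) in combinations(articles.items(), 2):
    score = similarity(text_a, text_b)
    if score >= THRESHOLD:
        print(f"Possible coordinated content: {site_a} vs {site_b} ({score:.2f})")
```

A real pipeline would first fetch the pages and strip them down to their article text; the point here is only that copy-pasted “news” content is cheap to detect once collected.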

Nevertheless, the tactics of disinformation are constantly evolving. Once researchers learn how to recognise a particular technique or method for spreading disinformation, agents adapt and change their methods. A pertinent example is the use of fake profile pictures to make social media accounts appear authentic. The most common method employed by disinformation agents has been simply to steal and reuse images from genuine social media profiles. Fake profiles may also use photographs from stock image databases. However, over time, researchers have become aware of this technique and employed reverse image searches to identify stolen photos.
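To illustrate what a reverse image search does under the hood, the sketch below compares perceptual hashes – image fingerprints that survive resizing and re-encoding – between a suspect profile photo and a set of known images. It assumes the third-party Python packages Pillow and imagehash are installed; the file names and the distance threshold are placeholders, not part of any researcher’s actual tooling.

```python
# A minimal sketch of the idea behind a reverse image search: compare a
# suspect profile photo against known images via perceptual hashing.
# Requires the third-party packages Pillow and imagehash
# (pip install Pillow imagehash); all file names below are placeholders.
from PIL import Image
import imagehash

def fingerprint(path: str) -> imagehash.ImageHash:
    """Perceptual hash: similar-looking images yield nearby hashes."""
    return imagehash.phash(Image.open(path))

suspect = fingerprint("suspect_profile.jpg")  # photo from the account under review

# Hypothetical reference set: stock photos and images scraped from real profiles.
references = {
    "stock_photo_123.jpg": fingerprint("stock_photo_123.jpg"),
    "real_profile_456.jpg": fingerprint("real_profile_456.jpg"),
}

MAX_DISTANCE = 8  # Hamming distance cut-off; an assumption, tune as needed
for name, ref_hash in references.items():
    if suspect - ref_hash <= MAX_DISTANCE:  # subtraction yields Hamming distance
        print(f"Profile photo likely reused from {name}")
```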

In response, the more sophisticated disinformation agents leveraged advancements in machine learning and began using AI-generated profile pictures. Since such images do not show real people, they allow scammers to evade reverse image searches.[15] However, tech companies have now developed special software – a simple version of which is available as an extension for the Google Chrome web browser – that can distinguish genuine from AI-generated photos.[16] The “fake image ball” now appears to be back in the scammers’ court.

Moreover, advancements in translation technology and generative AI mean that disinformation campaigns will likely look and feel more sophisticated in the future. A paper published by OpenAI in January 2023 warned that text creation tools powered by language models could soon generate more impactful or persuasive messaging than human propagandists (who often lack the requisite linguistic or cultural knowledge of their target). The report also indicated that such tools would make influence operations more difficult to detect, since they can repeatedly create fresh content without resorting to copy-pasting or other telltale time-saving behaviours employed by human actors.[17]

Outlook

It is imperative that due diligence analysts identify suspicious content. More than that, they need to provide a critical and well-informed assessment of the information and its quality, based on their understanding of the media landscape in the countries in which they operate, and their knowledge of how targeted disinformation campaigns are carried out.

Assessing the reliability of sources and cross-checking the validity of content has always been fundamental to any research methodology. The increased sophistication of disinformation campaigns, fuelled by rapid advancements in technology, has made it even more crucial. A heightened understanding and awareness of the methods employed by disinformation agents will arm due diligence professionals with the skills necessary to spot the hallmarks of targeted campaigns.

In addition, given the call for increasing automation in the compliance space, it is likely that, besides dealing with false positives, compliance professionals will be confronted with an increased number of red flags resulting from disinformation campaigns. Only experienced investigators – schooled in the latest disinformation tactics – will be in a position to identify and analyse the quality of information, to ensure thorough risk-based assessments and evidence-based decision-making.

Jennifer Hanley-Giersch / Filip Brokes