DHS Observations on How Disinformation Is Promoted, and Points Where It Can Be Shut Down
In the run-up to the 2020 US election, the Department of Homeland Security issued a detailed paper titled "Combating Targeted Disinformation Campaigns."
The following text comes from a section of the paper called the Disinformation Kill Chain. It outlines how disinformation campaigns are typically structured and expanded.
————–
“The “connectedness” of modern society and the free availability of content distribution platforms have
greatly increased the scope, scale, and speed of disinformation campaigns. Disinformation campaigns are
not a new phenomenon. While the scale of attack, scope of impact, and speed of execution of modern
disinformation campaigns have brought new attention to the issue, the fundamental elements of such
campaigns pre-date the internet. The cyber kill chain model serves as an inspiration for the following
framework, which outlines the basic structure of these campaigns:
- Reconnaissance: Analyze the target audience and how information flows through the target’s
environment, identify societal fissures to exploit, and design the campaign execution plan.
- Build: Build campaign infrastructure (computing resources, operational staff, initial accounts,
personas, bots, and websites). Sophisticated threat actors may prepare the environment through
tailored diplomatic, propaganda, and/or official messaging.
- Seed: Create fake and/or misleading content, then launch the campaign by delivering content to initial
seeding locations such as online forums or social media platforms. Delivering content to multiple
locations using different accounts can create the illusion that there are multiple sources for a story.
- Copy: Write articles, blogs, and/or new social media posts referencing the original story. Witting
agents can assist by using their media platforms for seemingly authentic distribution. The copy
phase is a form of “information laundering,” laying the groundwork for amplification by adding
legitimacy to poorly sourced stories.
- Amplify: Amplify content by pushing the story into the communication channels of the target
audience. The use of bots and inauthentic accounts helps provide momentum; the content may then
be distributed by other witting agents (quasi-legitimate journalists) and unwitting agents (useful
idiots). Successful amplification will result in the content being distributed by authentic voices,
such as the mainstream media, which provides a trending effect and subsequent amplification by
other unwitting agents and the target audience (i.e., the unwitting audience now spreads the
misinformation because they do not know it is false and want to be helpful by informing their
peers).
- Control: Control the effect and manipulate the target’s reaction by infiltrating conversations about
the content. Incite conflict and/or strengthen the illusion of consensus by trolling the comment sections
of online posts. If a threat actor is accused of propagating disinformation, he or she may deny it
vehemently, offer a counternarrative, and/or accuse an opposing party of planting the story.
- Effect: The target actualizes the desired effect, such as voting for a preferred candidate, expressing
behavior against a preferred group, or losing faith in the very idea of truth.
A threat actor may skip steps in this process, but doing so can reduce the effectiveness of the campaign and
make it more difficult to mask the identity and objectives of the threat actor. Well-resourced threat actors
may support and enable their campaigns through use of the entire influence toolkit, including economic and
diplomatic activities, public relations, and espionage.”