Part 4 – Key Mitigation Step – Using AI to Recognize and Flag Disinformation

Earlier articles in this series examined how artificial intelligence is being used to generate, share, and amplify propaganda. But AI can also serve as a tool to recognize such activity and to suggest mitigations.

When false information is generated, it’s not always possible to know who created it, but the information still leaves multiple types of data in its wake. That data can be evaluated and acted upon.

Most propaganda has a starting point, and information can be gleaned from where the propaganda first appears. Messages can be evaluated for location and for affiliations with other entities, and the hosting site’s history can be reviewed. From there, information sometimes can be discovered about funding sources and about the systems that may have assisted in the distribution effort.
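Shared infrastructure is one discoverable affiliation: sites operated as a network often use the same hosts or nameservers. Below is a minimal sketch of such a check, assuming the dnspython package; the domain names are hypothetical.

```python
# Group suspect sites by shared nameservers -- one way to surface
# affiliations between outlets operated as a network.
# Assumes: pip install dnspython; the domains below are hypothetical.
from collections import defaultdict

import dns.resolver

def nameservers(domain: str) -> frozenset:
    """Return the set of authoritative nameservers for a domain."""
    return frozenset(r.to_text() for r in dns.resolver.resolve(domain, "NS"))

suspect_sites = ["example-news-one.com", "example-news-two.com"]  # hypothetical

by_ns = defaultdict(list)
for site in suspect_sites:
    by_ns[nameservers(site)].append(site)

for ns, sites in by_ns.items():
    if len(sites) > 1:
        print(f"Shared nameservers {sorted(ns)}: {sites}")
```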

As Figure 1 (below) illustrates, when false news is discovered, there are details to evaluate related both to the creator/disseminator of the message and to the news content itself. Each branch has multiple further points to evaluate.

  • Creator/Disseminator – Do the news articles seem written by people, or are they AI generated? (A detection sketch follows this list.) What details are available from each path? Who are the targets? Are the perpetrators targeting their audience by specific vulnerability (young or old, educated or not), or by the type of media platform they use?
  • News Content – What are the message mechanics? What is the intent? What document parts, such as headlines, article text, and tagging, can be reviewed to glean additional details? Next, what is the context of the message within the larger community that might receive it? Is the information a one-to-many broadcast, or is it meant to generate discussion and pass-along links on social media? Is the dissemination being aided by message-posting bots? Details on any of these can help analysts discover more about who is creating the propaganda.
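One widely used heuristic for the “written by people or AI generated?” question above is statistical: text sampled from a language model tends to look less “surprising” (lower perplexity) to a similar model than human prose does. Here is a minimal sketch, assuming the Hugging Face transformers library and the small GPT-2 model; the cutoff is illustrative and would need tuning on known human and machine samples.

```python
# Perplexity-based screen for machine-generated text.
# Assumes: pip install torch transformers; the 25.0 cutoff is illustrative.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def perplexity(text: str) -> float:
    """Score text under GPT-2; lower values loosely suggest machine output."""
    enc = tokenizer(text, return_tensors="pt", truncation=True, max_length=512)
    with torch.no_grad():
        out = model(**enc, labels=enc["input_ids"])
    return float(torch.exp(out.loss))

article_text = "..."  # candidate article body goes here
if perplexity(article_text) < 25.0:  # hypothetical threshold
    print("Flag for human review: statistics resemble AI-generated text")
```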

Figure 1 shows several of the points that can be evaluated to help detect false news. If any of these are missing, unknown, or faked in a way that does not fit the rest of the news content, the mismatch can be a hint that the claimed news is actually disinformation.

Figure 1 – Points that can be evaluated to help detect false news

Using AI to Make Content Decisions

In Figure 1, above, we highlighted potential review points where artificial intelligence systems can seek more details about a propaganda message.

Next, Figure 2, below, takes the review a step further to highlight some of the specific things AI systems might seek. For example, a real news story tries to answer the “Five Ws and an H”: Who, What, When, Where, Why, and How. When articles are propaganda masquerading as news, many of these details may be missing. References in false news are often vague to the point where they can’t be fact-checked. For example, a propaganda story might mention “a man in the Midwest” rather than a specific Joe Smith of 123 Main Street, Kansas City, and the date may be “recently” rather than something more exact.
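An AI system can approximate this vagueness check with named-entity recognition: a story that names no people, places, or dates offers nothing to fact-check. Below is a minimal sketch, assuming the spaCy library and its small English model; the gap rules are illustrative.

```python
# Check a story for checkable Five-W specifics via named-entity recognition.
# Assumes: pip install spacy && python -m spacy download en_core_web_sm
import spacy

nlp = spacy.load("en_core_web_sm")

def five_w_gaps(text: str) -> list:
    """Report which checkable specifics (who/where/when) are absent."""
    labels = {ent.label_ for ent in nlp(text).ents}
    gaps = []
    if "PERSON" not in labels:
        gaps.append("no named person (who)")
    if not labels & {"GPE", "LOC", "FAC"}:
        gaps.append("no named place (where)")
    if not labels & {"DATE", "TIME"}:
        gaps.append("no checkable date (when)")
    return gaps

# Vague references like the examples above tend to yield few usable entities.
print(five_w_gaps("A man in the Midwest recently saw something alarming."))
```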

AI can seek out specific flags that indicate the news is manufactured. Figure 2 lists some examples.

  • Five Ws – The AI system can look not only for the Five Ws but also for supporting evidence. Does the article quote any police reports where information can be separately confirmed? Court filings? Other records that support the claims? If not, the news could be dubious.
  • Domain Review – How long has the news site been around? Does it have local or foreign registration? Is it part of a larger network? In turn, is the content itself classifiable? Does it use language classified as hate speech? Is it biased or misleading? (A domain-age and language-classification sketch follows this list.)
  • Sensationalism – Is the article designed to be inflammatory, or to push people toward a specific side? Does it purposely contort facts? Does it downplay scientific evidence in favor of emotional responses? Does it call for violence? Does it specifically call for either action or inaction as part of its report? If so, the article is less about actual news and more about indoctrination. AI review can send alerts on problem news, especially if it tries to incite violence.
  • Special appeals – If the article spews insults or tries to link the reader’s loyalty to some other issue, it is immediately suspect. Such appeals often push a political bias or urge the reader toward a misleading conclusion.
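Two of the review points above lend themselves to automation: domain age can be pulled from WHOIS records, and hateful or toxic language can be scored with an off-the-shelf classifier. Below is a minimal sketch, assuming the python-whois package and the Hugging Face transformers library; “unitary/toxic-bert” is one publicly available toxicity model used purely as an example, and all thresholds are illustrative.

```python
# Combine a WHOIS domain-age check with a toxicity classifier to flag
# suspect articles. Assumes: pip install python-whois transformers torch
from datetime import datetime

import whois
from transformers import pipeline

toxicity = pipeline("text-classification", model="unitary/toxic-bert")

def domain_age_days(domain: str):
    """Approximate the site's age from its WHOIS creation date."""
    created = whois.whois(domain).creation_date
    if isinstance(created, list):  # some registrars return several dates
        created = created[0]
    return (datetime.now() - created).days if created else None

def flag_article(domain: str, text: str) -> list:
    flags = []
    age = domain_age_days(domain)
    if age is not None and age < 180:  # illustrative "pop-up site" cutoff
        flags.append(f"young domain ({age} days old)")
    result = toxicity(text[:512])[0]   # truncate to fit the model's window
    if result["score"] > 0.8:          # illustrative confidence threshold
        flags.append(f"language classified as {result['label']}")
    return flags
```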

Figure 2 – Specific flags an AI system can seek in suspect news

Running analytics and AI content analysis against news articles can help spark deeper dives into a site’s focus, ownership, and history. For new (pop-up) news sites, analysts may want to study and rank inbound traffic. If the traffic volume is low but the site stays in operation, chances are the site is not there to make money from traffic and advertising. It’s there to host and promote false information.

Likewise, if the inbound traffic goes to just a handful of pages on a larger site, and if those pages contain dubious information, chances are bots are linking and driving traffic to just those stories. It’s quite possible the whole fake news site was set up to disguise the fact that viewers are being funneled to just a few pages.
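This concentration pattern can be measured directly from server or analytics logs by computing what share of all visits lands on the top few pages. Below is a minimal sketch in plain Python; the page-view counts and the 90 percent threshold are hypothetical.

```python
# Flag sites whose traffic is concentrated on a handful of pages --
# a pattern consistent with bot-driven amplification of a few stories.
def top_page_share(page_views: dict, top_n: int = 3) -> float:
    """Fraction of all visits landing on the top_n pages."""
    counts = sorted(page_views.values(), reverse=True)
    total = sum(counts)
    return sum(counts[:top_n]) / total if total else 0.0

# Hypothetical analytics export: URL path -> visit count
views = {"/story-a": 9200, "/story-b": 8700, "/story-c": 7900,
         "/about": 40, "/archive": 25, "/contact": 10}

share = top_page_share(views)
if share > 0.90:  # illustrative threshold
    print(f"{share:.0%} of traffic hits 3 pages; possible bot amplification")
```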

Because of the vast quantity of (potentially) false information that needs to be analyzed, and because of the huge number of sites that must be tracked, AI-enhanced fact-checking and review of site metadata offer a key path forward in this fight. Machine learning, augmented by human supervision and system training, can help recognize and combat propaganda while also targeting global dissemination networks and botnet amplifiers.