‘War of weaponized influence’: U.S. spending millions on tools to find foreign tweets and memes

The U.S. government is building expensive new tools to fight enemies that use tweets and memes instead of bullets and bombs.

Instead of the deserts and mountains of Afghanistan, the battleground is on tech platforms like Twitter, where the Department of Defense’s research and development arm says America is fighting an “asymmetric, continual, war of weaponized influence narratives.”

The Defense Advanced Research Projects Agency said it plans to spend $59.5 million in the next four years on researchers making algorithms and gathering content including tweets, memes, political ads and blog posts for the government’s “Influence Campaign Awareness and Sensemaking” (INCAS) program.

INCAS program manager Brian Kettler said the goal is to give the government tools for use in providing an “early warning” of foreign influence. He said the tools may also be useful for agencies working to help the U.S. spread its own narratives around the world and to stop content from going viral online.

For example, researchers for cybersecurity company Mandiant said last month they discovered a pro-China digital influence campaign designed to stoke Asian American anger over racial injustice inside the U.S. and shift the narrative about blame for COVID-19.

The Mandiant team said the pro-China operation used at least 30 social media platforms, plus dozens of websites in languages including Chinese, English, German, Japanese, Korean, Russian and Spanish.

Mandiant Vice President John Hultquist said DARPA is smart to pursue a program tackling foreign influence, but he said it may already be almost too late for solutions, given that China has already taken action to push protesters into the streets.

“I think this is a massive problem that the U.S. is going to have to deal with so I think trying to develop a capability like this is absolutely the right thing to do,” Mr. Hultquist said. “It’s only growing.”

Details about who will use the tools, and how they will be used, are not known because the tools do not yet exist — DARPA said the INCAS program just started in August. Mr. Kettler emphasized that DARPA is in charge of building tools and techniques, not deciding how they are used by the U.S. government.

Researchers working for DARPA are divided into five teams, according to DARPA documents. One team will collect foreign persons’ digital content, such as tweets and memes.

Two teams will take that content and develop algorithms to sort it for indicators of foreign influence and for an intended audience’s “psychographic attributes,” which encompass a community’s sacred values and worldviews, such as politics and religion.

Another team will take the algorithms and data to build a model for understanding foreign influence campaigns, and for helping analysts who will use the tools to have confidence that the algorithms picked up a particular foe, such as China or Russia, and not some random person online. This team will provide “human machine interfaces” that analysts will use to interact with the INCAS program’s tools.

A final team will take what all of the others come up with and test it, including by working with government agencies on real-world scenarios.

DARPA is not especially forthcoming about the type of foreign influence assault it wants to prevent. Its documents cite “China’s Belt and Road Initiative,” an infrastructure funding effort run by the Chinese communists, as part of a scenario to be used in evaluating the program’s performance.

DARPA documents also point to adversaries seeking to undermine foreign people’s attitudes about American military bases inside their countries. But the government’s broadest fear is that an enemy will use foreign influence to accomplish something that the U.S. has not considered.

“Quite a few people, quite a bit of people now, are worried about election security because we’ve seen some examples where that’s been concerning, but I want to cast the net much broader and say, ‘Can we see adversary information campaigns around a variety of topics?’” Mr. Kettler said. “What haven’t we thought of?”

While DARPA documents and Mr. Kettler stressed that the tools are focused on detection, University of Illinois researchers involved in the program say their work will be the “first step towards development of effective countermeasures” against influence operations.

Such countermeasures may involve ways to keep a foe’s messages from sticking in people’s minds, or to stop messages from going viral, according to Mr. Kettler, who said his program is not devising a system for how countermeasures would be developed.

The University of Illinois said last month it received $5.8 million from DARPA for the INCAS program. DARPA said the Illinois researchers would work with counterparts at the University of Southern California to develop ways to sort an audience by psychographic attributes.

“People create narratives that are divisive,” said Tarek Abdelzaher, a University of Illinois professor working with DARPA, in a statement last month. “They are intended to polarize, radicalize, whatever … but we don’t understand the impact of that weapon on the population the way we understand the impact of a bomb or lightning strike.”

Researchers’ focus on psychographic attributes is aimed at identifying which groups of people are susceptible to foreign messages in a way that sparks an emotional response, such as outrage against the U.S. government, or that prompts real-world action, such as protests.

While marketers are concerned with the personality traits of an individual that determine whether someone is likely to buy a product, DARPA is concerned with groups’ emotional responses to content based on their values and worldviews stemming from religious or political beliefs, according to DARPA presentation slides and a broad agency announcement, both from October 2020.

DARPA’s presentation slides showed marketers using beauty ads to sell makeup differently to introverts and extroverts.

By contrast, DARPA’s new approach to dividing people was depicted in a graph plotting an audience’s risk of an emotional response to issues such as “guns/gun control,” “climate change,” “gays in military” and “immigration.”

Mr. Kettler said DARPA wanted to avoid duplicating the private sector’s tools built for marketing, and DARPA intended to make its forthcoming psychographic algorithms available for widespread use.

“The various algorithms for analyzing, for looking at aggregate social media data and saying, ‘there seems to be these psychographic attributes involved,’ that’s the kind of stuff that we hope to be making available and that could be incorporated into a variety of different kinds of tools by different commercial customers, academic researchers, so forth, could take advantage of those tools,” Mr. Kettler said.  

DARPA is relying on an array of companies, institutions and labs to develop the government’s capabilities. Alongside the University of Illinois and University of Southern California researchers, participating research teams include representatives from Smart Information Flow Technologies, Protagonist Technology, Uncharted Software, the University of Maryland Applied Research Laboratory for Intelligence and Security, and Lockheed Martin Advanced Technology Laboratories. Mr. Kettler joined DARPA in 2019 from Lockheed Martin.

The government agencies that will use these tools have not yet been selected and are expected to emerge as the program unfolds. Mr. Kettler said the tools are not solely for the Department of Defense, noting that the “entire government” is concerned about foreign influence and citing the State Department as particularly focused on the issue.
