The FBI has begun sharing information about online trolls and other suspicious users with top technology companies as part of the bureau’s behind-the-scenes effort to disrupt foreign influence operations aimed at U.S. elections. Officials say it is ultimately the service providers’ responsibility to police malign messaging by Russia and other countries.
“By sharing information with them, especially about who certain users and account holders actually are, we can assist their own, voluntary initiatives to track foreign influence activity and to enforce their own terms of service,” said Adam Hickey, a deputy assistant attorney general.
The information, described as “actionable intelligence,” is funneled through a foreign influence task force that FBI Director Christopher Wray set up last November as part of a broader government effort to counter foreign influence operations and to prevent a repeat of Russian meddling in the 2018 midterm and 2020 presidential elections.
The U.S. intelligence community concluded last year that Russia tried to interfere in the 2016 election in part by orchestrating a massive social media campaign aimed at swaying American public opinion and sowing discord.
“Technology companies have a front-line responsibility to secure their own networks, products and platforms,” Wray said. “But we’re doing our part by providing actionable intelligence to better enable them to address abuse of their platforms by foreign actors.”
He said FBI officials have provided top social media and technology companies with several classified briefings so far this year, sharing “specific threat indicators and account information, and a variety of other pieces of information so that they can better monitor their own platforms.”
The task force works with personnel in all 56 FBI field offices and “brings together the FBI’s expertise across the waterfront — counterintelligence, cyber, criminal and even counterterrorism — to root out and respond to foreign influence operations,” Wray said at a White House briefing.
Hickey said on Monday that the FBI’s unpublicized information sharing with the social media companies is a “key component” of the Justice Department’s efforts to counter covert foreign influence.
“It is those providers who bear the primary responsibility for securing their own products and platforms,” Hickey said this week at MisinfoCon, an annual conference on misinformation held in Washington, D.C.
The comments come as top U.S. security officials, from Director of National Intelligence Dan Coats on down, warn of continued attempts by Russia and potentially others to disrupt the November midterm elections.
Coats said on Friday that U.S. intelligence agencies continue “to see a pervasive message campaign” by Russia, while Wray said Moscow “continues to engage in malign influence operations to this day.”
But the officials and social media company executives say the ongoing misinformation campaign has not reached the unprecedented levels seen during the 2016 election.
Hickey, of the Justice Department’s national security division, said that the agency doesn’t often “expose and attribute” ongoing foreign influence operations partly to protect the investigations, methods and sources, and partly “to avoid even the appearance of partiality.”
Social media, technology companies
Social media and technology companies, widely criticized for their role in allowing Russian operatives to use their platforms during the 2016 election, have taken steps over the past year to crack down on misinformation.
In June, Twitter announced new measures to fight abuse and trolls, saying it is focused on “developing machine learning tools that identify and take action on networks of spammy or automated accounts automatically.”
In April, Facebook announced that it had taken down 135 Facebook and Instagram accounts and 138 Facebook pages linked to the Internet Research Agency, a Russian troll farm indicted in February for orchestrating Russia’s social media operations in 2016.
The company did not say whether it had removed the pages and accounts based on information provided by the FBI.
Monika Bickert, Facebook’s head of product policy and counterterrorism, told an audience at the Aspen Security Forum last month that the social network has moved to shield its users from fake information by deploying artificial intelligence tools that detect fake accounts and by instituting advertising transparency requirements.
Tom Burt, vice president for customer security and trust at Microsoft, speaking at the same event, disclosed that the company had worked with law enforcement earlier this year to foil a Russian attempt to hack the campaigns of three candidates running for office in the midterm elections.
He did not identify the candidates by name but said they “were all people who, because of their positions, might have been interesting targets from an espionage standpoint, as well as an election disruption standpoint.”
Democratic Sen. Claire McCaskill of Missouri confirmed late last month that Russian hackers tried unsuccessfully to infiltrate her Senate computer network, raising questions about the extent to which Russia will try to interfere in the 2018 elections.
Wray stressed that the influence operations are not merely “an election cycle threat.”
“Our adversaries are trying to undermine our country on a persistent and regular basis, whether it’s election season or not,” he said.