By ERIC TUCKER and BARBARA ORTUTAY
The Trump campaign spent more than $17,000 on the ads for Trump and Pence combined. The ads began running on Wednesday and received hundreds of thousands of impressions.
In a statement, Trump campaign communications director Tim Murtaugh said the inverted red triangle was a symbol commonly used by antifa so it was included in an ad about antifa. He said the symbol is not in the Anti-Defamation League’s database of symbols of hate. The Trump campaign also argued that the symbol is an emoji.
“But it is ironic that it took a Trump ad to force the media to implicitly concede that Antifa is a hate group,” he added.
Antifa is an umbrella term for leftist militants bound more by belief than organizational structure. Trump has blamed antifa for the violence that erupted during some of the recent protests, but federal law enforcement officials have offered little evidence of this.
Some experts disputed that the red triangle was commonly used as an antifa symbol.
European anti-fascist groups initially used the red triangle as a symbol, hoping to reclaim its meaning after World War II, but it is no longer widely used by the movement nor by U.S. antifa groups, said Mark Bray, a Rutgers University historian and author of “Antifa: The Anti-Fascist Handbook.”
The ADL said the triangle was not in its database because it is a historical symbol and the database includes only those symbols used by modern-day extremists and white supremacists.
“Whether aware of the history or meaning, for the Trump campaign to use a symbol — one which is practically identical to that used by the Nazi regime to classify political prisoners in concentration camps — to attack his opponents is offensive and deeply troubling,” ADL chief executive officer Jonathan Greenblatt said in a statement.
Even with the ads removed, Facebook and CEO Mark Zuckerberg still face persistent criticism for not removing or labeling earlier posts by Trump that spread misinformation about voting by mail and, many said, encouraged violence against protesters during recent unrest in American cities.
Those questions arose anew during Thursday’s hearing as Democrats pressed the executives about what moral obligations they felt they had when it came to content and about decisions they’ve made to remove, label or leave up false or incendiary posts.
Facebook, for instance, was asked why it did not swiftly remove a doctored video of House Speaker Nancy Pelosi, D-Calif., last year that appeared to show her slurring her words.
“If we simply take a piece of content like this down, it doesn’t go away,” Gleicher responded. “It will exist elsewhere on the internet. People who are looking for it will still find it.”
Later Thursday, Twitter labeled a video Trump had posted as “manipulated media.” The president had tweeted a doctored video of two young children with a fake, misspelled CNN headline of “Terrified todler runs from racist baby.” For the first time last month, Twitter began flagging some of Trump’s tweets with a fact-check warning.
With Thursday’s hearing focused on the spread of disinformation tied to the 2020 election, the companies said they had not yet seen the same sort of concerted foreign influence campaign as the one four years ago, when Russia sowed discord online by playing up divisive social issues.
But that suggests the threat has simply evolved rather than diminished, said the executives, who pointed out that media entities linked to foreign governments were now directly engaging online on American social issues as a way to influence public opinion. Chinese actors, for instance, have likened allegations of police brutality in the U.S. to the criticism China faced for its aggressive treatment of protesters in Hong Kong.
“That shift from platform manipulation to overt state assets is something that we’ve observed,” said Nick Pickles, Twitter’s public policy strategy and development director.
The companies say they have accelerated efforts to root out fake accounts. Twitter, for instance, said it had challenged more than 97 million accounts that showed signs of platform manipulation in the first six months of 2019, and Facebook said it had disabled about 1.7 billion fake accounts between January and March.
Preventing disinformation ahead of the election is a significant challenge in a country facing potentially dramatic changes in how people vote, with the expected widespread use of mail-in ballots creating openings to cast doubt on the results and spread inaccurate narratives.
Facebook said Thursday that it is working to provide Americans with accurate information about the vote-by-mail process, with notifications to users about how to request ballots and about whether the date of their election has changed. The outreach is targeted to voters in states where no excuse is needed to vote by mail or where fears of the coronavirus are accepted as an excuse.
“Providing that accurate information is one of the best ways to mitigate those types of threats,” Gleicher said.
Associated Press writer Amanda Seitz contributed to this report.
Copyright 2020 The Associated Press. All rights reserved. This material may not be published, broadcast, rewritten or redistributed without permission.