Facebook to pause all political advertising—after the election
Tech News
By Admin - October 12, 2020

It seems fair to say that, here in the United States, this is an election season unlike any other, with tensions running exceptionally high. Facebook, which through its collection of apps reaches the vast majority of the US population, has launched yet another slew of initiatives to mitigate the harm that misinformation on its platforms can cause. Several of these measures are sound ideas, but unfortunately, two of its latest efforts once again amount to waiting until the horse has made it halfway around the world before shutting the barn door.

Facebook explained yesterday in a corporate blog post what its Election Day efforts are going to look like on both Facebook and Instagram. The company has promised for months that it will run real-time fact-checking on and after November 3 to prevent any candidate from declaring victory before a race is actually called, and it showed what that process will look like.

In that post, Facebook also said that although ads are “an important way to express voice,” it plans to enact a temporary moratorium on “all social issue, electoral, or political ads in the US” after the polls close on November 3, to “reduce opportunities for confusion or abuse.” That stance will put Facebook, at least for the time being, in line with Twitter’s position on political ads.

Too late?

Confusion and abuse, however, are already rampant. For the last year, Facebook has maintained an infamously hands-off stance when it comes to fact-checking political advertising. The platform has occasionally intervened, usually after media pressure, when an ad violates one of its other policies. In June, for example, Facebook pulled a line of Trump campaign ads for using Nazi imagery, and in September it pulled another batch of Trump campaign ads for targeting refugees. Other ads, however—including deceptively manipulated photos and videos of Democratic presidential candidate Joe Biden—have been left alone.

Facebook is also prohibiting content that calls for “militarized” poll-watching and voter intimidation, as the Trump campaign ramps up rhetoric calling for supporters “to go into the polls and watch very carefully.” That policy, however, applies only to content posted going forward, not to content that has already gone viral.

Unfortunately, such content has already generated millions of views. Donald Trump Jr. in September posted a video to Facebook calling for an “Army for Trump” to defend against alleged widespread election fraud efforts by the Democratic Party. (No evidence of any such effort exists, and while there are documented instances of election fraud in recent years, it is extremely rare.)

“When we change our policies, we generally do not apply them retroactively,” Facebook executive Monika Bickert said. “Under the new policy, if that video were to be posted again, we would indeed be removing it.”

Concerns about a rise in violence leading up to the election are, sadly, not unfounded. Just this morning, for example, the FBI announced it had intercepted a plot, coordinated in part on Facebook and other social media platforms, by five Michigan men and one Delawarean to kidnap Michigan Gov. Gretchen Whitmer and “overthrow” the state government.

Coordinated response

It seems clear that social media platforms, acting alone, cannot sufficiently address the threat of coordinated disinformation. Facebook today said as much when outlining a set of proposals for new regulation or legislation that would apply to both it and other social platforms.

“If malicious actors coordinate off our platforms, we may not identify that collaboration,” Facebook head of security policy Nathaniel Gleicher wrote. “If they run campaigns that rely on independent websites and target journalists and traditional media, we and other technology companies will be limited to taking action on the components of such campaigns we see on our platforms. We know malicious actors are in fact doing both of these things… There is a clear need for a strong collective response that imposes a higher cost on people behind influence operations in addition to making these deceptive campaigns less effective.”

Today, for example, Facebook said it removed 200 fake accounts tied to a marketing firm that worked on behalf of two US conservative political action groups, Turning Point USA and Inclusive Conservation Group. The marketing firm is now banned from Facebook, but much of the coordination its employees did would have taken place using tools other than Facebook, and its networks of fake accounts and disinformation may still be active on other platforms.

“Regulations can be powerful tools in our collective response to these deceptive campaigns,” Facebook said, recommending seven key principles. Chief among them are transparency and collaboration: companies should agree to share threat signals across “platforms, civil society, and government, while protecting the privacy of innocent users who may be swept up in these campaigns,” Facebook suggested.

But Facebook also wants help from the law in the form of actual consequences for conducting certain kinds of influence operations. The company is asking regulators to “impose economic, diplomatic and/or criminal penalties” on the entities that organize these disinformation campaigns.