After the platforms’ disastrous handling of misinformation in the lead-up to the 2016 election, users lost faith in the once-celebrated services. Now, with changes made in recent years to address those failures, these companies are hoping to restore that lost respect, even as they remain bastions of conspiracy and false narratives. “The more time you spend on these platforms, the more legitimate these messages of propaganda and disinformation are going to seem to you,” said Marc Berkman, CEO of the Organization for Social Media Safety. “Because that’s where you’re investing your time, and where we invest our time becomes the place we invest our trust.”

New Concerns, New Actions

An explosive, ethically dubious New York Post story about Hunter Biden, the son of Democratic presidential nominee and former Vice President Joe Biden, began circulating online on Oct. 14. Citing concerns about the story's accuracy, Twitter and Facebook independently decided to restrict its spread, barring users from sharing the link until it could be vetted by independent fact-checkers. The unusual step marked a complete reversal from how social media platforms treated content just four years ago.

The quick action by Facebook specifically marked the tech giant's first deployment of a tool it calls the “viral content review system.” The company has touted the tool as a circuit breaker designed to limit false and misleading news before it spreads widely, part of its effort to repair the platform's damaged image after 2016. Republican users and lawmakers, who have long accused social media platforms of anti-conservative bias, labeled the deployment a partisan attack. Facebook stood by its decision, citing “hack and leak” operations, a known cybersecurity concern in which foreign adversaries feed questionably obtained disinformation to news outlets.

The previous election cycle was rife with coordinated disinformation campaigns and with readily obtainable user information exploited for political purposes, most famously by firms like Cambridge Analytica. After the election, those revelations caused many, from experts and politicians to laymen, to rethink the impact of social media platforms as a political tool, and users' trust in the platforms fell drastically.

With less than a week left before Election Day, Facebook is not the only tech company rolling out new tools to strengthen its information protection protocols. Other platforms have gone into overdrive, adopting new strategies to address their outsized influence in the wake of 2016's failures. Tumblr, which saw chaos agents spreading voter apathy through memes and pro-social-justice content, has since been proactive in curtailing such accounts: it has removed them and sent mass emails to users who engaged with them, explaining that the accounts were run by foreign actors to sow discord. Earlier this month, Twitter changed its popular retweet feature from an immediate action to a two-step process, hoping the extra step causes users to pause and rethink before sharing content with their followers. Meanwhile, Reddit and YouTube have moved to restrict political ads and trolls.

Instagram, owned by Facebook, now adds a tag reading “For official resources and updates about the 2020 US Election, visit the Voting Information Center” to posts that mention either candidate or the election, directing viewers to the company's new Voting Information Center, its latest attempt to curtail misinformation. Launched in August, Facebook's (and Instagram's) Voting Information Center was designed to help people register to vote while also providing a curated space for election information from officials and verified experts.

Fact or Fiction

Distinguishing fact from fiction remains as relevant now as it was in 2016, and for advocates, government officials, tech leaders, and average voters alike, it appears to be the future of conventional politics. That future is what primarily concerns Berkman. Focused on a myriad of social media-related issues, he believes addressing the problems will take far more than new, simplistic enforcement mechanisms.

“The failures are systemic. We’ve failed at multiple levels from public policy to education and also technology itself has not kept up. You really need all three of those working together to protect from these dangers,” he said during a phone interview with Lifewire. “The platforms themselves, their incentive is profit and it will always be profit. So, safety will always be a secondary consideration in so much as it supplements the profit-making motive.”

Keeping people on the platforms is central to social media companies' business plans, which often makes strict enforcement counterintuitive for them and slows its execution. These companies are slow to address content that violates their terms of service, including disinformation, allowing it to spread across online communities before finally being removed.

Figures released by the European Commission found that companies like Google, Twitter, and Facebook removed 89 percent of flagged hate content within 24 hours of review in 2019, up from 40 percent in 2016, a sign that platforms are taking their role in society more seriously in a post-2016 world. Yet with the viral explosion of conspiracies like QAnon and Pizzagate, misinformation still seems to flourish. The platforms have gotten better since 2016, but many see their enforcement as far from ideal.

“The truth is, we’re a bit in a black hole in terms of whether or not they’ve been successful. We get emails every day from people that contain deep fakes and false stories. There’s clearly been a degree of failure and a democracy cannot function in that environment,” Berkman said.

Above and Beyond

Disinformation has also encroached beyond the narrow digital walls of social media and moved toward more organic, personal channels. The Washington Post recently reported on 11th-hour text and email messages containing false information, threats, and long-debunked theories about both former Vice President Joe Biden and President Trump in swing states like Florida and Pennsylvania, as well as the potential toss-up state of Texas. The well-trodden paths of Facebook and Twitter have seemingly become stale for disinformation agents, as heavy scrutiny has pushed those channels to adopt, at least superficially, policies combating misleading content. But many are still trying.

On Oct. 21, less than two weeks before the election, Director of National Intelligence John Ratcliffe and FBI Director Christopher Wray announced at a press conference that Russian and Iranian agents had hacked local government databases to obtain voter information. “We have already seen Iran sending spoofed emails designed to intimidate voters, incite social unrest and damage President Trump. These actions are desperate attempts by desperate adversaries,” Ratcliffe said during the press conference.

The emails in question targeted Democratic voters under the guise of the far-right group Proud Boys, who recently made headlines after President Trump failed to denounce them during the first presidential debate. The messages warned that the group would “come after” recipients if they failed to cast their ballot for Trump, and each included the recipient's home address at the bottom to add an air of legitimacy.

To its credit, Facebook uncovered a trove of these small, interconnected networks, totaling over four dozen fake accounts on Instagram and Facebook, aimed at sowing discord and spreading misinformation about the election. One of the accounts was connected to the very hackers behind the threatening emails, Facebook's head of security policy, Nathaniel Gleicher, said. “We know these actors are going to keep trying, but I think we are more prepared than we have ever been,” he continued during a call with reporters.

Not Just Technology

Issues like these are why Facebook has decided to stop accepting new political ads in the week leading up to the election. Given its mistakes in 2016, when Ohio State researchers found some 4 percent of Obama voters were dissuaded from voting for Clinton by belief in fake news stories, the company is revving up its anticipatory policies to prepare for a flood of misinformation, disinformation, and conspiracy content from both domestic and foreign provocateurs. Other popular destinations like Reddit and Twitter have guardrails in place as well.

“This is a real huge problem, even from a cybersecurity perspective. It’s not clear to me how, but it has to start with a combined social and technical solution to have people and platforms accountable and make sure that such devils remain in the bottom,” said Dr. Canetti, director of the Center for Reliable Information Systems and Cyber Security at Boston University. “Either shut down companies or have repercussions for companies that spread disinformation. That’s the only way to give real incentives so this does not happen. Of course, the trade-off is we will not have such a free and nice interface where everyone can act nicely and freely, but maybe this is the price to pay.”

A 2019 study published in Management Information Systems Quarterly found that users in a behavioral experiment could deduce whether a headline was fake news or real only 44 percent of the time. Additionally, new YouGov research found that 63 percent of users have lost trust in social media platforms over the past few years as both privacy and information concerns have become front of mind, and 22 percent said they use the platforms less, citing privacy concerns.

Despite the precipitous decline in trust, hope remains as present as ever for Dr. Canetti. Additional steps may be needed before things are perfect, but in the meantime, public perception has shifted in important ways that allow users to be more discerning.

“People are aware. The companies were aware and now they’ve got pressure to do something about it because people were made aware of these failures,” he said. “Awareness and education may be the catalyst for long-term solutions. Being aware that whatever we see may be manipulated and that their interest is not always our interest is more well-known and that allows people to act in ways they did not in 2016.”