Ahead of the Nov. 3 U.S. elections, social media platforms such as Facebook, Twitter, Google, YouTube, TikTok and Pinterest have released new policies on how they plan to combat the spread of election and voting misinformation.
The policies include removing or labeling false voting information and claims of election rigging. The companies are also grappling with how to enforce those measures if the results of the election remain unclear for a prolonged period.
With expected delays in counting mail-in ballots and campaign events suspended due to the ongoing pandemic, social media platforms are bracing to handle the flow of news on Election Day and in its aftermath, much of which will play out online.
Misinformation has been circulating across the world’s most popular tech platforms as the coronavirus continues to spread at an alarming rate across the country and the world. These technology companies have taken their own steps to battle the spread of myths, falsehoods and scams, according to MarketWatch.
In recent months, critics have faulted social media platforms for moving too slowly against misinformation, hate speech and extremism, and have expressed serious doubt that the companies can effectively enforce these rules before the November election.
This is not the first time Facebook has come under fire for being unclear about whether controversial posts should stay up on the site or be removed altogether. In 2018, Facebook faced serious backlash over its handling of misinformation spread on its platform during Russian interference in the 2016 U.S. presidential election.
Many users believe a similar problem will arise again and could cause major controversy in the next election. In the months leading up to the Nov. 3 vote, Facebook announced that it plans to begin making decisions on important posts related to the U.S. presidential election starting in mid to late October.
In addition to monitoring posts itself, Facebook will also give users the option to report content posted by others. If a reported post is found to violate Facebook’s policies against hate speech or harmful misinformation, the company will remove it.
Twitter says it plans to be more aggressive in labeling and removing election-related tweets that are inaccurate or unsupported by facts. “We will not permit our service to be abused around civic processes, most importantly elections,” Twitter officials said in a blog post.
“Any attempt to do so — both foreign and domestic — will be met with strict enforcement of our rules, which are applied equally and judiciously for everyone.”
The changes to Twitter’s policies could affect tweets claiming victory before election results have been certified, along with misleading posts regarding ballot tampering, according to BBC.
Google will take multiple steps to ensure its search engine is ready to handle election-related searches from people around the world. Google will remove any autocomplete predictions “that could be interpreted as claims for or against any candidate or political party,” the company said in a blog post.
The company also said it would remove claims about voting methods and requirements, for example, “you can vote by phone” or “you can’t vote by phone.” The search engine will remove misleading information about the election, including results not caught by its automated systems.
In an investigation conducted by The Wall Street Journal, between 0.1% and 0.25% of search queries on Google returned misinformation.
A majority of American citizens believe it has become more difficult since 2016 to distinguish factual from false information on social media. Few feel confident that big technology companies will prevent the misuse of social media platforms to influence the 2020 elections.
Back in 2016, social media platforms became a hotspot for misinformation regarding the U.S. election between then-presidential candidates Donald Trump and Hillary Clinton. Many people feared that fake news articles spreading across social media may have swayed the results of the election.
Many still worry about the way they receive their information about the election. With so many news sources available, people often consume news passively, making them more susceptible to false information.
As the 2020 election quickly approaches, you can develop new habits to better control the way you consume news, according to Fast Company.
One tip is to regularly visit trusted news apps and websites directly. These organizations produce original reporting, and there you will find a wider range of political information, not just content that has been sorted to your liking.
Beware of the tendency to believe whatever is in front of your eyes, and take extra care in determining what is accurate. Most importantly, treat video content with just as much skepticism as news articles, text posts and memes.
Take the time to verify facts in your news against information from a trusted source. Being informed about local sports teams, healthcare costs, news in your local community, national politics and international affairs can help you sort out claims and understand the facts of a story.