When the coronavirus infodemic strikes

Social media sites are stepping up their efforts in the war against misinformation, specifically the coronavirus/COVID-19 infodemic. A seemingly endless stream of potentially dangerous misinformation related to the pandemic is flying around online, and it could have fatal results.

It’s boomtown in fake-news land, riding high on a wave of people stuck with their tech devices 24/7. I regularly see everything posted online from “hand gel is an immunizer” (nope) and “children can’t be affected” (not true) to “UK rules mean domestic abuse survivors have to stay with their abusive spouse” (absolutely untrue, and hugely dangerous to claim).

We even have telecoms engineers being spat on thanks to 5G conspiracy theories, an act that could itself transmit coronavirus. It turns out a global pandemic is a lightning rod for conspiracy theories galore, to the extent that some people have to go hunting for guides on weaning family members off internet fakeouts. Serious consequences are taking shape, via every source imaginable, no matter how baffling.

What is being done to tackle these tall tales online?

YouTube: We begin with the video monolith, which removed multiple “Coronavirus is caused by 5G” videos (one of which had more than 1.5 million views) after an investigation by PressGazette. Some of the other clips, covering Bill Gates, the media, and related subjects, came from verified accounts with large followings, often with adverts overlaid from advertisers who didn’t want their promotions associated with said content. While YouTube claims to have removed thousands of videos “since early February,” the video giant and many others are under intense pressure to take things up a notch or two.

While the top search results for “5G coronavirus” on YouTube currently return a variety of verified sources debunking the many conspiracy claims, filtering videos by what was posted “today” surfaces an assortment of freshly uploaded clips of people filming 5G towers and tagging them with “Coronavirus” in the titles. Should you see something specifically pushing a conspiracy theory, the report options are still quite generic:

  • Sexual content
  • Violent or repulsive content
  • Hateful or abusive content
  • Harmful or dangerous acts
  • Spam or misleading

While you’d likely select the last option, there’s still nothing specific to the pandemic itself. That’s concerning, considering a recent study in BMJ Global Health found that one in four of the most popular videos about the pandemic contained misinformation: 62 million views across 19 dubious videos, out of 69 popular videos sampled on a single day.

Twitter: This is an interesting one, as Twitter is looking to flag Tweets and/or accounts pushing bad information in relation to COVID-19. While this is a good move, it appears to be handled entirely on their end; if you try to flag a Tweet yourself as COVID-19 misinformation, there’s no option to do so in the reporting tab. “It’s suspicious or spam” and “It’s abusive or harmful” are the closest, but there’s nothing specific in the follow-up options tied to either selection.

This feels like a missed opportunity, though there will be reasons why this isn’t available as an option. Perhaps they anticipate false-flag and troll reporting of valid data, though one would hope their internal processes for flagging bad content could counteract this possibility.

Facebook: The social media giant came under fire in April for its approach to the misinformation crisis: large view counts on bad content, posts not flagged as false, and warnings taking up to 22 days to be issued. This led one campaign director at a crowdfunded activist group to claim Facebook was “at the epicentre of the misinformation crisis.”

Ouch.

Facebook decided to start notifying users who’d interacted with what can reliably be called “the bad stuff” to try to push back on content rife in groups and elsewhere. Facebook continues to address the problem with multiple initiatives, including tackling bad ads, linking people to credible information, and combating rogue data across its various apps. The sheer size of its user base suggests this fight is far from over, though.

TikTok: It would be a mistake to think conspiracy theories and misinformation wouldn’t pop up on viral music/clip sensation TikTok. In some cases, they’ve flourished on the platform away from the eyes of serious researchers still focused on the bigger social media platforms such as Twitter and Facebook.

While TikTok is somewhat unique in having COVID-19 misinformation as a specific reporting category, it hasn’t exactly been plain sailing. Popular hashtags seemingly have more than their fair share of bad content, tying bad data and poorly sourced claims to cool songs and snappy soundbites.

Internet Archive: Even the Internet Archive isn’t safe from coronavirus shenanigans, as people use saved pages to keep spreading bad links online. Even if a bad site is taken down, flagged as harmful, or removed from search engines, scooping it up and preserving it on Archive.org forevermore is a way for the people behind those sites to keep pushing the links. For its part, the Internet Archive is fighting back with clear warning messages on some of the discredited content.

Beware a second infodemic wave

Although some major online platforms were slow to respond to the bogus information wave, most of them now seem to at least have some sort of game plan in place. It’s debatable how much of it is working, but something is likely better than nothing and tactics continue to evolve in response to those hawking digital shenanigans.

However, warnings of the present so-called infodemic went unheeded for many years, and now we’re reaping the whirlwind. From governments and healthcare organisations to the general public and online content-sharing platforms, we’ve all been caught on the back foot to varying degrees. The genie is out of the bottle and won’t be going back in anytime soon, so it’s up to all of us to think about how we could do better next time, because there will absolutely be a next time.

ABOUT THE AUTHOR

Christopher Boyd

Former Director of Research at FaceTime Security Labs. He has a very particular set of skills. Skills that make him a nightmare for threats like you.