An investigation by Twitter into racist tweets directed at three Black players on the England football team, following the team’s loss to Italy last month, revealed that anonymity played almost no role in whether users posted abusive comments from their accounts.

The analysis, which found that 99 percent of the accounts Twitter suspended were not anonymous, adds to the evidence that requiring real identities on social media platforms is unlikely to measurably reduce online abuse.

“While we have always welcomed the opportunity to hear ideas from partners on what will help, including from within the football community, our data suggests that ID verification would have been unlikely to prevent the abuse from happening – as the accounts we suspended themselves were not anonymous,” Twitter UK wrote in a blog post. “Of the permanently suspended accounts from the Tournament, 99% of account owners were identifiable.”

According to Twitter, its automated tools for finding and removing abusive content are working: those tools found and removed 1,662 harmful tweets during the UEFA Euro 2020 Final and in the 24 hours following the match. By July 14—three days after the final—that number had grown to 1,961, though the total included 126 tweets removed because of non-automated reporting by “trusted partners,” Twitter said.

The racism directed against England’s players drew immediate attention after the team’s loss in one of the most anticipated football matches in the country’s recent history. The match ended in a 1–1 tie after extra time, and three of England’s players missed penalty kicks in the shoot-out that decided it.

According to Vice, the three penalty kickers were called racist slurs on Instagram, faced racist comments on Twitter, and received “direct threats to their safety, in far-right and neo-Nazi channels” on Telegram.

The proposed solutions to this type of abuse are as old as the abuse itself. As we discussed on the Lock and Code podcast with Electronic Frontier Foundation Director of Cybersecurity Eva Galperin, commentators often suggest that social media companies require a person to provide their real identity when creating an account and using a platform.


“The premise is that if people used their real names that they would not post this kind of harassing content,” Galperin said. “That if your name was next to every opinion you had, that you would be more careful about the things that you say online.”

But, Galperin said, the premise falls apart in the real world.

“This assumes a level of shame that is simply not there,” Galperin said. “People are willing to be tremendous jerks online. And the more powerful that they are offline, the more likely it is that they will act as bullies online and that they will put their names next to it and feel no shame whatsoever.”

Now, after online privacy experts have recognized this dynamic for decades, Twitter, armed with its own data, appears to have joined those who say that, thankfully, anonymity is not worth destroying.