
In an office in central London, Signify’s small team quietly sifts through thousands of posts.
With every contentious moment during a game, the number of abusive messages rises.
Up pop posts containing monkey emojis and racist slurs, rape threats aimed at managers’ family members, and even death threats, sparked purely by actions on a football pitch.
Every message the AI system deems abusive is double-checked by humans, who count as verified abuse only those messages that break the social media platforms’ own guidelines.
The most significant surge in abusive posts occurred during Tottenham’s dramatic 2-2 draw with Manchester United on 8 November, a match that featured two stoppage-time goals, after which both clubs’ managers and several players faced concentrated abuse.
In messages seen by the BBC, death threats were sent about Amorim, including one saying “Kill Amorim – someone get that dirty Portuguese”.
The Online Safety Act, which became law in October 2023, puts a statutory duty of care on social media platforms.
That means they are legally obliged to proactively identify and remove illegal content such as threats, harassment or hate speech. Ofcom is the independent regulator responsible for making sure platforms comply.
But the social media platforms argue that the right to free speech makes them reluctant to censor or remove content.
Signify insists that the problem of serious abuse and threats sent online is worsening.
“We’ve seen around a 25% year-on-year increase in the levels of abuse we’re detecting,” said its chief executive Jonathan Hirshler.
“We understand the platforms’ position on free speech but some of the stuff we’re talking about is so egregious.
“Really nasty death threats and really horrible, violent content. If the people who are the free speech absolutists out there read some of those messages, they wouldn’t question why some of these are being reported and why action needs to be taken.”