Rise of the haters: ‘The internet has amplified the rage by giving people a 24/7 hotline to spurt poisonous views and egg one another on.’ Illustration: Pete Reynolds/The Observer

Meet the man who won’t let the haters win

Hate speech online has escalated to unprecedented levels. Matthew Williams, a professor of criminology, is shining a scientific light on who is behind it and why

Jamie Waters
Sun 8 May 2022 11.00 BST
In a slightly creaky, book-filled office at Cardiff University, Matthew Williams pulls up a blood-red graph on his computer. At first glance you might think it referred to stock market fortunes, but when I peer closely, the sad truth behind its jagged peaks becomes clear: it traces the amount of anti-Black hate speech recorded on Twitter in the aftermath of last July’s Euro 2020 final, when England lost to Italy in a nail-biting penalty shootout.
After missed penalties from three Black England players, Marcus Rashford, Jadon Sancho and Bukayo Saka, racist abuse “went through the roof”, says Williams. Within the hour there was an almost 700% increase in hate directed against those players. Half of the 20,000 toxic tweets came from within the UK; the police made 11 arrests for hate crimes, four of which have resulted in prosecutions.
In some ways this is a textbook example of a hate-filled outburst. There was a trigger event and, for many offenders, lots of alcohol and drugs involved. That the abuse trickled off over the following two days is also typical. But the content of the hate directed against the players was new. Alongside familiar racist slurs there was a deluge of primate emojis. Williams calls this a key shift. “We’ve never seen emojis used as features of hate speech in that way – and volume – before.”
Williams would know: he is a professor of criminology specialising in hate crimes and, along with computer scientist Pete Burnap, is the co-founder of the HateLab, a platform that monitors hate across social media in real time. They are on the frontline of internet hate, observing shifts in behaviour and figuring out who is stirring the fury. They pass their insights on to civil rights organisations, governments and big tech firms, who use them to inform counter-hate-speech campaigns, ban users and pursue prosecutions.
With an all-seeing eye for the racist, misogynistic, homophobic, transphobic venom that humans spit at one another from behind keyboards, Williams has advised everyone from Twitter, Meta (formerly Facebook), Google and TikTok to the Professional Footballers’ Association, the UK Home Office and the US Department of Justice.
The 45-year-old has also just published a book that investigates the biological and sociological reasons why people commit hate crimes. The Science of Hate asks big, urgent questions: is everyone capable of hate? Is this the most hateful the world has ever been? And, perhaps most importantly, how can we combat it?
In recent years, hate has felt omnipresent. “It is no coincidence that soaring hate-crime figures are found in countries where the extreme right is rising,” writes Williams, adding that “divisive messages from public figures are directly linked to tipping some people into violence on the streets.” The 2016 election of Donald Trump coincided with the biggest rise in hate crimes in the US since 9/11. And the HateLab found there were 1,100 racist attacks committed in the UK as a direct result of the 2016 Brexit referendum result.
In nations with reasonable hate-crime recording standards, such as the UK, US and much of Europe, the data points to an “upward trend,” says Williams. The internet has amplified the rage by giving people a 24/7 hotline to spurt poisonous views and egg one another on. A 2021 survey found that half of all 12- to 15-year-olds in the UK had encountered hateful content online. “Left unchallenged,” he writes, “the expression of hate… has the potential to become more widespread than at any other point in history.”
‘My hope is for more good citizens to stand up to hatred when they see it’: Matthew Williams of Cardiff University, and a graph of online hate speech. Photograph: Francesca Jones/The Observer
Yet Williams is no doomster. With neatly combed hair, a gently lilting Welsh accent and an easy laugh, he is a surprisingly cheery character given his line of work. He does, however, want to awaken us to the fact that “everyone” has prejudices – “You can say you don’t but you’re lying” – and, under the right mix of circumstances, has the potential to slide towards hateful behaviour.
It’s a sobering thought in a woke era in which we’re loath to admit any prejudices. But it’s intended as a rallying cry. Being mindful of our own prejudices helps us keep them in check – and understanding how biases work better equips us to de-fang those who peddle hate, especially online.
“To keep my faith in humanity I have to constantly remind myself that the majority are not hateful,” says Williams. “What I hope for is for more of these good citizens to stand up to hatred when they see it, instead of scrolling or walking on.”
Williams, who is gay, knows full well how damaging a hate crime can be. In 1998, when he was 20, he was beaten up by three men after leaving a gay bar in London. Their punches were punctuated by snickering and homophobic slurs. The attack shook him. “I couldn’t get it out of my head: it filled my thoughts until there was no room for anything else,” he says. Even today, he won’t hold his husband’s hand in public for fear of being targeted. “It’s stayed with me for a long, long time.”
It also shaped his career, prompting the then-aspiring journalist to switch to criminology. A key thrust behind the book was to find out what made his attackers do what they did that day. “There’s this notion that all hate crime offenders are monsters beyond understanding and that if you dare to try to understand them, you humanise them or in some way provide an excuse for their behaviour,” he says. “I wanted to find something that really separated me from my attackers, something that was unique about them – and different from me.”
Instead, his findings showed that he and his attackers were, in all likelihood, “remarkably similar in our biology and psychology”. The common core? An innate human desire to be part of a group and to favour people we perceive to be like us. Such “groupishness” is an evolutionary trait: huddling with others increases our chance of survival. And it means that, from a young age, we instinctively view the world through the lens of our group and other folks: “us” and “them”.
Our behaviour towards “others” occupies a spectrum, with unconscious bias at one end, prejudice in the middle and hate at the other extreme. Whereas prejudiced behaviour means avoiding others, hateful acts seek them out in order to hurt them. There’s often a twisted moral element to hate, too, with offenders claiming that the victim’s group are an affront to their way of life.
‘You can say you don’t but you’re lying’: Professor Matthew Williams wants us to wake up to the fact that everyone has prejudices. Photograph: Francesca Jones/The Observer
Neuroscientific studies suggest bias is mapped on to our brains: the amygdala, a “fast but dumb” threat detector in our temporal lobe, as Williams puts it, often sounds an alarm when it registers someone who is not like us. The smarter prefrontal cortex then overrules it when it realises there is nothing to worry about (most studies on this relate to race). But research suggests certain individuals can develop an oversized amygdala and a weak executive control centre, meaning they instinctively overreact to threats and then have no reasoning function to calm things down. Statistically, hate offenders are most likely to be young men (and in western countries, white). Williams imagines them to be “fearful, angry and powerless”.
Traumatic childhoods often set them apart. Many grew up with abusive or absent parents and experienced personal losses, homelessness, drug addictions or other traumas that left them emotionally unstable. “Childhood scars can thwart psychological development to a point where normal coping mechanisms are either malfunctioning or absent,” he writes.
Williams cites a study from the University of Manchester involving in-depth interviews with 15 young white British men convicted of racial violence. When stressed or triggered, they took out their frustrations on ethnic minorities who they “saw as having less power than them”, he writes. “Race hate provided a convenient home… for their unresolved frustrations from past trauma.” The racial “other” was an easy target: hate almost always involves punching down. (For the most extreme type of hate criminal, a “mission offender”, eliminating other groups is seen as serving a higher purpose. Examples include suicide bombers; the 1999 London nailbomber who targeted gay, black and Bengali communities; and Joseph Paul Franklin, a KKK member who, during the 1970s and 80s, killed more than 20 black people.)
While it’s human nature to classify “us” and “them”, how those lines are drawn is – at least theoretically – not fixed. “Evolution tells us that we’re not inherently racist or biased against a religion or a sexual orientation,” says Williams. That certain groups are consistently targeted for hate is a product of social forces: it is shaped by what we see in the media, what our parents tell us and who we interact with growing up (children who attend mixed-race schools before the age of 12 are less likely to possess race-related bias). And new battle lines are constantly being drawn. Covid has pitted vaxxers and anti-vaxxers against one another, seen a spike in anti-Asian hate crimes and resulted in experts and even the NHS becoming targets of derision.
The unique dynamics of social media can spur people into hateful outbursts. Filter bubbles and algorithms reinforce and deepen prejudices, while anonymous accounts reduce accountability. When sparked by an event – whether political vote, terror attack or football game – some users temporarily lose their ability to suppress their ingrained biases and take to their keyboards as the mask of civility slips. In what’s known as a “cascade” effect, they’re encouraged by the rush of others doing the same, and the perception that “such actions have little or no consequence”, says Williams, who is currently busy monitoring a prolonged wave of hate speech against Russians in the wake of the Ukraine invasion.
The most common target for online hate? Women. “Misogyny is always the most prevalent category on social media: it’s a huge problem,” says Williams. It is so rife partly because women account for half the population (most other hate victims are minorities), so there are lots of targets, and partly because of a culture of misogyny that festers among “incels” and other all-male communities. Misogyny would probably attract the highest number of hate-crime prosecutions – if it were, in fact, a hate crime.
In the UK’s hate crime legislation, sex and gender are not recognised as protected characteristics, so a judge is not compelled to consider misogyny as an aggravating factor in sentencing and police are not required to record it as a hate statistic. Just this February, MPs rejected a proposal to add it as a category due to concerns it could complicate domestic violence and rape prosecutions. “Because there’s no sort of legal framework there, [anti-women online hate] is probably not policed to any great extent – not in the same way anti-Black or anti-Muslim or anti-gay rhetoric would be,” says Williams.
Who should be accountable for stamping out hate is a thorny issue. While governments and police should be responsible for dealing with the most serious offences, in many places – including the US but not the UK – hate crimes are woefully under-reported. This is a combination, says Williams, of a lack of police resources (investigating hate crimes requires considerable effort) and lack of trust in police by the most vulnerable members of society.
Online, big tech firms clearly have a role to play in monitoring content on their sites and banning users who go too far. But sifting through the millions of posts requires vast resources and, as Williams puts it, “There’s no money in stopping hate.” There’s no thriving marketplace for services that track hate speech and coordinate anti-hate campaigns, meaning organisations doing these things are chronically underfunded and rely on support from government and charities to survive (HateLab is mostly funded by the UK’s Economic and Social Research Council and the Alfred Landecker Foundation).
By contrast, “There seems to be more profit in generating hate and keeping people on social media by engaging with it,” says Williams. As is shown by YouTube’s famously addictive algorithm, which draws viewers deeper down a rabbit hole by suggesting ever more extreme content, “hate is sticky”. It’s like not being able to avert your eyes from a car crash. Even so, Williams is optimistic that an anti-hate crusade could bear fruit for businesses concerned with bottom lines. “Can you imagine if you could say your platform was free of hate speech?” he says. “How great would that be for the company – but also the profits!”
Rather than relying on the likes of Mark Zuckerberg, Williams thinks we’re best placed to take on the responsibility ourselves. First, we need to learn the art of neutralising hate speech. Our instinct, says Williams, is sometimes “to go in all guns blazing and attack the hate speaker with equally offensive speech”, which – surprise, surprise – escalates the situation. By contrast, calmly challenging the logic of their claims has proven successful.
“If they say, ‘All Muslims are terrorists,’ you say, ‘Hang on a second. Don’t you think that if every Muslim was a terrorist, we’d have a lot more terror attacks right now? Because there’s X many Muslims living in this place.’”
The quick-witted among us will be pleased to hear that humour and parody can be handy, too. “Engaging in a lighthearted way and maybe being a bit sarcastic to highlight the inconsistency of their argument can help – the [hate makers] start to interact a bit more,” he says. And coordinated efforts whereby a group of users are “singing from the same hymn sheet” are far more effective than solo missions (a catchy hashtag can help engender a sense of solidarity). Such things need to be taught in schools, he says. “We don’t have any solid educational guidance on what the best counter speech is.”
Combining this knowledge with an eagle eye, we need to start shutting down bile whenever we see it and taking control of our online spaces. Williams has seen examples of this done with success in the past, such as the “safety pin” campaign in the aftermath of Brexit, whereby “every time someone said something racist, a bunch of people would descend on it using the safety-pin hashtag and standing up for migrants.”
Williams views Wikipedia as an unlikely North Star. “It’s a self-regulating system: people pull up information if it’s false, members of the community flag it and get rid of it. They set their standards of operation.” He’s hopeful that we can turn our online spaces into “more mature” places if counter speakers engage with hate speakers in a sustained, coordinated drive. And en masse. “That, I think, will change how these platforms look.”
It’s not rocket science, but it does require a conscious effort. Start by looking at your own prejudices. Then, armed with a calm mind and perhaps a quip or two, pull up your Twitter, Instagram and TikTok feed. And then, you anti-hate hounds, get sniffing.
The Science of Hate: How Prejudice Becomes Hate and What We Can Do To Stop It by Matthew Williams is published by Faber & Faber at £9.99. Order a copy from guardianbookshop.com for £9.29