The atrocity in New Zealand was carried out by a man fully aware of the power of the internet
The aftermath of the Christchurch atrocity brought significant media coverage of the attempts made by the tech companies, especially Facebook and YouTube, to take down the alleged killer’s video livestream and his so-called manifesto.
These narratives had two subtexts. The first was to impress us with the sheer scale of the task. The second was implicitly to convey the public-spirited dedication of the engineers who worked around the clock to keep these obscenities from infecting the public sphere.
It would be churlish to downplay the scale of the challenge the companies faced, for it was indeed huge. Facebook, for example, dealt with 1.5m uploads of the video within 24 hours and claimed to have caught 1.2m of them before they made it into users’ newsfeeds. (That still left 300,000 copies on the loose, though.)
For its part, YouTube found itself on the receiving end of a torrent of video uploads.
To deal with the volume, YouTube’s chief product officer, Neal Mohan, took the decision to override the company’s normal content moderation process, bypassing human moderators entirely and relying on AI software to identify the most violent parts of the video and block them immediately. Predictably, this was only partially successful.
There is, however, another way of reading these heroic narratives. The huge numbers are a reminder of the colossal scale at which surveillance capitalism now operates. And the narratives conveniently obscure the fact that the companies’ formidable capability for global dissemination of uploaded content is, as programmers say, a feature, not a bug: it’s what their systems are designed to do.
They enable users to publish whatever they like and to monetise the resulting data trails and “engagement”. It’s obviously a nuisance when some of the uploaded content comes from white supremacist fanatics, but – hey – that’s just the cost of doing the business that surveillance capitalists are in.
New Zealand’s prime minister, Jacinda Ardern, displayed an impressive understanding of the social media strategy of the attacker. Her remedy was to deny him what Margaret Thatcher once called “the oxygen of publicity” by refusing even to utter his name. This is smart because the killer had modelled himself on the Norwegian white supremacist Anders Breivik, who killed 77 people in 2011. “Mr Breivik wanted fame,” writes his biographer in the New York Times. “He wanted his 1,500-page cut-and-paste manifesto to be read widely and he wanted a stage – his trial in Oslo. He called the bomb he set off outside the prime minister’s office in Oslo, and the massacre he carried out on the island of Utøya, his ‘book launch’.”
It looks as though the Christchurch killer had the same idea, except that he was more internet-savvy than Breivik. He displayed a mastery of neo-Nazi internet memes, for example: one of the things that struck Kevin Roose, the New York Times technology reporter, was “how unmistakably online the violence was and how aware the shooter on the video stream appears to have been about how his act would be viewed and interpreted by distinct internet subcultures. In some ways, it felt like a first – an internet-native mass shooting, conceived and produced entirely within the irony-soaked discourse of modern extremism.”
The most worrying thought that comes from immersion in accounts of the tech companies’ struggle against the deluge of uploads is not so much that murderous fanatics seek publicity and notoriety from livestreaming their atrocities on the internet, but that astonishing numbers of other people are not just receptive to their messages, but seem determined to boost and amplify their impact by “sharing” them.
And not just sharing them in the sense of pressing the “share” button. What YouTube engineers found was that the deluge contained lots of copies and clips of the Christchurch video that had been deliberately tweaked so that they would not be detected by the company’s AI systems. A simple way of doing this, it turned out, was to upload a video recording of a computer screen taken from an angle. The content comes over loud and clear, but the automated filter doesn’t recognise it.
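For readers curious why such a crude trick works, here is a toy sketch in Python. It is emphatically not YouTube’s actual system: it uses a simple “average hash” fingerprint (a common textbook technique for near-duplicate image detection) on a tiny invented 8x8 frame, and a crude shear as a stand-in for re-filming a screen at an angle. The point it illustrates is general: fingerprints built from pixel patterns match identical copies perfectly, but even a modest geometric distortion flips many bits of the fingerprint, so a filter looking for near-identical hashes lets the copy through.

```python
# Toy illustration (not any company's real filter): an "average hash"
# fingerprint, and how a geometric distortion defeats a hash match.

def average_hash(pixels):
    """64-bit fingerprint: one bit per pixel, set if above the mean."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    return [1 if p > mean else 0 for p in flat]

def hamming(h1, h2):
    """Number of bit positions where two fingerprints disagree."""
    return sum(a != b for a, b in zip(h1, h2))

def shear(pixels):
    """Crude per-row shift: a stand-in for filming a screen off-angle."""
    n = len(pixels)
    return [row[i % n:] + row[:i % n] for i, row in enumerate(pixels)]

# An invented 8x8 "frame": a bright diagonal band on a dark background.
frame = [[255 if abs(r - c) < 2 else 10 for c in range(8)]
         for r in range(8)]

original = average_hash(frame)
rerecorded = average_hash(shear(frame))

# An exact re-upload matches perfectly: distance 0.
print(hamming(original, original))

# The distorted copy disagrees in dozens of bit positions, far beyond
# any sensible near-duplicate threshold, so the match fails -- even
# though a human sees the same content "loud and clear".
print(hamming(original, rerecorded))
```

The real systems are far more sophisticated than this sketch, of course, but the arms race is the same shape: every fingerprinting scheme tolerates some transformations and not others, and determined uploaders only need to find one it doesn’t.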
That there are perhaps tens – perhaps hundreds – of thousands of people across the world who will do this kind of thing is a really scary discovery. The days are ending when white supremacist and neo-Nazi ideologues could safely be ignored because they belonged to isolated and fragmented groups and were denied the oxygen of publicity. The combination of digital technology and the business model of a few companies has brought them into the mainstream where they are busy normalising racism and poisoning the public sphere. How many more livestreamed atrocities will it take before democratic governments get the message?