New Zealand mosque shootings: Are social media companies unwitting accomplices? – USA TODAY
Tough questions are being asked about the role of social media in the wake of the horrific shooting that took the lives of at least 49 people at two New Zealand mosques. Sadly, tough questions with no easy answers.
The 28-year-old alleged white supremacist gunman not only livestreamed the rampage via helmet-cam on Facebook and Twitter, but footage of the massacre was still circulating hours after the shooting, despite frantic efforts by Facebook, YouTube, Twitter and Reddit to take it down. Each company issued the requisite statement condemning the terror, and each has a code of conduct that is sometimes violated.
New Zealand mosque shootings: How U.S. racism might be fueling hate around the world
Ahead of the attack, the shooter posted a since-removed, hateful 74-page manifesto on Twitter.
And during the killing, he apparently referenced divisive YouTube star PewDiePie, who for the record subsequently tweeted, "I feel absolutely sickened having my name uttered by this person."
"The attack on New Zealand Muslims today is a shocking and disgraceful act of terror," said David Ibsen, executive director of the Counter Extremism Project (CEP), a non-profit, non-partisan global policy organization. "Once again, it has been committed by an extremist aided, abetted and coaxed into action by content on social media. This poses once more the question of online radicalization."
Mia Garlick from Facebook New Zealand issued a statement Friday, indicating that, "since the attack happened, teams from across Facebook have been working around the clock to respond to reports and block content, proactively identify content which violates our standards and to support first responders and law enforcement. We are adding each video we find to an internal database which enables us to detect and automatically remove copies of the videos when uploaded again. We urge people to report all instances to us so our systems can block the video from being shared again."
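The re-upload blocking Garlick describes generally works by fingerprinting: each known-bad video is hashed into a database, and every new upload is checked against it. A minimal, purely illustrative sketch of the idea (this is not Facebook's actual system, and the class and method names are hypothetical):

```python
import hashlib

class VideoBlocklist:
    """Toy re-upload detector: keeps fingerprints of known-bad videos.

    Note: an exact hash such as SHA-256 only catches byte-identical
    copies; production systems use perceptual hashes so re-encoded
    or cropped variants of the same video still match.
    """

    def __init__(self):
        self._known = set()

    def _fingerprint(self, data: bytes) -> str:
        # Hash the raw bytes of the video file.
        return hashlib.sha256(data).hexdigest()

    def add(self, data: bytes) -> None:
        # Called once moderators confirm a video violates policy.
        self._known.add(self._fingerprint(data))

    def is_blocked(self, data: bytes) -> bool:
        # Checked on every new upload before it goes live.
        return self._fingerprint(data) in self._known

blocklist = VideoBlocklist()
blocklist.add(b"bytes of a known violating video")
print(blocklist.is_blocked(b"bytes of a known violating video"))  # True
print(blocklist.is_blocked(b"a re-encoded copy"))                 # False: exact hashing misses it
```

The last line is the crux of the moderation problem the article describes: trivially altered copies evade exact matching, which is why platforms invest in fuzzier perceptual matching and still miss uploads.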
In its own statement, YouTube said that "shocking, violent and graphic content has no place on our platforms, and we are employing our technology and human resources to quickly review and remove any and all such violative content on YouTube. As with any major tragedy, we will work cooperatively with the authorities."
Twitter echoed similar sentiments: "Twitter has rigorous processes and a dedicated team in place for managing exigent and emergency situations such as this. We also cooperate with law enforcement to facilitate their investigations as required."
Of course, not all social media companies are created equal.
One of the difficulties in tackling such issues, says UCLA assistant professor Sarah T. Roberts, is that it is "somewhat about apples and oranges when we talk about mainstream commercial platforms in the same breath as some of the more esoteric, disturbing corners of the internet, both of which are implicated in this case. This person had a presence across a number of different kinds of sites. The approaches and the orientation to dealing with hate speech, incitement to violence, terroristic materials, differs in these places."
'I was the last person to get out alive': Narrow escape from the New Zealand mosque
Christchurch mosque attacks: Mass shootings are rare in New Zealand
Even so, Roberts is critical of the mainstream players, including YouTube, Twitter and Facebook, which she says "have not really taken these issues to heart until fairly recently. If we want to think about metaphors, it's trying to close the barn door after the horses have escaped in essence."
What's more, "the problem of locating, isolating and removing such content is an ongoing one, so even if we stipulate that OK it's somehow very easy to know what constitutes hate speech and we can find it (which I don't think we can assume), then you have the mechanisms to do the removal. That often falls to very low-paid, low-status people called content moderators who do the deletion."
Crossing the line to hate speech
Deciding what on these platforms constitutes speech that crosses the line and what doesn't can pose a major challenge, as it is often far more nuanced than outright hate speech inciting violence.
"The companies have tried as hard as they can to not be in the business of being the arbiters of content. And yet in 2019, they find themselves squarely in that practice, where they never wanted to be," Roberts says.
Moreover, on a much smaller scale, there may be a balancing act when a person livestreams, say, police stops that result in shootings, not to glorify the event, but to provide accountability and visual evidence.
Tech companies are also deploying artificial intelligence and machine learning to get at the problem.
For example, in the fourth quarter of 2018, 70 percent of the videos removed from YouTube were first flagged by automated detection systems, and many of those videos had fewer than 10 views.
But Hany Farid, a professor of digital forensics at UC Berkeley and an advisor to the CEP, thinks such systems have a long way to go.
"Despite (Mark) Zuckerberg's promises that AI will save us, these systems are not nearly good enough to contend with the enormous volume of content uploaded every day," he says. "To illustrate this point, Facebook's CTO was recently bragging about how sophisticated their AI system is by talking about its ability to distinguish between images of broccoli and marijuana. The overall accuracy of this fairly mundane task is around 90 percent."
The reality, Farid adds, is that "Facebook and others have grown to their current monstrous scale without putting guard rails in place to deal with what was predictable harm. Now they have the unbearably difficult problem of going back and trying to retrofit a system to deal with what is a spectacular array of troubling content, from child sexual abuse, terrorism, hate speech, the sale of illegal and deadly drugs, misinformation, and on and on."
While everyone agrees that technology cannot predict the future, and thus cannot foresee unthinkable violent acts, safeguards might let platforms pull down "live" content far faster than appeared to be the case in the aftermath of the New Zealand shooting.
BuzzFeed News tech reporter Ryan Mac tweeted, "Despite Twitter's earlier commitment to taking down the video I'm still seeing clips, including one shared from a verified account with 694K followers. I'm not sharing it here, but it's been up for two hours."
Jennifer M. Grygiel, an assistant professor of communications at the S.I. Newhouse School of Public Communications at Syracuse University, also had no trouble accessing clips well after the shooting. "In the case of live-streaming, we need a delay for youth, aged 13-18 on platforms, so that children are not serving as Facebook content moderators for massacres," Grygiel says, suggesting something like the delay TV networks use when they broadcast live shows.
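The broadcast-style delay Grygiel suggests is mechanically simple: hold each frame in a buffer for a fixed window before airing it, so a moderator can cut the feed while offending frames are still in the buffer. A hypothetical sketch of the idea (the class, its names, and the frame-count window are illustrative, not any platform's actual API):

```python
from collections import deque

class StreamDelay:
    """Toy broadcast-style delay: frames are held for `delay_frames`
    before airing, giving a moderator a window to cut the feed."""

    def __init__(self, delay_frames: int):
        self.buffer = deque()
        self.delay = delay_frames
        self.killed = False

    def push(self, frame):
        """Ingest one frame; return the frame now old enough to air, or None."""
        if self.killed:
            return None
        self.buffer.append(frame)
        if len(self.buffer) > self.delay:
            return self.buffer.popleft()
        return None

    def kill(self):
        """Moderator cut: discard everything still waiting in the buffer."""
        self.killed = True
        self.buffer.clear()

stream = StreamDelay(delay_frames=3)
for f in ["f1", "f2", "f3", "f4"]:
    aired = stream.push(f)   # first three pushes air nothing; the fourth airs "f1"
stream.kill()                # "f2".."f4" are dropped before ever airing
```

The trade-off the proposal implies is latency: a livestream with a delay is no longer strictly live, which is exactly the compromise TV networks accepted for live broadcasts.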
Â “Corporations don’t get to be ‘deeply saddened,'” Grygiel tweeted.Â “Fix your problem.”
Of course, the voyeuristic element of cyberspace means that some people will seek out even the most disturbing footage. After the shooting, a video-sharing site similar to YouTube called LiveLeak.com was trending online. The site describes itself as being "free as possible" while prohibiting certain types of videos, including ones showing pornography, illegal activity, or content "which we deem to be the glorification of graphic violence or graphic content."
As of Friday morning, some parts of the LiveLeak website appeared down, including the ability to search for videos.
Email: email@example.com; Follow USA TODAY @edbaig on Twitter