The recent deadly racist attack in Buffalo, N.Y., planned with tactical advice gathered from online chat groups, is sparking calls in Canada and beyond for better policing of internet content. However, civil liberties advocates say that trying to scrub the web of hateful or violent content is logistically difficult.
The massacre at a Tops supermarket left 10 people dead and three injured. Officials believe the attack was a racially motivated hate crime.
An online cache of disturbing posts shows the alleged Buffalo shooter seeking advice from like-minded people on lightly moderated online discussion forums.
The shooting once again raises questions about how effectively social media platforms can respond to threatening content while preserving freedom of expression online.
The alleged Buffalo shooter also discussed the specifics of launching the attack on the online platform Discord. Discord allows users to create private channels that can only be accessed by invitation, but the site also hosts public channels that anyone can join.
‘What ammo will take down armor?’
On Discord, the suspect posted a diary kept over roughly two years, including a racist manifesto inspired by the Christchurch perpetrator's, along with detailed plans for carrying out an attack.
The alleged Buffalo shooter posted messages asking for advice on tactical gear, such as body armor and bulletproof vests, which weapons to use, and where to find certain ammunition. "Is there a Discord that's primarily about tactical gear?" a post from August 2020 reads. "And what kind of ammo will take down armor?"
The alternative media outlet Unicorn Riot discovered web posts that appeared to be linked to the Buffalo suspect and shared the content with CBC News. CBC News is not republishing the most disturbing and racist material contained in the posts.
In addition to asking for specific advice on conducting a mass shooting, the suspect live-streamed the attack on Twitch, an Amazon-owned platform commonly used to stream video games. Twitch removed the video within two minutes of the violence starting.
But the video was reshared online, going viral on platforms like Facebook and Twitter.
Amarnath Amarasingam, a professor at Queen's University and an expert on extremism and online communities, said diary entries uploaded by the suspect reveal that Discord flagged one of his posts when he tried to upload the Christchurch shooter's manifesto, but that the platform did nothing further to track him.
"If they had been interested in looking at his diary, it would have been obvious that he was planning an attack, because he spoke frankly and openly from the beginning," Amarasingam said.
"To the long list of red flags that were ignored, you can also add this one."
‘Hate has no place on Discord’
In an email to CBC News, Discord issued a response to the attack. “Our deepest sympathies go out to the victims and their families,” a company spokesperson wrote. “Hate has no place on Discord and we are committed to fighting violence and extremism.”
Discord said that, to the best of its knowledge, the alleged shooter maintained "a private, invite-only server … to serve as a personal diary chat log." But about 30 minutes before the attack, "a small group of people were invited to and joined the server."
Moderating this type of content effectively and quickly is no easy feat. Last year, the Liberals proposed a bill that drew heavy criticism for failing to strike the right balance between safety and free expression online.
"Regulation needs to be well thought out and nuanced, acknowledging how important the right to free expression is to a democratic society," Cara Zwibel of the Canadian Civil Liberties Association said in a statement to CBC News. "A government that believes it can eliminate online hate or sanitize the internet by imposing strict takedown requirements on platforms is fighting a losing battle."
"Governments should focus on requiring platforms to be more transparent about how they deal with these problems, and especially about the tools and methods they use to amplify, promote and monetize certain kinds of online expression," Zwibel said.
During the 2021 federal election campaign, the Liberals promised to enact new legislation within the first 100 days of their mandate “to combat seriously harmful forms of online content, namely hate speech, terrorist content, content that incites violence, child sexual abuse material, and the non-consensual distribution of intimate images.”
They pledged to "ensure that social media platforms and other online services are held accountable for the content they host." The move is in part a response to the hate-motivated attack on a mosque in Quebec City in 2017 and the deadly truck attack in London, Ont., in June 2021.
While the government missed the 100-day mark in early February, it established an expert panel to make recommendations to Heritage Minister Pablo Rodriguez. The panel's findings will inform policy governing social media platforms.
"What happens online doesn't stay online," Rodriguez said. "Online violence is real violence, and we have to address that."
Amarasingam is a member of that expert panel.
"All of that needs to be subject to some kind of law that forces some of these platforms to think about the risks built into their services, so they can think about how to prevent them," Amarasingam said.
New Zealand's response
New Zealand faced a similar challenge in 2019, when the Christchurch gunman live-streamed his attack and posted his manifesto online. Authorities moved quickly to ban that video. The country's chief censor has also classified the Buffalo shooter's video, diary and manifesto as objectionable, since the attack was inspired by the one in Christchurch and risks causing further trauma for people there.
Scholars and others may apply for an exemption to use the banned content in limited contexts for research purposes.
Rupert Ablett-Hampson, New Zealand's acting chief censor, said that banning content like the material the Christchurch gunman posted has not completely stopped the spread of racist rhetoric or misinformation.
"What we can't classify is the misinformation and hate … that ultimately lie behind these actions," Ablett-Hampson said.
"We really need to look to tech companies to take some responsibility for acting on online misinformation."