Only Elon Musk can heal the hellhole of hate he’s created on Twitter
Sane minds agree that Twitter is about to become even more of a dangerous, ill-monitored tinderbox of hate.
What they don’t agree on is what to do about it. Suspending the account of the former Kanye West — which Twitter CEO Elon Musk did Thursday for violating the social media platform’s prohibition against inciting violence — won’t end it. Ye is just the tip of the iceberg of hate.
Twice in the past two weeks, leading conservative figures — including GOP Rep. Marjorie Taylor Greene — have launched anti-LGBTQ, antisemitic Twitter attacks against San Francisco state Sen. Scott Wiener, who is gay and Jewish.
Greene called him a “Communist groomer.” Charlie Kirk, president of the right-wing Turning Point USA, falsely accused Wiener of backing legislation that would release “thousands of pedophiles” from jail. Less than 24 hours later, someone left a death threat on Wiener’s office voice mail, which the legislator said repeated one of Kirk’s falsehoods.
“People like you won’t be able to walk down the street when light comes to the darkness that you’re f—ing, you piece of s—,” the caller threatened Wiener on the voicemail.
Threats like these aren’t just about Wiener. They’re a preview of what life will be like — and is like — for others in marginalized communities if Musk follows through on his plan to reinstate other previously banned accounts.
Last week, the Department of Homeland Security issued a domestic terrorism advisory bulletin warning of a “persistent and lethal threat” to members of the LGBT, Jewish and migrant communities.
Unchecked social media plays a huge role in amplifying those threats. In the month after Florida passed its “Don’t Say Gay” law, which bans classroom discussion of sexual orientation or gender identity topics from kindergarten to third grade, false and reckless online use of “groomer” and similar rhetoric increased more than 400%, according to an August study from the Human Rights Campaign and the Center for Countering Digital Hate.
Who pays attention to that garbage? State legislators, who proposed 340 anti-LGBTQ bills this year — including 140 anti-transgender pieces of legislation, according to the Human Rights Campaign.
Musk is tossing propane onto this fire by offering what he calls “general amnesty” to previously suspended accounts. Musk says he’s doing this in the name of “freedom.” But letting high-profile users spew hate isn’t freedom, and it has a trickle-down effect on the rest of us — especially after Musk canned half of Twitter, including much of its content moderation staff.
Now it seems like there’s more content moderation at your corner tavern. And unlike your local tap, Twitter misusers know they can say pretty much whatever they want to say without fear of getting tossed. Musk didn’t boot Ye until after he posted a swastika inside the Star of David.
“High-profile people are doing that with explicit understanding Musk’s Twitter isn’t going to moderate them,” Jillian York, director for international freedom of expression for the Electronic Frontier Foundation, told me. “It definitely emboldens other people.”
That emboldening is what has Wiener concerned. He’s worried that dropping the social media platform’s guardrails is “going to make Twitter a more dangerous place for people and provoke more violence.”
And not just against high-profile politicians like himself. We’ve seen how social media can influence deranged minds so many times in the past.
“People who walk into synagogues and shoot people or walk into a Black church and shoot people or walk into a gay nightclub and shoot people up — these people didn’t just randomly wake up one day and decide to do it,” Wiener said. “They were very often radicalized and brainwashed by what they’re reading on social media.”
So what can be done about it? That’s a harder question. First Amendment protections for speech are sacrosanct in this country.
Ideally, Musk would hand over the day-to-day operations of Twitter to a grown-up. But given how unlikely that seems, how about repealing Section 230 of the federal Communications Decency Act, the part that shields tech companies from liability for what their users post? Conservatives have wanted to repeal it for years. Democrats, not so much.
That’s not the answer, online analysts said. Social media companies would cease to exist if they were held liable for everything uttered on their platform.
The Electronic Frontier Foundation’s York said Section 230 “is incredibly important. It is what allows all manner of expression on these platforms, and I don’t think that getting rid of 230 is going to make things better.”
So what about empowering the government to monitor these sites for hate and incendiary speech? Good luck with that, said Denver Riggleman, a former Republican congressman from Virginia.
After Riggleman was voted out of office in 2020, the former military and intelligence analyst served for several months on the Jan. 6 committee, analyzing the call records, texts and other online activity of hundreds of people suspected of being involved in the insurrection at the US Capitol.
Riggleman believes “the federal government is not capable of tracking all of these different avenues or threat avenues in the digital ecosystem.”
“I’m not trying to scare people. I’m just saying they’re incapable,” Riggleman told me on The Chronicle’s “It’s All Political on Fifth and Mission” podcast last week. “They’re going to always be behind technology in the domestic space from those who can actually spread this. And the money is so big and the disinformation push is so large. I don’t know if I can emphasize enough how much money can be made on fantasy, ignorance and insanity.
“The information war is the new Forever War,” he said.
Kicking offenders off of a platform brings its own set of complications, said Riggleman, author of a new book about his experiences on the Jan. 6 committee called “The Breach.”
“If you (deplatform) someone that’s a central hub to communications,” Riggleman said, “they go to other encrypted apps or other areas and they can even further radicalize themselves. So you’ve got to make some really tough decisions.”
Wiener said it would be worth looking into giving the attorney general “more authority to sue these platforms when they are allowing instigation of violence. I’m not talking about holding them accountable for every little thing that’s false out on social media. I’m talking about the really serious stuff. The deeply, deeply harmful stuff.”
Putting government in charge of oversight of speech can also be problematic, York said. Then you’re at the whim of who is running the government — and as we know, that can change quickly. Today, a government led by Joe Biden would have the ultimate oversight of Twitter, but tomorrow it could be led by Donald Trump.
“This is not something where we want the government to step in,” York said. “There’s no reason to suggest that they would get that right.”
Some platforms rely on international human rights guidelines to help them decide what is acceptable speech. One of them is Twitter, York said. Those guidelines, as far as York can tell, remain in place; only the person in charge has changed.
“You have to trust whoever’s in charge when it comes to hate speech. And that can be troubling,” York said. “There’s a lot of proposals on how we deal with this universally. I think that there have been some interesting conversations in the past few years among international civil society, but I’m not sure that anybody has a good answer.”
Which means for now, we’re stuck with Musk guarding the Twitter jail. Until somebody has a better plan.
Joe Garofoli is The San Francisco Chronicle’s senior political writer. Email: firstname.lastname@example.org Twitter: @joegarofoli