On first reading, the text of the bill might seem to be offering some level of protection. For example, here’s what it says about the kind of things that social media can remove. Platforms can take down or edit material that is:
“the subject of a referral or request from an organization with the purpose of preventing the sexual exploitation of children and protecting survivors of sexual abuse from ongoing harassment; directly incites criminal activity or consists of specific threats of violence targeted against a person or group because of their race, color, disability, religion, national origin or ancestry, age, sex, or status as a peace officer or judge; or is unlawful expression.”
That long list at the end of this passage—including color, disability, sex, etc.—might seem as if it’s offering the kind of protections usually afforded when platforms take down hate speech. But look again. All of those other words are just window dressing. The bill actually allows sites to remove such speech only if it “consists of specific threats of violence.” This is the narrowest possible definition of incitement to violence, the same standard that has protected both KKK leaders and Tucker Carlson when they called for violence or other harmful acts against groups without making a specific threat.
By prohibiting social media platforms from removing posts that don’t feature a specific threat, Texas lawmakers have created a “must carry” situation: any platform that fits the bill’s definition (which seems to cover Facebook, Twitter, YouTube, Instagram, TikTok, Pinterest, and Snapchat, but could expand to Google, Apple, and others thanks to some broad language) cannot remove hate speech or disinformation, no matter how malignant.
To see how intentional this result is, look no further than the amendments that were rejected.
- Here’s one that would have allowed sites to take down posts that promoted “any international or domestic terrorist group or any international or domestic terrorist acts.”
That amendment was rejected.
- Here’s another that would have at least allowed sites to take down a post that “includes the denial of the Holocaust.”
That amendment was rejected.
- Here’s a third that would have allowed sites to remove information that “promotes or supports vaccine misinformation.”
Of course that amendment was rejected.
Seriously. Texas just passed a law (and Abbott just signed it) that prohibits social media sites from removing hate speech, posts that promote terrorism, intentional misinformation about vaccines, or Holocaust denial.
And it doesn’t stop there. Texas doesn’t just require that sites leave these posts intact: the state also prohibits platforms from “censoring” these posts in any way. That includes “demonetize, de-boost, restrict, deny equal access or visibility to …” In other words, sites not only have to carry a post, no matter how vile; they have to promote it, and pay its creator, on equal terms with every other post.
So, if someone in Texas were to post a YouTube video that was full of Holocaust denial, revived every antisemitic claim in history, and called for driving Jews out of the country and burning down synagogues—but didn’t mention a specific time and place for people to gather with torches—YouTube would not only be forbidden from removing it, it wouldn’t be allowed to add any warning, would have to promote the video equally with other videos, and would have to pay the creator if it got enough racists to watch.
As the tech industry group Chamber of Progress puts it: “This law is going to put more hate speech, scams, terrorist content, and misinformation online.”
Naturally, platforms and industry organizations have already announced lawsuits, mostly focused on the idea that the Texas law improperly redefines social media platforms as “common carriers.” It’s unlikely that any of these platforms will ever be bound by this law.
Even so … it gives great insight into the type of speech Republicans are really out to promote.