Court Blocks N.Y. Law Mandating Posting of “Hateful Conduct” Policies by Social Media Platforms (Including Us)

From Volokh v. James, decided today by Judge Andrew L. Carter, Jr. (S.D.N.Y.):

“Speech that demeans on the basis of race, ethnicity, gender, religion, age, disability, or any other similar ground is hateful; but the proudest boast of our free speech jurisprudence is that we protect the freedom to express ‘the thought that we hate.’” Matal v. Tam (2017).

With the well-intentioned goal of providing the public with clear policies and mechanisms to facilitate reporting hate speech on social media, the New York State legislature enacted N.Y. Gen. Bus. Law § 394-ccc (“the Hateful Conduct Law” or “the law”). Yet, the First Amendment protects from state regulation speech that may be deemed “hateful” and generally disfavors regulation of speech based on its content unless it is narrowly tailored to serve a compelling governmental interest. The Hateful Conduct Law both compels social media networks to speak about the contours of hate speech and chills the constitutionally protected speech of social media users, without articulating a compelling governmental interest or ensuring that the law is narrowly tailored to that goal. In the face of our national commitment to the free expression of speech, even where that speech is offensive or repugnant, Plaintiffs’ motion for preliminary injunction, prohibiting enforcement of the law, is GRANTED….

The Hateful Conduct Law does not merely require that a social media network provide its users with a mechanism to complain about instances of “hateful conduct”. The law also requires that a social media network must make a “policy” available on its website which details how the network will respond to a complaint of hateful content. In other words, the law requires that social media networks devise and implement a written policy—i.e., speech.

For this reason, the Hateful Conduct Law is analogous to the state mandated notices that were found not to withstand constitutional muster by the Supreme Court and the Second Circuit: NIFLA and Evergreen. In NIFLA, the Supreme Court found that plaintiffs—crisis pregnancy centers opposing abortion—were likely to succeed on the merits of their First Amendment claim challenging a California law requiring them to disseminate notices stating the existence of family-planning services (including abortions and contraception). The Court emphasized that “[b]y compelling individuals to speak a particular message, such notices ‘alte[r] the content of [their] speech.’” Likewise, in Evergreen, the Second Circuit held that a state-mandated disclosure requirement for crisis pregnancy centers impermissibly burdened the plaintiffs’ First Amendment rights because it required them to “affirmatively espouse the government’s position on a contested public issue….”

Similarly, the Hateful Conduct Law requires a social media network to endorse the state’s message about “hateful conduct”. To be in compliance with the law’s requirements, a social media network must make a “concise policy readily available and accessible on their website and application” detailing how the network will “respond and address the reports of incidents of hateful conduct on their platform.” N.Y. Gen. Bus. Law § 394-ccc(3). Implicit in this language is that each social media network’s definition of “hateful conduct” must be at least as inclusive as the definition set forth in the law itself. In other words, the social media network’s policy must define “hateful conduct” as conduct which tends to “vilify, humiliate, or incite violence” “on the basis of race, color, religion, ethnicity, national origin, disability, sex, sexual orientation, gender identity or gender expression.” N.Y. Gen. Bus. Law § 394-ccc(1)(a). A social media network that devises its own definition of “hateful conduct” would risk being in violation of the law and thus subject to its enforcement provision….

Clearly, the law, at a minimum, compels Plaintiffs to speak about “hateful conduct”. As Plaintiffs note, this compulsion is particularly onerous for Plaintiffs, whose websites have dedicated “pro-free speech purpose[s]”, which likely attract users who are “opposed to censorship”. Requiring Plaintiffs to endorse the state’s definition of “hateful conduct” forces them to weigh in on the debate about the contours of hate speech when they may otherwise choose not to speak. In other words, the law “deprives Plaintiffs of their right to communicate freely on matters of public concern” without state coercion.

Additionally, Plaintiffs have an editorial right to keep certain information off their websites and to make decisions as to the sort of community they would like to foster on their platforms. It is well-established that a private entity has an ability to make “choices about whether, to what extent, and in what manner it will disseminate speech…” These choices constitute “editorial judgments” which are protected by the First Amendment. In Pacific Gas & Electric Co. v. Public Utilities Commission of California, the Supreme Court struck down a regulation that would have forced a utility company to include information about a third party in its billing envelopes because the regulation “require[d] appellant to use its property as a vehicle for spreading a message with which it disagrees.”

Here, the Hateful Conduct Law requires social media networks to disseminate a message about the definition of “hateful conduct” or hate speech—a fraught and heavily debated topic today. Even though the Hateful Conduct Law ostensibly does not dictate what a social media website’s response to a complaint must be and does not even require that the networks respond to any complaints or take down offensive material, the dissemination of a policy about “hateful conduct” forces Plaintiffs to publish a message with which they disagree. Thus, the Hateful Conduct Law places Plaintiffs in the incongruous position of stating that they promote an explicit “pro-free speech” ethos, but also requires them to enact a policy allowing users to complain about “hateful conduct” as defined by the state….

The policy disclosure at issue here does not constitute commercial speech [as to which compelled disclosures are more easily upheld] …. The law’s requirement that Plaintiffs publish their policies explaining how they intend to respond to hateful content on their websites does not simply “propose a commercial transaction”. Nor is the policy requirement “related solely to the economic interests of the speaker and its audience.” Rather, the policy requirement compels a social media network to speak about the range of protected speech it will allow its users to engage (or not engage) in. Plaintiffs operate websites that are directly engaged in the proliferation of speech ….

Because the Hateful Conduct Law regulates speech based on its content, the appropriate level of review is strict scrutiny. To satisfy strict scrutiny, a law must be “narrowly tailored to serve a compelling governmental interest.” A statute is not narrowly tailored if “a less restrictive alternative would serve the Government’s purpose.”

Plaintiffs argue that limiting the free expression of protected speech is not a compelling state interest and that the law is not narrowly tailored. While Defendant concedes that the Hateful Conduct Law may not be able to withstand strict scrutiny, she maintains that the state has a compelling interest in preventing mass shootings, such as the one that took place in Buffalo.

Although preventing and reducing the instances of hate-fueled mass shootings is certainly a compelling governmental interest, the law is not narrowly tailored toward that end. Conduct that incites violence is not protected by the First Amendment, but this law goes far beyond banning such conduct. {For speech to incite violence, “there must be ‘evidence or rational inference from the import of the language, that [the words in question] were intended to produce, and likely to produce, imminent’ lawless action.” The Hateful Conduct Law’s ban on speech that incites violence is not limited to speech that is likely to produce imminent lawless action.}

While the OAG Investigative Report does make a link between misinformation on the internet and the radicalization of the Buffalo mass shooter, even if the law was truly aimed at reducing the instances of hate-fueled mass shootings, the law is not narrowly tailored toward reaching that goal. It is unclear what, if any, effect a mechanism that allows users to report hateful conduct on social media networks would have on reducing mass shootings, especially when the law does not even require that social media networks affirmatively respond to any complaints of “hateful conduct”. In other words, it is hard to see how the law really changes the status quo—where some social media networks choose to identify and remove hateful content and others do not….

The court also concluded that the law was facially overbroad, as well as being unconstitutional as applied to Rumble, Locals, and me:

As the Court has already discussed, the law is clearly aimed at regulating speech. Social media websites are publishers and curators of speech, and their users are engaged in speech by writing, posting, and creating content. Although the law ostensibly is aimed at social media networks, it fundamentally implicates the speech of the networks’ users by mandating a policy and mechanism by which users can complain about other users’ protected speech.

Moreover, the Hateful Conduct Law is a content based regulation. The law requires that social media networks develop policies and procedures with respect to hate speech (or “hateful conduct” as it is recharacterized by Defendant). As discussed, the First Amendment protects individuals’ right to engage in hate speech, and the state cannot try to inhibit that right, no matter how unseemly or offensive that speech may be to the general public or the state. Thus, the Hateful Conduct Law’s targeting of speech that “vilif[ies]” or “humiliat[es]” a group or individual based on their “race, color, religion, ethnicity, national origin, disability, sex, sexual orientation, gender identity or gender expression”, N.Y. Gen. Bus. Law § 394-ccc(1)(a), clearly implicates the protected speech of social media users.

This could have a profound chilling effect on social media users and their protected freedom of expression. Even though the law does not require social media networks to remove “hateful conduct” from their websites and does not impose liability on users for engaging in “hateful conduct”, the state’s targeting and singling out of this type of speech for special measures certainly could make social media users wary about the types of speech they feel free to engage in without facing consequences from the state. This potential wariness is bolstered by the actual title of the law—“Social media networks; hateful conduct prohibited”—which strongly suggests that the law is really aimed at reducing, or perhaps even penalizing people who engage in, hate speech online. As Plaintiffs noted during oral argument, one can easily imagine the concern that would arise if the government required social media networks to maintain policies and complaint mechanisms for anti-American or pro-American speech. Moreover, social media users often gravitate to certain websites based on the kind of community and content that is fostered on that particular website. Some social media websites—including Plaintiffs’—intentionally foster a “pro-free speech” community and ethos that may become less appealing to users who intentionally seek out spaces where they feel like they can express themselves freely.

The potential chilling effect on social media users is exacerbated by the indefiniteness of some of the Hateful Conduct Law’s key terms. It is not clear what terms like “vilify” and “humiliate” mean for the purposes of the law. While it is true that there are readily accessible dictionary definitions of those words, the law does not define what type of “conduct” or “speech” could be encapsulated by them. For example, could a post using the hashtag “BlackLivesMatter” or “BlueLivesMatter” be considered “hateful conduct” under the law? Likewise, could social media posts expressing anti-American views be considered conduct that humiliates or vilifies a group based on national origin? It is not clear from the face of the text, and thus the law does not put social media users on notice of what kinds of speech or content are now the target of government regulation.

Accordingly, because the Hateful Conduct Law appears to “reach[…] a substantial amount of constitutionally protected conduct”, the Court finds that Plaintiffs have demonstrated a likelihood of success on their facial challenges under the First Amendment.

The court disagreed, however, with our argument that the law violated 47 U.S.C. § 230:

The Communications Decency Act provides that “[n]o provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.” … [T]he Hateful Conduct Law shows that Plaintiffs’ argument is without merit. The law imposes liability on social media networks for failing to provide a mechanism for users to complain of “hateful conduct” and for failure to disclose their policy on how they will respond to complaints. The law does not impose liability on social media networks for failing to respond to an incident of “hateful conduct”, nor does it impose liability on the network for its users’ own “hateful conduct”. The law does not even require that social media networks remove instances of “hateful conduct” from their websites. Therefore, the Hateful Conduct Law does not impose liability on Plaintiffs as publishers in contravention of the Communications Decency Act.

Many thanks to FIRE—and in particular Darpana Sheth, Daniel Ortner, and Jay Diaz—as well as local counsel Barry Covert (of Lipsitz Green Scime Cambria LLP) for representing me in this case.