On October 20, 2020, Congressman Tom Malinowski and Congresswoman Anna G. Eshoo proposed a revision of Section 230 of the Communications Decency Act (“CDA”) that would remove Section 230 immunity in cases where a platform algorithmically amplifies certain kinds of user content and messages. With executives from major social media networks set to testify before the House Energy and Commerce Committee today, this proposal is likely to help set the tone for congressional policy debates over the responsibility platforms should, or should not, bear for the behavior and speech of their users.
The proposed law, entitled the “Protecting Americans from Dangerous Algorithms Act,” would make two changes to the way Section 230 of the CDA currently functions.
First, the proposed law removes immunity for platforms that use algorithms to amplify user content that is directly relevant to cases involving acts of terrorism or the abridgement of civil rights. For example, if a platform employed an algorithm that amplified user content that encouraged a group of people to interfere with (i) the specific right of American citizens to vote or (ii) the more general right that Americans possess to equal protection of the laws, then the ordinarily applicable Section 230 immunity would be forfeited.
Second, the proposed amendment to Section 230 provides two exemptions to its own application:
- Provided that a platform makes the operation of its algorithm “obvious, understandable, and transparent to a reasonable user,” it retains Section 230 immunity. In other words, if a large technology platform tells users how it amplifies messages, that platform can still claim it is entitled to Section 230 immunity.
- In cases where a platform provides “an algorithm, model, or other computational process” to support search features that users voluntarily opt to use, that platform is still entitled to Section 230 immunity.
It is worth noting that, under a small business exception, the Protecting Americans from Dangerous Algorithms Act would not apply to smaller platforms, defined as those with fewer than 50,000,000 monthly active users.