

In the summer of 2017, three Wisconsin teenagers were killed in a high-speed car crash. At the time of the collision, the boys were using Snapchat’s Speed Filter to record their speed: 123 miles per hour. This was not the first such incident; the same filter had been linked to several other crashes between 2015 and 2017.

Parents of the Wisconsin teenagers sued Snapchat, claiming that its product, which awarded “trophies, streaks, and social recognition” to users who topped 100 miles per hour, was negligently designed to encourage dangerous high-speed driving. A lower court initially found that Section 230 of the Communications Decency Act immunized Snapchat from responsibility, reasoning that the app could not be held liable for third-party content created by people using its Speed Filter. But in 2021 the Ninth Circuit reversed that ruling.

Section 230 largely immunizes platforms from liability for this kind of content. But in this important case, Lemmon v. Snap, the Ninth Circuit drew a critical distinction between a platform’s own harmful product design and its hosting of harmful third-party content. The parents’ argument wasn’t that Snapchat had created or hosted harmful content, but that it had negligently designed a feature, the Speed Filter, that incentivized dangerous behavior. The Ninth Circuit correctly found that the lower court erred in invoking Section 230 as a defense; it was the wrong legal instrument. Instead, the court turned its focus to Snapchat’s negligent design of the Speed Filter, a common product liability tort.

Frustratingly, in the intervening years, and most recently in last month’s US Supreme Court oral arguments for Gonzalez v. Google, the courts have failed to distinguish between harmful content and harmful design choices. Judges hearing these cases, and legislators working to rein in online abuses and harmful activity, must keep this distinction in mind and focus on platforms’ negligent product design rather than becoming distracted by broad claims of Section 230 immunity over harmful content.

At the heart of Gonzalez is the question of whether Section 230 protects YouTube not only when it hosts third-party content, but also when it makes targeted recommendations for what users should watch. Gonzalez’s attorney argued that YouTube should not receive Section 230 immunity for recommending videos, claiming that the act of curating and recommending what third-party material it displays is content creation in its own right. Google’s attorney retorted that its recommendation algorithm is neutral, treating all content it recommends to users in the same way. But these arguments miss the mark. There’s no need to invoke Section 230 at all to address the harms at issue in this case. The problem isn’t that YouTube’s recommendation feature creates new content; it’s that the “neutral” recommendation algorithms are negligently designed, failing to differentiate between, say, ISIS videos and cat videos. In fact, recommendations actively favor harmful and dangerous content.

Recommendation features like YouTube’s Watch Next and Recommended for You, which lie at the core of Gonzalez, materially contribute to harm because they prioritize outrageous and sensational material, and they encourage and monetarily reward users for creating such content. YouTube designed its recommendation features to increase user engagement and ad revenue. The creators of this system should have known that it would encourage and promote harmful behavior.

Although most courts have accepted a sweeping interpretation of Section 230 that goes beyond just immunizing platforms from responsibility for dangerous third-party content, some judges have begun to apply stricter scrutiny to negligent design by invoking product liability. In 2014, for example, Omegle, a video chat service that pairs random users, matched an 11-year-old girl with a 30-year-old man who went on to groom and sexually abuse her for years. In 2022, the judge hearing this case, A.M. v. Omegle, found that Section 230 largely protected the actual material sent by both parties, but held that the platform could still be liable for its negligent design choice of connecting sexual predators with underage victims. Just last week a similar case was filed against Grindr. A 19-year-old from Canada is suing the app because it connected him with adult men who raped him over a four-day period while he was a minor. Again, the lawsuit claims that Grindr was negligent in its age verification process and that it actively sought to attract underage users by targeting its advertising to minors on TikTok. These cases, like Lemmon v. Snap, affirm the importance of focusing on harmful product design features rather than harmful content.

These cases set a promising precedent for how to make platforms safer. When attempts to rein in online abuses focus on third-party content and Section 230, they become mired in thorny free-speech issues that make meaningful change difficult. But if litigators, judges, and regulators sidestep these content issues and instead focus on product liability, they will get at the root of the problem. Holding platforms accountable for negligent design choices that encourage and monetize the creation and proliferation of harmful content is the key to addressing many of the dangers that persist online.




