From the linked WIRED article:

> “Section 230 (c)(2)(B) is quite explicit about libraries, parents, and others having the ability to control obscene or other unwanted content on the internet,” says Zuckerman.

Well indeed, here's my interpretation of [the section](https://www.law.cornell.edu/uscode/text/47/230#c_2_B):

> no [...] user of an interactive computer service shall be held liable on account of [...] any action taken to enable [...] the technical means to restrict access to material [provided by others].

That seems to map pretty cleanly to Unfollow Everything. The *any action* part is encouraging. I hope it works out!

I 100% agree that "middlewares" are a big deal. Somewhat related, the Rabbit R1 makers are working on a "Large Action Model", which is supposed to be able to use apps on the user’s behalf. I can imagine asking such a system "What are the latest updates from my family on Facebook?", and getting an answer, without seeing a single Meta ad or feed bloat. Perhaps this is the ultimate middleware. For them too, this lawsuit seems very relevant.

It is unfair how unequally legal systems have applied to big corporations versus small developers: big corps take bad-faith actions for financial gain, while Unfollow Everything and others act in good faith to promote time well spent.

Big tech can scrape the entire internet without permission, including content from small players, and then present AI-generated answers derived from their work. The small players lose ad revenue, but only other big corporations like the NYT can stand up to this, so the abuse goes on. Meanwhile, a small player reducing a minuscule amount of big tech ad traffic immediately draws an existential C&D.

It would be awesome if the scales were balanced here.
