Internet content (be it a news article, a song, a thirty-second clip, etc.) is mostly paid for by advertising. While subscription models do exist and there is a gradation to them[1], they are still a minority. Terms like surveillance capitalism should surprise absolutely no one at this point. The classic example is social media users, who not only work to keep the machine running by providing content for free, but at the same time generate information, either directly or in more subtle, second-order ways, that is processed to provide advertisers with optimal matches, so that they can place their advertisements in front of interested customers.
The key here, from the businesses' point of view, is user engagement, or how much time a given user spends on a given site. Taking this to the analog (or formerly analog) world, remember that Trump is very good for news networks. Anything that keeps the largest number of people glued to the screen for the longest period of time achieves the best possible outcome, at least according to some metrics. This is why YouTube is a rabbit hole and why the outrage economy works. No one will complain if a network is turned into a cesspool as long as the quarterly earnings reports go in the right direction.
Thus, by design, there is a lack of context in this system. User engagement is typically measured in time, but there are no emotional tags attached to that interaction window: a person was looking at our site for 5 minutes, but we do not know anything about the quality of those 5 minutes. Was the person reading articles that were making her happier? Did she reach those articles because a friend hate-shared them, and is she probably having a negative reaction? None of this matters, because it runs counter to the basic premise: more engagement leads to more ads being shown, so it must be good. Hence the Trump effect mentioned above, etc. Any other consideration is secondary[2].
However, companies (and therefore advertisers) do care about the context[3]; there can be such a thing as bad publicity. We know that in the past ads have been pulled from YouTube videos due to concerns about certain content, and the same goes for Facebook or Twitter. These are blanket bans, because the granularity that typical ad management systems provide (allow or deny ad sales for certain keywords, geolocations, etc.) does not allow filtering for the emotional aspect. Companies might still want to keep their advertisements up if they are shown alongside content that is not perceived as negative by the eyeballs being sold to the other side of the screen.
Right now, there is no way to signal our intentions when sharing a link. Every share is assumed to be neutral, but we know it is not. Plenty of traffic is driven by people who link to news articles, retweet tweets, or share posts with negative comments attached; other users in their immediate social circle will see those links in a very similar light.
A few years back, blog spam was a problem[4]: automated bots took over the comment systems of popular blogging software and kept posting links to the websites of whoever was controlling them, with the purpose of altering their rank in search engines. After a while, the rel="nofollow" link attribute was proposed as a solution: it signaled to search engines that they should not take those links into account when computing the importance of the target pages. After a time, most if not all blog engines automatically included this attribute in comment links. It was a technical solution, but also an economic one: it was designed to lower the profitability of spammers' methods by keeping their pages out of the first search results. It did not end the problem, but it was something.
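To make the mechanism concrete, here is a minimal sketch (in Python, using only the standard library) of how a crawler might honor nofollow when collecting links; the example URLs are, of course, made up:

```python
from html.parser import HTMLParser

class LinkCollector(HTMLParser):
    """Collect outbound links, setting aside those marked rel="nofollow"."""

    def __init__(self):
        super().__init__()
        self.counted = []  # links that would contribute to the target's rank
        self.ignored = []  # nofollow links, excluded from ranking

    def handle_starttag(self, tag, attrs):
        if tag != "a":
            return
        attrs = dict(attrs)
        href = attrs.get("href")
        if href is None:
            return
        # rel can hold several space-separated tokens, e.g. "nofollow noopener"
        if "nofollow" in attrs.get("rel", "").split():
            self.ignored.append(href)
        else:
            self.counted.append(href)

# A typical spammy blog comment, as a blog engine would emit it
comment_html = (
    '<p>Nice post! <a href="https://spam.example" rel="nofollow">cheap pills</a> '
    'See also <a href="https://friend.example">my friend\'s blog</a>.</p>'
)
parser = LinkCollector()
parser.feed(comment_html)
print(parser.counted)  # ['https://friend.example']
print(parser.ignored)  # ['https://spam.example']
```

The economics live in that one branch: a link in `ignored` costs the spammer the same to post, but buys nothing.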
The proposal I am presenting here works in a similar way. Users should be able to tag content that is being shared in a negative light, so that advertisers on the target page know that they would be placing their products in an adverse context; under these conditions, perhaps they would prefer not to bid for any ad at all. It is not technology, but business logic.
Why would someone use this? Let's take a look at the incentives here: users would use this to starve the beast, knowing that they can now link to media sites or articles they do not like with the expectation that visits coming from their links will not be monetized. Advertisers would be better off, as this would help them allocate resources. Content sites (for instance, newspapers) might take a look at the type of articles they are publishing. Lastly, social media sites would absolutely be delighted with the idea[5], as it would help their stated purpose of bringing communities together in peace and love and harmony.
A lot would need to change for this to happen. Interfaces would need new affordances ("I am hate-sharing / mocking / joining the mob here"); browsers would need to recognize whatever tagging system is used and send the proper headers over to the target page; ad systems would have to recognize this and let their clients record their preferences for when the tag is present; users would need to understand why this is an option and what effects it has; etc. I am not saying this would be easy, and I am not offering any particular implementation details, mostly because I just have a high-level diagram in my head, but having the idea out there in written form cannot hurt.
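The ad-system end of that chain can be sketched in a few lines. Everything here is hypothetical: the `X-Share-Sentiment` header name is made up for illustration (no such standard exists), and a real system would sit inside an auction pipeline rather than a single function:

```python
# Hypothetical sketch: an ad server consulting a (made-up) share-sentiment
# header before running an auction for a page view.
def should_serve_ads(request_headers, advertiser_accepts_negative=False):
    """Decide whether to auction an ad slot for this page view.

    request_headers: dict of HTTP headers from the incoming visit.
    advertiser_accepts_negative: the preference the advertiser recorded.
    """
    sentiment = request_headers.get("X-Share-Sentiment", "neutral")
    if sentiment == "negative" and not advertiser_accepts_negative:
        return False  # visit came from a hate-share; skip the auction
    return True

# A visit arriving through a hate-shared link is not monetized...
assert should_serve_ads({"X-Share-Sentiment": "negative"}) is False
# ...unless the advertiser explicitly opted in to negative contexts,
assert should_serve_ads({"X-Share-Sentiment": "negative"},
                        advertiser_accepts_negative=True) is True
# and untagged (neutral) visits behave exactly as today.
assert should_serve_ads({}) is True
```

As with nofollow, the point is not the code but the default it sets: tagged traffic earns nothing unless someone deliberately chooses otherwise.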
Of course, a better solution would be for people to stop contributing to the outrage, but that would be asking them to stop being human, so it is not even a starting point. This proposal offers a way out: you can still share that thing you want to mock, and at the same time you can send a signal that ad money spent on any resulting traffic would probably be wasted.
1. There is, for instance, the Financial Times, which is completely closed to non-subscribers; mini-subscription services such as Substack, which allow some content to be posted in the open; and newspapers like the New York Times, which allow a certain number of articles to be read for free every month.
2. Meaning that someone else will hopefully deal with the resulting mess.
3. A more cynical approach (thanks, Milton Friedman) is that companies (should pretend to) care about what consumers think they should care about, if and only if that can potentially impact the bottom line. It would be the sister theory to Keynes' beauty contest.
4. When blogs ruled the Earth.