And we can’t settings-toggle our way out
Tim Bray is somebody one can reasonably look up to with regard to a lot of things. He recently moved from Twitter to Mastodon and I and 3.9 thousand others are happy to have him there.
He posted – linked above – about how Mastodon does its algorithmic feed-building a bit differently, how the users think about it differently, and how Mastodon could allow a new user-choice-centered approach. I had feelings and thoughts spurred by the post (as well as by other Discourse of the moment, as will probably be evident between the lines). Go read it first!
Do engagement-maximizing algorithms maximize engagement by giving us what we want?
> **Protect me from what I want** ·
>
> And anyhow, those algorithms are just showing you what you want. Don’t try to deny it, if it wasn’t what you wanted you wouldn’t be doomscrolling so much, would you? These ML models know what you want and that’s what they show you.
>
> (Jenny Holzer is wonderful. On…
I think this is presented in a tongue-in-cheek way, but I want to unpack it a bit for reasons that will become clear.
> Man can do what he wills but he cannot will what he wills.
>
> – Schopenhauer, I’m told
I do not want to eat an entire loaf of Oregon Hazelnut bread in an approximately two-day-long period. If I buy the loaf and have it in my house, I will do this. Therefore, I do not buy the loaf.
Should my ephemeral will to eat be considered as more authentic or real than my longer-term will to not eat? The latter is capable of overruling the former by not buying the bread. The former certainly wins when the bread’s around. Clearly “what I want” is both to eat the bread and to not eat the bread. It’s not that one is a “want” and one is a “meta-want”, because both straightforwardly inform my actions to bring about their intended results – it’s just that those actions oppose each other by being enacted at different timescales.
This isn’t binary, either; if we could talk about “what I want” simply at a meaningful level, I think you wouldn’t expect it to be helpful for me to put Franz pumpkin bread in a cabinet too high for easy perusal. But when I do, I eat less pumpkin bread. Is it that I want to eat the bread and am stymied by reaching up? Is it that I want to not eat the bread and am protected from its siren call by cabinet doors? We can pretend to model it with quantities, “activation energy”, “speed bumps”, but I think we all know we’re discussing a map, not the territory.
Volition isn’t simple enough to use this language. To apply something akin to systems thinking here, we can recognize that what I “want” and what my environment cues me toward also interact at a higher level: what I want my environment to cue me toward. Whether I have the bread in the cupboard is the level at which my short-term and longer-term decisions interact.
That’s the level at which people are having this discussion – so it is flattening to pull it back down to a level equivalent to “well if you didn’t want to eat so much bread then why did you eat it all when it was there hmm”.
So – what? Who cares? Well, I’m particularly touchy about how we talk about volition because it is not separable from how we discuss actual addiction (no, not my problematic relationship with bread). There are some incorrect but deep-seated puritanical beliefs about how self-control works that show up in how we talk about addiction: “you don’t really want to quit, or you would have quit” is the kind of thing that I have heard pretty commonly voiced with regard to alcohol, with regard to drugs, with regard to smoking… but predominantly voiced by people who haven’t gone through it and haven’t seen a loved one going through it.
Oh, and all that is premised on choices that are pretty easy to notice yourself making! It’s easy to drift into niches on TikTok for quite some time before you even notice the pattern you’re being shown. It’s easy to make choices without awareness of their impact. The poisonous Bradford lozenges were tasty enough to be sold on by someone who’d eaten one himself.
If we are to describe the most fleeting want as the most real, and the conscious repudiation of it as inauthentic, less true, a mere “want to want”, then we arrive at some silly places when it comes to how entirely non-suicidal people experience “l’appel du vide” – the call of the void.
Is my least aware, most instinct-driven self my truest self? When I went on antidepressants that let me wrest control back from that self, was I denying who I was?
You can tell I have pretty strong feelings about this stuff because I’m writing all this in response to a paragraph that, to be clear, seems like it was presented with irony. But… oof. When former Twitter employees write about how they’re confident people preferred non-chronological feed-building algorithms because that’s what the data showed – I mean, what would they have been inferring from data about me and bread?
Can the debate over what feed-building algorithms should be like be resolved by giving individuals choices?
…plausible, but very difficult.
> Mastodon introduces a feature where you can download and install algorithms, which can be posted by anyone. They are given the raw unsorted list of posts from people you follow and use that to produce a coherent feed. You might have to pay for them. They could be free. They could involve elaborate ML, or not. They might sometimes pull in posts from people you don’t follow. They could be open-source, or not.
>
> I like this idea a lot, although the technology would require careful design. The algorithm would have to be a…
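Bray leaves the interface deliberately vague, but the core contract is simple: an algorithm is a function from the raw unsorted post list to an ordered feed. A minimal sketch in Python – all the names here are mine for illustration, not any real Mastodon API:

```python
from typing import Callable

# A post, reduced to fields a feed algorithm might care about.
# (Illustrative shape; not Mastodon's actual data model.)
Post = dict  # e.g. {"author": str, "posted_at": unix_seconds, "likes": int}

# The whole plug-in contract: raw unsorted posts in, coherent feed out.
FeedAlgorithm = Callable[[list[Post]], list[Post]]

def reverse_chronological(posts: list[Post]) -> list[Post]:
    """The familiar default: newest first, nothing else considered."""
    return sorted(posts, key=lambda p: p["posted_at"], reverse=True)

def render_home_timeline(posts: list[Post], algo: FeedAlgorithm) -> list[Post]:
    # The client hands the raw posts to whichever algorithm
    # the user has chosen to install.
    return algo(posts)
```

Paid, ML-heavy, or follower-expanding algorithms would all fit behind that same signature; the careful design Bray alludes to (sandboxing, what data the function is allowed to see) lives outside it.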
Look, it’s not that the quote above implies this, but historically I think it’d be fair to say tech folks as a whole have under-considered the externalities of individual choices. The mindset has been approximately: if it’s a setting an individual can toggle, surely it’s only that individual’s business? We can see this showing up in the replies to Tim Bray’s original Mastodon post: “not only ‘own your own data’, but own the way you view it too”; “people who are against algorithms can just turn them off in their settings”.
If I were going to phrase this as a Take, it might be this: people who have enough technical skill to build and influence stuff should not believe this. In a world with Pizzagate and genocide in Myanmar and eight million other awful things the Internet has inflamed just since I graduated high school, innocent optimism is irresponsible; we have to own what our systems incentivize. So I want to take a tediously explicit look at how even individually-selected transparent feed-building algorithms have knock-on effects in a non read-only world.
Non read-only? Well, to compare: if we were to ignore the fact that I linkblog from the stuff I find there, the way that I sort the stuff in my RSS reader probably shouldn’t matter to anyone but me.
But in an actually social medium, we’re all in the soup together.
- If other people are using feed-building algorithms that weight “likes” into the prominence of content, then I can no longer hit like on a post without that also constituting an act of signal-boosting, which may direct unwanted attention to a person to whom I just wanted to give acknowledgement. My use of this feature now has to shape itself around other people’s algorithmic choice.
- Since we know there are people who enjoy being assholes on the internet recreationally, if they are empowered to more easily use feed-building algorithms other than reverse chronological sort, it follows they will find algorithms better suited for the kind of griefing they want to do. (The classic Twitter “ratio” makes clear that even very simple signals are enough to find an ongoing dogpile if that’s your goal, and there are already controversial Fediverse forks introducing features some believe to be abusive, so this prediction isn’t a stretch.) Even if I don’t use those algorithms myself, that’s going to impact the number of turds in the pool that I was planning to swim in.
- If the surface area of algorithm-to-understand becomes as complicated as “anything anyone might build and plug in”, this will encourage the kind of paranoia one sees in TikTok word-substitutions for purportedly downranked keywords. You will get a lot more superstitious pigeons as different people try to meet their complex goals of social media participation without a full understanding of the landscape. Listening to artists on Instagram complain and theorize about how to maintain reach across various incarnations of that feed-building algorithm has led me to believe that chaos and complexity in this domain have real costs, ones disproportionate to what the technically-minded might assume. (How much money gets spent on scammy “beat the algorithm” Instagram/YouTube courses every day? The purchasers are victims twice over: of the scammers, and of an algorithmic world too complicated for them to make sense of.)
- Right now the default reverse chronological sort produces a neat decay in engagement with posts. It takes real social effort to keep something in circulation. This usefully limits the spread of some kinds of take-y discourse. We only get to have this conversational atmosphere through alignment, through sharing a quieter, lower-engagement view on the world. If a lot of other individuals – even well-meaning ones – are picking Thunderdome feed-building algorithms, I might be able to keep my reverse chronological algorithm, but the universe of posts that are out there for me to sort reverse chronologically is going to be far, far more Thunderdomey. If the people I follow are tempted by the junk-food Thunderdome-style engagement numbers, the current conversational atmosphere I get to have with them will dissipate as they’re nudged toward Takes. Maybe I can invest ever-increasing amounts of effort in filtering out the nonsense, or I can walk away entirely – but if a market is all about incentives, where is the cost of this accounted for?
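To make the Thunderdome dynamic concrete, here is a toy comparison in Python (the scoring formula is invented for illustration, not taken from any real platform): under reverse-chronological sort a post simply ages out of the feed, while a likes-weighted score keeps a high-engagement post on top long after quieter, newer posts arrive.

```python
def reverse_chronological(posts):
    # The default: newest first, so every post decays out on its own.
    return sorted(posts, key=lambda p: p["posted_at"], reverse=True)

def likes_weighted(posts, half_life_hours=24.0):
    # Invented toy score: likes boost prominence, recency decays it.
    now = max(p["posted_at"] for p in posts)
    def score(p):
        age_hours = (now - p["posted_at"]) / 3600
        return (1 + p["likes"]) * 0.5 ** (age_hours / half_life_hours)
    return sorted(posts, key=score, reverse=True)

posts = [
    {"text": "quiet update", "posted_at": 100 * 3600, "likes": 2},
    {"text": "spicy take",   "posted_at": 40 * 3600,  "likes": 400},
]
# Reverse chronological surfaces the fresh post; the likes-weighted
# feed keeps the 60-hour-old dogpile magnet on top.
```

Even if my own feed stays on `reverse_chronological`, the posts available for it to sort skew toward whatever `likes_weighted` rewards, because that is what the people I follow are being nudged to write.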
Individualizing stuff like this feels like freedom, especially to those of us with the skillset to reach our hands into the guts and tinker. There’s something to that, certainly. I spend far too much time writing user scripts to not see the romance. But I can’t help thinking of how we’re struggling to de-individualize issues like plastic pollution, where we didn’t adequately consider how my “individual choice” is constrained by others’ in aggregate, and how my choice in turn impacts others. There are huge collective action problems in clawing back the messes free marketplaces create where externalities are high, and I think that’s a really important lesson we should take from the past couple of decades of the social web.
You can see how people’s perceived “freedoms” bump up against each other: my freedom to “like” without ramifications, your freedom to use the public “like” data to some end. Etiquette encompasses ethics and politics. I’m sure many people I respect hold different beliefs about how those conflicts should be resolved. The only thing that I really, really, really hope we’re all explicit about, is that pushing “control to the edge” does not sidestep the impact and cannot be seen as neutral.
Here are a couple things from the archives that seem real relevant:
- The death of the newsfeed (and its afterlife) – I read a piece on social media written by a venture capitalist (I know, I know) and it gave me a lot of thoughts.
- No, the Facebook feed algorithm is still bad – The expressed connections people make on social media are made in a context of expected result. The expectations are a product of built-up experience with social practice and with the feed-building algorithm. Yanking around the algorithm as a blind experiment is bad and unethical – and it’s also not representative of what people’s experiences with it would be with signaling and time to adjust.