Seeing the updates from https://frame.work/, I'm so happy I spent money on this machine. I won't be buying any new hardware yet (though I'm thinking of doubling my RAM), but the fact that we finally have an ARM offering, a matte screen, and a newer board for Intel? Pretty dope. I do want to consider building my next home PC within the chassis they've suggested. Only if I need it — I don't right now.

choosing algorithms: we live in a society


And we can’t settings-toggle our way out



Tim Bray is somebody one can reasonably look up to with regard to a lot of things. He recently moved from Twitter to Mastodon and I and 3.9 thousand others are happy to have him there.



He posted – linked above – about how Mastodon does its algorithmic feed-building a bit differently, how the users think about it differently, and how Mastodon could allow a new user-choice-centered approach. I had feelings and thoughts spurred by the post (as well as by other Discourse of the moment, as will probably be evident between the lines). Go read it first!



Do engagement-maximizing algorithms maximize engagement by giving us what we want?




Protect me from what I want ·
And anyhow, those algorithms are just showing you what you want. Don’t try to deny it, if it wasn’t what you wanted you wouldn’t be doomscrolling so much, would you? These ML models know what you want and that’s what they show you.

(Jenny Holzer is wonderful. On…




I think this is presented in a tongue-in-cheek way, but I want to unpack it a bit for reasons that will become clear.




Man can do what he wills but he cannot will what he wills.




Schopenhauer, I’m told



I do not want to eat an entire loaf of Oregon Hazelnut bread in an approximately two-day-long period. If I buy the loaf and have it in my house, I will do this. Therefore, I do not buy the loaf.



Should my ephemeral will to eat be considered more authentic or real than my longer-term will to not eat? The latter is capable of overruling the former by not buying the bread. The former certainly wins when the bread’s around. Clearly “what I want” is both to eat the bread and to not eat the bread. It’s not that one is a “want” and one is a “meta-want”, because both straightforwardly inform my actions to bring about their intended results – it’s just that those actions oppose each other by being enacted at different timescales.



This isn’t binary, either; if we could talk about “what I want” simply at a meaningful level, I think you wouldn’t expect it to be helpful for me to put Franz pumpkin bread in a cabinet too high for easy perusal. But when I do, I eat less pumpkin bread. Is it that I want to eat the bread and am stymied by reaching up? Is it that I want to not eat the bread and am protected from its siren call by cabinet doors? We can pretend to model it with quantities, “activation energy”, “speed bumps”, but I think we all know we’re discussing a map, not the territory.



Volition isn’t simple enough for this language to capture. To apply something akin to systems thinking here, we can recognize that what I “want” and what my environment cues me toward also interact at a higher level: what I want my environment to cue me toward. Whether I have the bread in the cupboard is the level at which my short-term and longer-term decisions interact.
That’s the level at which people are having this discussion – so it is flattening to pull it back down to a level equivalent to “well if you didn’t want to eat so much bread then why did you eat it all when it was there hmm”.



So – what? Who cares? Well, I’m particularly touchy about how we talk about volition because it is not separable from how we discuss actual addiction (no, not my problematic relationship with bread). There are some incorrect but deep-seated puritanical beliefs about how self-control works that show up in how we talk about addiction: “you don’t really want to quit, or you would have quit” is the kind of thing that I have heard pretty commonly voiced with regard to alcohol, with regard to drugs, with regard to smoking… but predominantly voiced by people who haven’t gone through it and haven’t seen a loved one going through it1.



Oh, and all that is premised on choices that are pretty easy to notice yourself making! It’s easy to drift into niches on TikTok for quite some time before you even notice the pattern you’re being shown. It’s easy to make choices without awareness of their impact. The poisonous Bradford lozenges were tasty enough to be sold by someone who’d had one.



If we are to describe the most fleeting want as the most real, and the conscious repudiation of it as inauthentic, less true, a mere “want to want”, then we arrive at some silly places when it comes to how entirely non-suicidal people experience “l’appel du vide”.



Is my least aware, most instinct-driven self my truest self? When I went on antidepressants that let me wrest control back from that self, was I denying who I was?



You can tell I have pretty strong feelings about this stuff because I’m writing all this in response to a paragraph that, to be clear, seems like it was presented with irony. But… oof. When former Twitter employees write about how they’re confident people preferred non-chronological feed-building algorithms because that’s what the data showed – I mean, what would they have been inferring from data about me and bread?



Can the debate over what feed-building algorithms should be like be resolved by giving individuals choices?




…plausible, but very difficult.
Mastodon introduces a feature where you can download and install algorithms, which can be posted by anyone. They are given the raw unsorted list of posts from people you follow and use that to produce a coherent feed. You might have to pay for them. They could be free. They could involve elaborate ML, or not. They might sometimes pull in posts from people you don’t follow. They could be open-source, or not.

I like this idea a lot, although the technology would require careful design. The algorithm would have to be a…
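
To make the shape of that proposal concrete, here's a minimal sketch in Python. It's entirely hypothetical: Mastodon exposes no such plugin API, and the Post fields and function names here are invented for illustration.

```python
from dataclasses import dataclass
from typing import Callable

# Hypothetical post shape; real Mastodon statuses carry far more fields.
@dataclass
class Post:
    author: str
    timestamp: float  # Unix epoch seconds
    likes: int
    text: str

# In this sketch, an "algorithm" is just a function from the raw
# unsorted list of posts from people you follow to an ordered feed.
FeedAlgorithm = Callable[[list[Post]], list[Post]]

def reverse_chronological(posts: list[Post]) -> list[Post]:
    """The current Mastodon default: newest first."""
    return sorted(posts, key=lambda p: p.timestamp, reverse=True)

def like_weighted(posts: list[Post]) -> list[Post]:
    """An engagement-flavored alternative: likes boost prominence."""
    return sorted(posts, key=lambda p: (p.likes, p.timestamp), reverse=True)

def build_feed(posts: list[Post], algorithm: FeedAlgorithm) -> list[Post]:
    # The user-chosen algorithm is swappable: downloaded, paid-for,
    # ML-driven, open-source, or none of the above.
    return algorithm(posts)
```

Even at this toy scale you can see the hook for the externalities discussed below: the moment anyone in your orbit runs something like like_weighted, hitting “like” stops being a purely private gesture.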




Look, it’s not that it’s implied by the above, but historically, I think it’d be fair to say tech folks as a whole have under-considered the externalities of individual choices. The mindset has been approximately: if it’s a setting an individual can toggle, surely it’s only that individual’s business? We can see this showing up in the replies to Tim Bray’s original Mastodon post: “not only ‘own your own data’, but own the way you view it too”, “people who are against algorithms can just turn them off in their settings”.



If I were going to phrase this as a Take, it might be this: people with enough technical skill to build and influence stuff should not believe this. In a world with Pizzagate and genocide in Myanmar and eight million other awful things the Internet has inflamed just since I graduated high school, innocent optimism is irresponsible; we have to own what our systems incentivize. So I want to take a tediously explicit look at how even individually-selected transparent feed-building algorithms have knock-on effects in a non read-only world.



Non read-only? Well, to compare: if we were to ignore the fact that I linkblog from the stuff I find there, the way that I sort the stuff in my RSS reader probably shouldn’t matter to anyone but me.



But in an actually social medium, we’re all in the soup together.




  • If other people are using feed-building algorithms that weight “likes” into the prominence of content, then I can no longer hit like on a post without that also constituting an act of signal-boosting, which may direct unwanted attention to a person to whom I just wanted to give acknowledgement2. My use of this feature now has to shape itself around other people’s algorithmic choice.

  • Since we know there are people who enjoy being assholes on the internet recreationally, if they are empowered to more easily use feed-building algorithms other than reverse chronological sort, it follows they will find algorithms better suited for the kind of griefing they want to do. (The classic Twitter “ratio” makes clear that even very simple signals are enough to find an ongoing dogpile if that’s your goal, and there are already controversial Fediverse forks introducing features some believe to be abusive, so this prediction isn’t a stretch.) Even if I don’t use those algorithms myself, that’s going to impact the number of turds in the pool that I was planning to swim in.

  • If the surface area of algorithm-to-understand becomes as complicated as “anything anyone might build and plug in”, this will encourage the kind of paranoia that one sees in TikTok word-substitutions for purportedly downranked keywords. You will get a lot more superstitious pigeons as different people try to meet their complex goals of social media participation without a full understanding of the landscape. Listening to artists on Instagram complain and theorize about how to maintain reach across various incarnations of that feed-building algorithm has led me to believe that chaos/complexity in this domain has real costs, ones that are disproportionate to what the technically-minded might assume. (How much money gets spent on scammy “beat the algorithm” Instagram/YouTube courses every day? The purchasers are doubly victims: of the scammers, and of an algorithmic world too complicated for them to make sense of.)

  • Right now the default reverse chronological sort produces a neat decay in engagement with posts. It takes real social effort to keep something in circulation. This serves usefully to limit the spread of some kinds of take-y discourse. We only get to have this conversational atmosphere through alignment, through sharing a quieter lower-engagement view on the world. If a lot of other individuals – even well-meaning ones – are picking Thunderdome feed-building algorithms, I might be able to keep my reverse chronological algorithm, but the universe of posts that are out there for me to sort reverse chronologically is going to be far, far more Thunderdomey. If the people I follow are tempted by the junk-food Thunderdome-style engagement numbers, the current conversational atmosphere I get to have with them will dissipate as they’re nudged toward Takes. Maybe I can invest ever-increasing amounts of effort in filtering out the nonsense, or I can walk away entirely – but if a market is all about incentives, where is the cost of this accounted for?



Individualizing stuff like this feels like freedom, especially to those of us with the skillset to reach our hands into the guts and tinker. There’s something to that, certainly. I spend far too much time writing user scripts not to see the romance. But I can’t help thinking of how we’re struggling to de-individualize issues like plastic pollution, where we didn’t adequately take into consideration how my “individual choice” is constrained by others’ in aggregate, and how my choice in turn impacts others. There are huge collective action problems in clawing back the messes free marketplaces create where externalities are high, and I think that’s a really important lesson we should take from the past couple decades of the social web.



You can see how people’s perceived “freedoms” bump up against each other: my freedom to “like” without ramifications, your freedom to use the public “like” data to some end. Etiquette encompasses ethics and politics. I’m sure many people I respect hold different beliefs about how those conflicts should be resolved. The only thing that I really, really, really hope we’re all explicit about, is that pushing “control to the edge” does not sidestep the impact and cannot be seen as neutral.



Conclusion



joker "we live in a society" gif



If you are interested in more of Maya’s Thoughts On Social Media Algorithms™️



Here are a couple things from the archives that seem real relevant:




  • The death of the newsfeed (and its afterlife) – I read a piece on social media written by a venture capitalist (I know, I know) and it gave me a lot of thoughts3.

  • No, the Facebook feed algorithm is still bad – The expressed connections people make on social media are made in a context of expected result. The expectations are a product of built-up experience with social practice and with the feed-building algorithm. Yanking around the algorithm as a blind experiment is bad and unethical – and it’s also not representative of what people’s experiences with it would be with signaling and time to adjust.









  1. Further reading: buprenorphine injections for opioid addiction, bupropion for smokers: these can’t work by manufacturing virtuous desires, or no one would take them in the first place. 




  2. This is already a problem with “trending posts”. On the other hand, at least they ignore unlisted content – but on the third hand, it’s already a pain to have to suss out whether my posts should be public, unlisted, or followers-only… 




  3. You may perceive an inconsistency between some of my doom and gloom here and musing there. There, I note that I actually liked the Twitter feature that shoved “liked” posts into my feed, that if I could see it separately on an opt-in basis that could be nice/cool. Here, I note that the social context of the like feature on Mastodon relies on its current non-use. This isn’t strictly contradictory – but one thing I’ll note is that on Mastodon, I hide retoots4 in my timeline by default, and tab over to a feed where they’re enabled, and this gives me pretty much the experience that I was hoping for in that piece. (Except, of course, that because other people don’t all do this, I have to be more parsimonious with my retooting to not spam their main feeds. A society, I say!) 




  4. If Eugen is going to try to take “toot” away from us, I must double-down on its use. (Think “retweet” from Twitter, or “boost” in the standard Mastodon UI.) 




by Maya

If you're in a position where you can determine whether a union contract passes (an executive or close to one, like an SVP) and you're actively against it internally, say that externally. Let the people you're lying to on a daily basis know that you, in fact, do not value them and see them as nothing more than an "expensive line item". Retract any mentions of caring about people from your company's messaging, because internally it's a farce.

If you allegedly trust the skill and minds of your workers, then have enough respect to let them be fully recognized in everyday operations, not as some "asset" but as an authority over what they do — that's why you hired them.

Read https://about.sourcegraph.com/blog/cheating-is-all-you-need; it's a decent intro to what LLMs are and how they can be used (from chat assistants to automation levers). It constantly shocks me that the more people entertain this, the less they realize they're opening a hatch beneath open roles, one that'll weaken (and move) lower-income roles and expose positions conventionally deemed "creative" to abuse. It's a toy to people who can't relate beyond themselves, and a flag of fear to those who understand how the history of automation has led more people into more abusive situations.

Good question. PKCE is an extension and not part of core OAuth (from what I understand), so I imagine the state parameter plus the PKCE logic gives a server and client some extra verification to work with, like being able to agree on the hashing algorithm in advance (my implementation opts for S512).
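
For concreteness, here's a minimal sketch of the client side of PKCE in Python. One caveat: RFC 7636 registers only "plain" and "S256" as code_challenge_method values, so the S512 branch below is an assumption modeled on the custom choice mentioned above, and make_pkce_pair is just a hypothetical helper name.

```python
import base64
import hashlib
import secrets

def make_pkce_pair(method: str = "S256") -> tuple[str, str]:
    """Generate a PKCE code_verifier and matching code_challenge."""
    # 32 random bytes -> a 43-char URL-safe verifier (spec allows 43-128 chars).
    verifier = base64.urlsafe_b64encode(secrets.token_bytes(32)).rstrip(b"=").decode()
    if method == "plain":
        challenge = verifier
    elif method == "S256":
        digest = hashlib.sha256(verifier.encode("ascii")).digest()
        challenge = base64.urlsafe_b64encode(digest).rstrip(b"=").decode()
    elif method == "S512":
        # Non-standard, illustrative only; works only if the server accepts it.
        digest = hashlib.sha512(verifier.encode("ascii")).digest()
        challenge = base64.urlsafe_b64encode(digest).rstrip(b"=").decode()
    else:
        raise ValueError(f"unknown code_challenge_method: {method}")
    return verifier, challenge

# The client sends code_challenge (plus the method) in the authorization
# request, keeps code_verifier secret, and presents the verifier at the
# token endpoint so the server can recompute and compare the challenge.
```

And on state specifically: state mainly defends against CSRF by tying the callback to the session that started the flow, while PKCE defends against authorization-code interception, so they're complementary rather than redundant.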

Whew. https://www.roguelazer.com/2020/07/etcd-or-why-modern-software-makes-me-sad/. I wish this (the effects of working at FAANG/MANGA companies and spreading those corporate processes into other places only because of some idea that "it's the best way to do things") were something more people pushed against. It definitely happened at Lyft when I worked there, with the influx of Google and Facebook developers basically strong-arming internal services to use more Google-y shit.

by https://jacky.wtf

Got to that via https://mmapped.blog/posts/17-scaling-rust-builds-with-bazel.html, another good read about a journey of different build processes for Rust (which ironically promotes using Google's build system, Bazel).


Fuck publishing houses for using copyright as an angle to control what gets to be put into mainstream circulation. https://arstechnica.com/tech-policy/2023/03/book-publishers-with-surging-profits-struggle-to-prove-internet-archive-hurt-sales/

by https://jacky.wtf

It's like charging people for bottled water (or fucking bottling water and not building public infra to distribute water to whoever wants and needs it - because you need it to stay alive) but in the realm of knowledge. Fuck them.

I do think that the same level of deregulation that Nixon and Reagan pushed for, the kind that allowed the heavy proliferation of abusive industries like big pharma, telecommunications, and food, made AI like this the natural result (yes, generative AI is abusive, since it doesn't have a concept of consent, and consent is one of the prevailing issues of networked services and vulnerable people).

by https://jacky.wtf

The only true benefits of these events are the profits of private companies and of officials who can yo-yo between the private and public sectors (like DAGs who go from Lockheed to Purdue, or public CTOs who've worked at companies that deployed violent systems on civilians, like Raytheon or Palantir). Not something that's really reported in commercial media either (gags are held in media for "reasons", if not shot down).