Reason

A couple of days ago I linked to a post by Robin Sloan called Is it okay?, saying:

Robin takes a fair and balanced look at the ethics of using large language models.

That’s how it came across to me: fair and balanced.

Robin’s central question is whether the current crop of large language models might one day lead to life-saving super-science, in which case, doesn’t that outweigh the damage they’re doing to our collective culture?

Baldur wrote a response entitled Knowledge tech that’s subtly wrong is more dangerous than tech that’s obviously wrong. (Or, where I disagree with Robin Sloan).

Baldur pointed out that one side of the scale that Robin is attempting to balance is based on pure science fiction:

There is no path from language modelling to super-science.

Robin responded, pointing out that some things we currently have would have seemed like science fiction a few years ago, right?

Well, no. Baldur debunks that in a post called Now I’m disappointed.

(By the way, can I just point out how great it is to see a blog-to-blog conversation like this, regardless of how much they might disagree.)

Baldur kept bringing the receipts. That’s when it struck me that Robin’s stance is largely based on vibes, whereas Baldur’s viewpoint is informed by facts on the ground.

In a way, they’ve got something in common. They’re both advocating for an interpretation of the precautionary principle, just from completely opposite ends.

Robin’s stance is that if these tools one day yield amazing scientific breakthroughs then that’s reason enough to use them today. It’s uncomfortably close to the reasoning of the effective accelerationist nutjobs, but in a much milder form.

Baldur’s stance is that because of the present harms being inflicted by current large language models, we should be slamming on the brakes. If anything, the harms are going to multiply, not magically reduce.

I have to say, Robin’s stance doesn’t look nearly as fair and balanced as I initially thought. I’m on Team Baldur.

Michelle also weighs in, pointing out the flaw in Robin’s thinking:

AI isn’t LLMs. Or not just LLMs. It’s plausible that AI (or more accurately, Machine Learning) could be a useful scientific tool, particularly when it comes to making sense of large datasets in a way no human could with any kind of accuracy, and many people are already deploying it for such purposes. This isn’t entirely without risk (I’ll save that debate for another time), but in my opinion could feasibly constitute a legitimate application of AI.

LLMs are not this.

In other words, we’ve got a language collision:

We call them “AI”, we look at how much they can do today, and we draw a straight line to what we know of “AI” in our science fiction.

This ridiculous situation could’ve been avoided if we had settled on a more accurate buzzword like “applied statistics” instead of “AI”.

There’s one other flaw in Robin’s reasoning. I don’t think it follows that future improvements warrant present use. Quite the opposite:

The logic is completely backwards! If large language models are going to improve their ethical shortcomings (which is debatable, but let’s be generous), then that’s all the more reason to avoid using the current crop of egregiously damaging tools.

You don’t get companies to change their behaviour by rewarding them for it. If you really want better behaviour from the purveyors of generative tools, you should be boycotting the current offerings.

Anyway, this back-and-forth between Robin and Baldur (and Michelle) was interesting. But it all pales in comparison to the truth bomb that Miriam dropped in her post Tech continues to be political:

When eugenics-obsessed billionaires try to sell me a new toy, I don’t ask how many keystrokes it will save me at work. It’s impossible for me to discuss the utility of a thing when I fundamentally disagree with the purpose of it.

Boom!

Maybe we should consider the beliefs and assumptions that have been built into a technology before we embrace it? But we often prefer to treat each new toy as an abstract and unmotivated opportunity. If only the good people like ourselves would get involved early, we can surely teach everyone else to use it ethically!

You know what? I could quote every single line. Just go read the whole thing. Please.

Responses

Miriam Eric Suzanne

(in French)

You are here, on my personal web log

A note of warning, before you proceed: this is a journal entry, at a difficult time.

It’s very hard to think or act when you can’t tell if you’re about to lose your job, have your research killed off, have your healthcare terminated, witness unstoppable crimes, or just experience extended and apparently unescapable moral injury.

—Erin Kissane, Against Entropy

TL;DR – This is a post about billionaires who love eugenics, support a pro-eugenics government, and sell us a product that they promise will help their long-term eugenic goals. But really this is a post about how I feel, when colleagues treat that product as though it might have merit, if we just give it a chance.

For some reason, I find that opinion to be in bad taste. I know I shouldn’t yuck your yum, or whatever, but I don’t like eugenics.

Reader, it fucks me up.

Chill out, it’s just a tool

For years we’ve been saying that tech is political, and that tech is not neutral. But I don’t know if we’re communicating the full nuance of that adage. It’s not just a warning about bad Apples (or Palantirs) who might use code to dabble in evil extracurriculars. More important to me is the understanding that technologies often carry an ideology inside them:

It is something of an amusing curiosity that some AI models were perplexed by a giraffe without spots. But it’s these same tools and paradigms that enshrine normativity of all kinds, “sanding away the unusual.”

—Ben Myers, I’m a Spotless Giraffe

Tools tend to exist between us and a goal, and the shape of the tool tells us something about how to proceed, and what outcomes are desirable. Tech enacts and shapes our world, our lives, and our politics.

Guns don’t kill people, guns are designed to help people kill people.

Maybe we should consider the beliefs and assumptions that have been built into a technology before we embrace it? But we often prefer to treat each new toy as an abstract and unmotivated opportunity. If only the good people like ourselves would get involved early, we can surely teach everyone else to use it ethically!

Every tool is a hammer, with context lost to history – and it’s up to us to determine individually what looks like a nail. There is no system, no society, no marketing department, no regulation. Each of us is an island of isolated ~~trolley conductors~~ hammer enthusiasts.

Once we’ve established some useful norms – a ‘best practice’ or two – I can’t imagine anyone [crowd cheers for CEO giving sieg heil salute].

Meanwhile, back at the hammer factory…

The AI projects currently mid-hype are being developed and sold by billionaires and VCs with companies explicitly pursuing surveillance, exploitation, and weaponry. They fired their ethics teams at the start of the cycle, and diverted our attention to a long-term sci-fi narrative about the coming age of machines – a “General Intelligence” that will soon “surpass” human ability.

Be it god or demon, only the high priests of venture capital can summon and tame such a powerful being for the good of humanity! It will only cost you all your labor (past and present), a reversal on climate policy, and a rather large fortune.

What does that mean? Hand-waving eugenics. We have no way to measure intelligence, no idea what it means to surpass humans, and no reason to believe that ‘intelligence’ might be exponential. Unless you rely on debunked race science, which many of these CEOs seem obsessed with. Now they are eager to jump on board an authoritarian movement that wants to exterminate trans and disabled people, fire black people, and deport all my immigrant friends and colleagues.

It’s wild to see major tech companies throwing out all pretense – giddy to abandon previous commitments around diversity, equity, inclusion, or accessibility. Run free, little mega-corps! Be the evil you’ve always dreamed for the world!

Surely this has nothing to do with their products, though.

But her use-cases

I know that ‘AI’ broadly has a long history, with ‘language models’ and ‘neural nets’ developing real use-cases in science and other fields. I’m not new here. But this background level of validity-by-association is used to prop up absolute garbage. The chatSlop we’re drowning in now is clearly designed and deployed for a different purpose.

Haven’t you heard? They’re building a digital god who will lead us to salvation, uploaded into the Virgo supercluster where we can expand the light of exponential profit throughout the cosmos! This is the actual narrative of several AI CEOs, despite being easy to dismiss as hyperbolic nonsense. Why won’t I focus on the actual use-cases?

Why won’t you focus on the actual documented harms? Somehow there is always room for people to dismiss concerns as “overblown and unfounded” past the first attempted coup, and well into an authoritarian power grab.

But the bigger issue is that they don’t have to be successful to be dangerous. Because along the way, these companies get to steal our work and sell it back to us, lower our wages, de-skill our field, bury us in slop, and mire us in algorithmic bureaucracy. If the long-term space god thing doesn’t work out, at least they can make a profit in the short-term.

The beliefs of these CEOs aren’t incidental to the AI product they’re selling us. These are not tools designed for us to benefit from, but tools designed to exploit us. To poison our access to jobs, and our access to information at the same time.

I said on social media that people believe what chatbots tell them, and I was laughed at. No one would trust a chatbot, silly! That same day, several different friends and colleagues quoted the output of an ‘AI’ to me in unrelated situations, as though quoting reliable facts.

So now a select few companies run by billionaires control much of the information that people see – “summarized” without sources. Meanwhile, there’s an oligarchy taking power in the US. Meanwhile, Grok’s entire purpose is to be ‘anti-woke’ and anti-trans, ChatGPT’s political views are shifting right, and Anthropic is partnering with Palantir.

Seems chill. I bet ‘agents’ are cool.

Wouldn’t want to eat a shrimp cocktail in the rain.

Tech workers seem to like tech actually

There’s a meme that goes around regularly, about the attitudes of tech enthusiasts vs tech workers:

Tech enthusiasts: My entire house is smart.

Tech workers: The only piece of technology in my house is a printer and I keep a gun next to it so I can shoot it if it makes a noise I don’t recognize.

—Pranay Pathole

I can relate to that sentiment, but many in our community seem unfazed or even excited about ‘AI’ and ‘agents’ and ‘codegen’ and all the rest of it. As far as I can tell, most of our industry is still on board with the project, even while protesting the changes in corporate politics, or occasionally complaining about the most obvious over-use. There are certainly a number of people raising alarms or expressing frustration, but we’re often dismissed as uninformed.

Based on every conference I’ve attended over the last year, I can absolutely say we’re a fringe minority. And it’s wearing me out. I don’t know how to participate in a community that so eagerly brushes aside the active and intentional/foundational harms of a technology. In return for what? Faster copypasta? Automation tools being rebranded as an “agentic” web? Assurance that we won’t be left behind?

This is your opportunity to get in at the ground floor!

I don’t know how to attend conferences full of gushing talks about the tools that were designed to negate me. That feels so absurd to say. I don’t have any interest in trying to reverse-engineer use-cases for it, or improve the flaws to make it “better”, or help sell it by bending it to new uses.

When eugenics-obsessed billionaires try to sell me a new toy, I don’t ask how many keystrokes it will save me at work. It’s impossible for me to discuss the utility of a thing when I fundamentally disagree with the purpose of it.

I don’t care how well their ‘AI’ works – or if you found a fancy fun use-case. It fucks me up watching peers treat this tech, from people who want to eradicate me, as a future worth considering. I don’t want any of this.

I don’t need an agent, I want to maintain my own agency.

I don’t know

I used to see the AI bubble and trans rights as distinct issues. I no longer do. The fascist movement in tech has truly metastasized, as evidenced by Elon Musk’s personal coup, his endless supply of techbro supporters, tech companies’ eagerness to axe DEI programs once Trump gave them an excuse, erasure of queer lives from tech products, etc.

To the extent that AI marketing is an attempt to enclose and commodify culture, and thus to concentrate political power, I see it as a kind of fascism.

—Cassandra Granade

I know the anti-DEI(A) sea-change in mega-corp C-suites doesn’t reflect the desires of my friends and colleagues who now work for (surprise!) AI arms dealers while just trying to do their best for open web standards. I don’t know what I would do in that situation. Labor and capital are often at odds. I imagine we all deserve a tech union. But I worry about how few people seem to see the need for it.

Every time I log on I feel like I’m being gaslit – asked to train my shitty replacement, and then step aside. The future is not women, I’m learning now. You can be sued in the US for intentionally hiring women. The future is actually inhuman word synthesizers.

Oh no, I was tricked by the genders and their sneaky ideology! Now I’m a crime! Haha, oops!

Work is already harder to find, and companies mostly want help slopping more slop into the slop machine. Because it will help users, you ask? Of course not! Because everyone now has slop on-tap, and needs to turn that flow of garbage into a cash-flow!

That’s the trouble with tribbles. Money, gain, profit!

What are we doing here? What am I doing here? How do I stay engaged in this field, and keep paying my bills, without feeling like a constant outsider – about to be dismissed from my career? I know I’m not the only one feeling this way, but the layering of threats and betrayals adds up. It feels so isolating.

It’s probably good to get this clarity

“Tech” was always a vague and hand-waving field – a way to side-step regulations while starting an unlicensed taxi company or hotel chain. That was never my interest.

But I got curious about the web, a weird little project built for sharing research between scientists. And I still think this web could be pretty cool, actually, if it wasn’t trapped in the clutches of big tech. If we can focus on the bits that make it special – the bits that make it unwieldy for capitalism:

Large companies find HTML & CSS frustrating “at scale” because the web is a fundamentally anti-capitalist mashup art experiment, designed to give consumers all the power.

—Me, before all this

What are we going to build now – those of us who still care about diversity, equity, inclusion, accessibility, and giving consumers the power? Can we still put our HTML & CSS to good use? Can we get back to building a web where people have agency instead of inhuman agents?

Where are you looking to put your energy next?

Addendum, 2025-02-16

I’ve been spending a lot of time in the pottery studio instead of keeping up with my RSS feed – so I wasn’t aware of the most recent AI discourse. Jeremy Keith does a great job putting my thoughts in context of a larger blogging conversation. I recommend reading that summary, and the excellent linked posts by Baldur Bjarnason and Michelle Barker.

I find it particularly troubling the way we talk about current harms of current technology as temporary and therefore insignificant – as though something being “solvable” means that it’s basically solved already, and we shouldn’t worry about it. The logic seems so obviously backwards to me. Solve the problems first, if they are so easily solvable.

This is often used to dismiss the current energy use of LLMs, but it’s also a common rhetorical trick of CEOs as they lay off their workforce. Don’t worry, your current unemployment could someday be solved with a universal basic income. Please ignore the harms of capitalism as we weaponize it against you – because socialism could eventually make it better!

And yet (surprise!) when the tech titans take over government institutions, they don’t seem to have much interest in improving social safety nets. It’s almost (almost) like their goal is to weaken the bargaining power of labor, and they don’t consider this a flaw in the first place.

In the marketing department’s imagined future of a new technology, all harms will somehow disappear (details TBD), but the potential benefits are endless and extraordinary. We could cure cancer! But are any of the AI companies trying to cure cancer, as a primary goal of their work? Well, no…

Step 2 may be actively harmful, and step 3 might be perpetually absent, but the profit described by step 4 is undeniable. Critics always lack the proper imagination.

Mia (web luddite)

@adactio Thank you, friend. I wasn’t even aware of all this context – I’ve been away from my RSS feed for a bit – so I have some reading to do.

www.gyford.com

A great collection of posts discussing the good/bad of AI/LLMs. I agree with everything Jeremy, Baldur, Michelle and Miriam say here.

Large Heydon Collider

@adactio Even if LLMs did lead to massive scientific breakthroughs (which is indeed total conjecture) they wouldn’t address extant power dynamics, meaning these breakthroughs would only benefit the few extremely rich people who should already be hanging out of guillotines. So there’s that.

Related posts

Filters

A web by humans, for humans.

Trust

How to destroy your greatest asset with AI.

InstAI

I object.

Continuous partial ick

Voigt-Kampff.

Creativity

Thinking about priorities at UX Brighton.

Related links

In the Future All Food Will Be Cooked in a Microwave, and if You Can’t Deal With That Then You Need to Get Out of the Kitchen – Random Thoughts

A microwave isn’t going to take your job; a chef who knows how to use a microwave is going to take your job.

Vibe code is legacy code | Val Town Blog

When you vibe code, you are incurring tech debt as fast as the LLM can spit it out. Which is why vibe coding is perfect for prototypes and throwaway projects: It’s only legacy code if you have to maintain it!

The worst possible situation is to have a non-programmer vibe code a large project that they intend to maintain. This would be the equivalent of giving a credit card to a child without first explaining the concept of debt.

If you don’t understand the code, your only recourse is to ask AI to fix it for you, which is like paying off credit card debt with another credit card.

A human review | Trys Mudford

Following on from my earlier link about AI etiquette, what Trys experienced here is utterly deflating:

I spent a couple of hours working through my notes and writing up a review before sending it to my manager, awaiting their equivalent review for me.

However, the review I received back was, quite simply, quintessential AI slop.

When slopagandists talk about “AI” boosting productivity, this is the kind of shite they’re talking about.

Butlerian Jihad

This page collects my blog posts on the topic of fighting off spam bots, search engine spiders and other non-humans wasting the precious resources we have on Earth.

It’s rude to show AI output to people | Alex Martsinovich

For the longest time, writing was more expensive than reading. If you encountered a body of written text, you could be sure that at the very least, a human spent some time writing it down. The text used to have an innate proof-of-thought, a basic token of humanity.

Now, AI has made text very, very, very cheap. … Any text can be AI slop. If you read it, you’re injured in this war. You engaged and replied – you’re as good as dead. The dead internet is not just dead it’s poisoned.

I think that realistically, our main weapon in this war is AI etiquette.

Previously on this day

3 years ago I wrote The first four speakers for UX London 2023

Drumroll please… Imran Afzal, Vimla Appadoo, Daniel Burka, and Mansi Gupta are all speaking!

5 years ago I wrote The moment after eclipse

Reading Brian Aldiss.

23 years ago I wrote Cre@teOnline Ce@ses Circulation

The editor of Cre@teOnline explains why the magazine is closing.

23 years ago I wrote Netscape DevEdge Redesigns As Standards Showcase

Eric Meyer and the gang have revamped the Netscape DevEdge site with Cascading Style Sheets.

24 years ago I wrote Back to school

I just noticed from my referrer logs that this site is listed in the "References" section for a course being taught at Penn State.

24 years ago I wrote Geek Love

Awww… isn’t that cute? Commander Taco proposed to his girlfriend on the front page of Slashdot.

24 years ago I wrote Move Over, BT: He Invented Links

It’s nice to see that most people seem to share my disgust at British Telecom’s spurious patent on hyperlinks which will probably get laughed out of court.

24 years ago I wrote Hefty bill with added insult shocks Telecom customer

This is not the sort of thing you want to see on your ‘phone bill: