What price?

I’ve noticed a really strange justification from people when I ask them about their use of generative tools that use large language models (colloquially and inaccurately labelled as artificial intelligence).

I’ll point out that the training data requires the wholesale harvesting of creative works without compensation. I’ll also point out the ludicrously profligate energy use required not just for the training, but for the subsequent queries.

And here’s the thing: people will acknowledge those harms but they will justify their actions by saying “these things will get better!”

First of all, there’s no evidence to back that up.

If anything, as the well gets poisoned by their own outputs, large language models may well end up eating their own slop and getting their own version of mad cow disease. So this might be as good as they’re ever going to get.

And when it comes to energy usage, all the signals from NVIDIA, OpenAI, and others are that power usage is going to increase, not decrease.

But secondly, what the hell kind of logic is that?

It’s like saying “It’s okay for me to drive my gas-guzzling SUV now, because in the future I’ll be driving an electric vehicle.”

The logic is completely backwards! If large language models are going to improve their ethical shortcomings (which is debatable, but let’s be generous), then that’s all the more reason to avoid using the current crop of egregiously damaging tools.

You don’t get companies to change their behaviour by rewarding them for it. If you really want better behaviour from the purveyors of generative tools, you should be boycotting the current offerings.

I suspect that most people know full well that the “they’ll get better!” defence doesn’t hold water. But you can convince yourself of anything when everyone around you is telling you that this is the future, baby, and you’d better get on board or you’ll be left behind.

Baldur reminds us that this is how people talked about asbestos:

Every time you had an industry campaign against an asbestos ban, they used the same rhetoric. They focused on the potential benefits – cheaper spare parts for cars, cheaper water purification – and in doing so implicitly assumed that deaths and destroyed lives were a low price to pay.

This is the same strategy that’s being used by those who today talk about finding productive uses for generative models without even so much as gesturing towards mitigating or preventing the societal or environmental harms.

It reminds me of the classic Ursula K. Le Guin short story The Ones Who Walk Away from Omelas, which depicts:

…the utopian city of Omelas, whose prosperity depends on the perpetual misery of a single child.

Once citizens are old enough to know the truth, most, though initially shocked and disgusted, ultimately acquiesce to this one injustice that secures the happiness of the rest of the city.

It turns out that most people will blithely accept injustice and suffering not for a utopia, but just for some bland hallucinated slop.

Don’t get me wrong: I’m not saying large language models don’t have their uses. I love seeing what Simon and Matt are doing when it comes to coding. And large language models can be great for transforming content from one format to another, like transcribing speech into text. But the balance sheet just doesn’t add up.

As Molly White put it in AI isn’t useless. But is it worth it?:

Even as someone who has used them and found them helpful, it’s remarkable to see the gap between what they can do and what their promoters promise they will someday be able to do. The benefits, though extant, seem to pale in comparison to the costs.


Responses

Fifi Lamoura

@baldur A lot of people don’t really care about their own integrity or perhaps never really had the chance to develop any in the first place. Being a consumer and not a maker is part of this I suspect, which is why they can so easily be sold the illusion of being creative. Also, some people (especially a lot of men who never got taught to care or share) are very entitled and selfish. Toxic masculinity isn’t all guns and rolling coal, it’s also smug patriarchal entitlement to exploit and disregard the consequences (you see this among men who think they’re more rational than everyone else, they are often HIGHLY self unaware people who are convinced they’re the good guy because their bigotry is pseudo-polite).

1337 $#!+ I did that

@baldur it is not going to get better because it is already pretty clear that copyright protections don’t apply to the training scenario. So you can 1) advocate for Congress to do something about this (I support this but am not sanguine) or 2) move on to new forms. Maybe text, image, audio, moving image, even code - all for which there are copious training sets (do you have a music library?) are all old media and it is time for creatives to move on. (As an artist, I am not afraid!)

1337 $#!+ I did that

@baldur what do I mean about moving on? Things like: live music and performance, site specificity, making, deep hang outs, going outside, getting a kiln and playing, and for me, distinguishing between artists who use models trained by others from artists who gather our own data and train our own models. #artisinalAI and collaborating with #AI (our new strangers… I actually welcome them.)

Maybe I will be among the early to bring a project with my own data and training:-) ?


Fifi Lamoura

@stalbaum @baldur The main reason not to be afraid as an artist is that so-called AI is an illustration machine not an art one and art hasn’t been about “realistic representation” since cameras were invented. There’ll be a couple of people (and are already) making bespoke AI artworks but I highly suspect that will also fizzle out to a large degree as novelty wanes (as is often the case with technological tools). This isn’t really that interesting a space in terms of technological/digital artworks or aesthetics* in my opinion but that doesn’t mean that some interesting work can’t be made (generally by subverting and breaking the technology).

*Other than talking about the aesthetics of fascism and its relationship to fantasy illustration and hyperrealism.

Rian

@adactio Hmm… so mad LLM disease… MLLM disease has nice symmetry… but “Mad Slop Disease” has a certain pungent ring to it

# Posted by Rian on Tuesday, September 10th, 2024 at 5:50pm

Thomas Vander Wal

@adactio I really like this piece. A lot. I use Claude most often for coding assistance for my personal projects, trying to get my data analytics chops in better shape again. Anthropic says the right things about their models being far less computationally expensive, and sustainability is one of the key factors in their LLM product decisions. Claude warns users about the costs and efficient practices.

But, I’ve watched teams running multiples of these tools poorly and with poor results.

Thomas Vander Wal

@adactio When this piece popped up yesterday I thought it was going to be about web development and poor practices with crazy sustainability costs associated with them. But, no…

The past week or two I’ve been digging for data and finding sites with full datasets in my browser in >2MB JSON strings. But, >2MB in front end frameworks (multiples for one page) for layout and light design. Then looking at >3MB in ads.

This is utterly crazy.

adactio.com

A couple of days ago I linked to a post by Robin Sloan called Is it okay?, saying:

Robin takes a fair and balanced look at the ethics of using large language models.

That’s how it came across to me: fair and balanced.

Robin’s central question is whether the current crop of large language models might one day lead to life-saving super-science, in which case, doesn’t that outweigh the damage they’re doing to our collective culture?

Baldur wrote a response entitled Knowledge tech that’s subtly wrong is more dangerous than tech that’s obviously wrong. (Or, where I disagree with Robin Sloan).

Baldur pointed out that one side of the scale that Robin is attempting to balance is based on pure science fiction:

There is no path from language modelling to super-science.

Robin responded pointing out that some things that we currently have would have seemed like science fiction a few years ago, right?

Well, no. Baldur debunks that in a post called Now I’m disappointed.

(By the way, can I just point out how great it is to see a blog-to-blog conversation like this, regardless of how much they might disagree.)

Baldur kept bringing the receipts. That’s when it struck me that Robin’s stance is largely based on vibes, whereas Baldur’s viewpoint is informed by facts on the ground.

In a way, they’ve got something in common. They’re both advocating for an interpretation of the precautionary principle, just from completely opposite ends.

Robin’s stance is that if these tools one day yield amazing scientific breakthroughs then that’s reason enough to use them today. It’s uncomfortably close to the reasoning of the effective accelerationist nutjobs, but in a much milder form.

Baldur’s stance is that because of the present harms being inflicted by current large language models, we should be slamming on the brakes. If anything, the harms are going to multiply, not magically reduce.

I have to say, Robin’s stance doesn’t look nearly as fair and balanced as I initially thought. I’m on Team Baldur.

Michelle also weighs in, pointing out the flaw in Robin’s thinking:

AI isn’t LLMs. Or not just LLMs. It’s plausible that AI (or more accurately, Machine Learning) could be a useful scientific tool, particularly when it comes to making sense of large datasets in a way no human could with any kind of accuracy, and many people are already deploying it for such purposes. This isn’t entirely without risk (I’ll save that debate for another time), but in my opinion could feasibly constitute a legitimate application of AI.

LLMs are not this.

In other words, we’ve got a language collision:

We call them “AI”, we look at how much they can do today, and we draw a straight line to what we know of “AI” in our science fiction.

This ridiculous situation could’ve been avoided if we had settled on a more accurate buzzword like “applied statistics” instead of “AI”.

There’s one other flaw in Robin’s reasoning. I don’t think it follows that future improvements warrant present use. Quite the opposite:

The logic is completely backwards! If large language models are going to improve their ethical shortcomings (which is debatable, but let’s be generous), then that’s all the more reason to avoid using the current crop of egregiously damaging tools.

You don’t get companies to change their behaviour by rewarding them for it. If you really want better behaviour from the purveyors of generative tools, you should be boycotting the current offerings.

Anyway, this back-and-forth between Robin and Baldur (and Michelle) was interesting. But it all pales in comparison to the truth bomb that Miriam dropped in her post Tech continues to be political:

When eugenics-obsessed billionaires try to sell me a new toy, I don’t ask how many keystrokes it will save me at work. It’s impossible for me to discuss the utility of a thing when I fundamentally disagree with the purpose of it.

Boom!

Maybe we should consider the beliefs and assumptions that have been built into a technology before we embrace it? But we often prefer to treat each new toy as an abstract and unmotivated opportunity. If only the good people like ourselves would get involved early, we can surely teach everyone else to use it ethically!

You know what? I could quote every single line. Just go read the whole thing. Please.

# Friday, February 14th, 2025 at 5:07pm



