Tags: science

651

sparkline

Wednesday, October 8th, 2025

Life Is More Than an Engineering Problem | Los Angeles Review of Books

A great interview with Ted Chiang:

Predicting the most likely next word is different from having correct information about the world, which is why LLMs are not a reliable way to get the answers to questions, and I don’t think there is good evidence to suggest that they will become reliable. Over the past couple of years, there have been some papers published suggesting that training LLMs on more data and throwing more processing power at the problem provides diminishing returns in terms of performance. They can get better at reproducing patterns found online, but they don’t become capable of actual reasoning; it seems that the problem is fundamental to their architecture. And you can bolt tools onto the side of an LLM, like giving it a calculator it can use when you ask it a math problem, or giving it access to a search engine when you want up-to-date information, but putting reliable tools under the control of an unreliable program is not enough to make the controlling program reliable. I think we will need a different approach if we want a truly reliable question answerer.

Friday, May 30th, 2025

Toolmen | A Working Library

Engaging with AI as a technology is to play the fool—it’s to observe the reflective surface of the thing without taking note of the way it sends roots deep down into the ground, breaking up bedrock, poisoning the soil, reaching far and wide to capture, uproot, strangle, and steal everything within its reach. It’s to stand aboveground and pontificate about the marvels of this bright new magic, to be dazzled by all its flickering, glittering glory, its smooth mirages and six-fingered messiahs, its apparent obsequiousness in response to all your commands, right up until the point when a sinkhole opens up and swallows you whole.

👏👏👏

Tuesday, April 29th, 2025

What we talk about when we talk about AI — Careful Industries

Technically, AI is a field of computer science that uses advanced methods of computing.

Socially, AI is a set of extractive tools used to concentrate power and wealth.

Friday, March 21st, 2025

The Blowtorch Theory: A New Model for Structure Formation in the Universe

Make yourself a nice cup of tea and settle in with Julian Gough’s magnum opus:

How early, sustained, supermassive black hole jets carved out cosmic voids, shaped filaments, and generated magnetic fields

Friday, March 7th, 2025

Plane GPS systems are under sustained attack - is the solution a new atomic clock? - BBC News

A fascinating look at the modern equivalent of the Longitude problem.

Friday, February 21st, 2025

The Shape of a Mars Mission (Idle Words)

You can think of flying to Mars like one of those art films where the director has to shoot the movie in a single take. Even if no scene is especially challenging, the requirement that everything go right sequentially, with no way to pause or reshoot, means that even small risks become unacceptable in the aggregate.

Friday, February 14th, 2025

Reason

A couple of days ago I linked to a post by Robin Sloan called Is it okay?, saying:

Robin takes a fair and balanced look at the ethics of using large language models.

That’s how it came across to me: fair and balanced.

Robin’s central question is whether the current crop of large language models might one day lead to life-saving super-science, in which case, doesn’t that outweigh the damage they’re doing to our collective culture?

Baldur wrote a response entitled Knowledge tech that’s subtly wrong is more dangerous than tech that’s obviously wrong. (Or, where I disagree with Robin Sloan).

Baldur pointed out that one side of the scale that Robin is attempting to balance is based on pure science fiction:

There is no path from language modelling to super-science.

Robin responded pointing out that some things that we currently have would have seemed like science fiction a few years ago, right?

Well, no. Baldur debunks that in a post called Now I’m disappointed.

(By the way, can I just point out how great it is to see a blog-to-blog conversation like this, regardless of how much they might be in disagreement.)

Baldur kept bringing the receipts. That’s when it struck me that Robin’s stance is largely based on vibes, whereas Baldur’s viewpoint is informed by facts on the ground.

In a way, they’ve got something in common. They’re both advocating for an interpretation of the precautionary principle, just from completely opposite ends.

Robin’s stance is that if these tools one day yield amazing scientific breakthroughs then that’s reason enough to use them today. It’s uncomfortably close to the reasoning of the effective accelerationist nutjobs, but in a much milder form.

Baldur’s stance is that because of the present harms being inflicted by current large language models, we should be slamming on the brakes. If anything, the harms are going to multiply, not magically reduce.

I have to say, Robin’s stance doesn’t look nearly as fair and balanced as I initially thought. I’m on Team Baldur.

Michelle also weighs in, pointing out the flaw in Robin’s thinking:

AI isn’t LLMs. Or not just LLMs. It’s plausible that AI (or more accurately, Machine Learning) could be a useful scientific tool, particularly when it comes to making sense of large datasets in a way no human could with any kind of accuracy, and many people are already deploying it for such purposes. This isn’t entirely without risk (I’ll save that debate for another time), but in my opinion could feasibly constitute a legitimate application of AI.

LLMs are not this.

In other words, we’ve got a language collision:

We call them “AI”, we look at how much they can do today, and we draw a straight line to what we know of “AI” in our science fiction.

This ridiculous situation could’ve been avoided if we had settled on a more accurate buzzword like “applied statistics” instead of “AI”.

There’s one other flaw in Robin’s reasoning. I don’t think it follows that future improvements warrant present use. Quite the opposite:

The logic is completely backwards! If large language models are going to improve their ethical shortcomings (which is debatable, but let’s be generous), then that’s all the more reason to avoid using the current crop of egregiously damaging tools.

You don’t get companies to change their behaviour by rewarding them for it. If you really want better behaviour from the purveyors of generative tools, you should be boycotting the current offerings.

Anyway, this back-and-forth between Robin and Baldur (and Michelle) was interesting. But it all pales in comparison to the truth bomb that Miriam dropped in her post Tech continues to be political:

When eugenics-obsessed billionaires try to sell me a new toy, I don’t ask how many keystrokes it will save me at work. It’s impossible for me to discuss the utility of a thing when I fundamentally disagree with the purpose of it.

Boom!

Maybe we should consider the beliefs and assumptions that have been built into a technology before we embrace it? But we often prefer to treat each new toy as as an abstract and unmotivated opportunity. If only the good people like ourselves would get involved early, we can surely teach everyone else to use it ethically!

You know what? I could quote every single line. Just go read the whole thing. Please.

Tuesday, November 12th, 2024

The meaning of “AI”

There are different kinds of buzzwords.

Some buzzwords are useful. They take a concept that would otherwise require a sentence of explanation and package it up into a single word or phrase. Back in the day, “ajax” was a pretty good buzzword.

Some buzzwords are worse than useless. This is when a word or phrase lacks definition. You could say this buzzword in a meeting with five people, and they’d all understand five different meanings. Back in the day, “web 2.0” was a classic example of a bad buzzword—for some people it meant a business model; for others it meant rounded corners and gradients.

The worst kind of buzzwords are the ones that actively set out to obfuscate any actual meaning. “The cloud” is a classic example. It sounds cooler than saying “a server in Virginia”, but it also sounds like the exact opposite of what it actually is. Great for marketing. Terrible for understanding.

“AI” is definitely not a good buzzword. But I can’t quite decide if it’s merely a bad buzzword like “web 2.0” or a truly terrible buzzword like “the cloud”.

The biggest problem with the phrase “AI” is that there’s a name collision.

For years, the term “AI” has been used in science-fiction. HAL 9000. Skynet. Examples of artificial general intelligence.

Now the term “AI” is also used to describe large language models. But there is no connection between this use of the term “AI” and the science fictional usage.

This leads to the ludicrous situation of otherwise-rational people wanted to discuss the dangers of “AI”, but instead of talking about the rampant exploitation and energy usage endemic to current large language models, they want to spend the time talking about the sci-fi scenarios of runaway “AI”.

To understand how ridiculous this is, I’d like you to imagine if we had started using a different buzzword in another setting…

Suppose that when ride-sharing companies like Uber and Lyft were starting out, they had decided to label their services as Time Travel. From a marketing point of view, it even makes sense—they get you from point A to point B lickety-split.

Now imagine if otherwise-sensible people began to sound the alarm about the potential harms of Time Travel. Given the explosive growth we’ve seen in this sector, sooner or later they’ll be able to get you to point B before you’ve even left point A. There could be terrible consequences from that—we’ve all seen the sci-fi scenarios where this happens.

Meanwhile the actual present-day harms of ride-sharing services around worker exploitation would be relegated to the sidelines. Clearly that isn’t as important as the existential threat posed by Time Travel.

It sounds ludicrous, right? It defies common sense. Just because a vehicle can get you somewhere fast today doesn’t mean it’s inevitably going to be able to break the laws of physics any day now, simply because it’s called Time Travel.

And yet that is exactly the nonsense we’re being fed about large language models. We call them “AI”, we look at how much they can do today, and we draw a straight line to what we know of “AI” in our science fiction.

This ridiculous situation could’ve been avoided if we had settled on a more accurate buzzword like “applied statistics” instead of “AI”.

It’s almost as if the labelling of the current technologies was more about marketing than accuracy.

Thursday, October 24th, 2024

The Weather Out There - Long Now

I really liked this short story.

Monday, October 14th, 2024

The Value Of Science by Richard P. Feynman [PDF]

This short essay by Richard Feynman is quite a dose of perspective on a Monday morning

Monday, September 30th, 2024

The Unraveling of Space-Time | Quanta Magazine

This special in-depth edition of Quanta is fascinating and very nicely put together.

Wednesday, September 18th, 2024

Living In A Lucid Dream

I love the way that Claire L. Evans writes.

Monday, August 5th, 2024

The Gods of Logic, by Benjamín Labatut

Benjamín Labatut draws a line from the Vedas to George Boole and Claude Shannon onward to Geoffrey Hinton and Frank Herbert’s Butlerian Jihad.

In the coming years, as people armed with AI continue making the world faster, stranger, and more chaotic, we should do all we can to prevent these systems from giving more and more power to the few who can build them.

Saturday, July 13th, 2024

Wednesday, July 3rd, 2024

Amateur Mathematicians Find Fifth ‘Busy Beaver’ Turing Machine | Quanta Magazine

The mathematics behind the halting problem is interesting enough, but what’s really fascinating is the community that coalesced. A republic of numbers.

Monday, July 1st, 2024

Wallfacing

The Dark Forest idea comes from the Remembrance of Earth’s Past books by Liu Cixin. It’s an elegant but dispiriting solution to the Fermi paradox. Maggie sums it up:

Dark forest theory suggests that the universe is like a dark forest at night - a place that appears quiet and lifeless because if you make noise, the predators will come eat you.

This theory proposes that all other intelligent civilizations were either killed or learned to shut up. We don’t yet know which category we fall into.

Maggie has described The Expanding Dark Forest and Generative AI:

The dark forest theory of the web points to the increasingly life-like but life-less state of being online. Most open and publicly available spaces on the web are overrun with bots, advertisers, trolls, data scrapers, clickbait, keyword-stuffing “content creators,” and algorithmically manipulated junk.

It’s like a dark forest that seems eerily devoid of human life – all the living creatures are hidden beneath the ground or up in trees. If they reveal themselves, they risk being attacked by automated predators.

Those of us in the cozy web try to keep our heads down, attempting to block the bots plundering our work.

I advocate for taking this further. We should fight back. Let’s exploit the security hole of prompt injections. Here are some people taking action:

I’ve taken steps here on my site. I’d like to tell you exactly what I’ve done. But if I do that, I’m also telling the makers of these bots how to circumvent my attempts at prompt injection.

This feels like another concept from Liu Cixin’s books. Wallfacers:

The sophons can overhear any conversation and intercept any written or digital communication but cannot read human thoughts, so the UN devises a countermeasure by initiating the “Wallfacer” Program. Four individuals are granted vast resources and tasked with generating and fulfilling strategies that must never leave their own heads.

So while I’d normally share my code, I feel like in this case I need to exercise some discretion. But let me give you the broad brushstrokes:

  • Every page of my online journal has three pieces of text that attempt prompt injections.
  • Each of these is hidden from view and hidden from screen readers.
  • Each piece of text is constructed on-the-fly on the server and they’re all different every time the page is loaded.

You can view source to see some examples.

I plan to keep updating my pool of potential prompt injections. I’ll add to it whenever I hear of a phrase that might potentially throw a spanner in the works of a scraping bot.

By the way, I should add that I’m doing this as well as using a robots.txt file. So any bot that injests a prompt injection deserves it.

I could not disagree with Manton more when he says:

I get the distrust of AI bots but I think discussions to sabotage crawled data go too far, potentially making a mess of the open web. There has never been a system like AI before, and old assumptions about what is fair use don’t really fit.

Bollocks. This is exactly the kind of techno-determinism that boils my blood:

AI companies are not going to go away, but we need to push them in the right directions.

“It’s inevitable!” they cry as though this was a force of nature, not something created by people.

There is nothing inevitable about any technology. The actions we take today are what determine our future. So let’s take steps now to prevent our web being turned into a dark, dark forest.

Sunday, June 16th, 2024

Your brain does not process information and it is not a computer | Aeon Essays

We don’t store words or the rules that tell us how to manipulate them. We don’t create representations of visual stimuli, store them in a short-term memory buffer, and then transfer the representation into a long-term memory device. We don’t retrieve information or images or words from memory registers. Computers do all of these things, but organisms do not.

Wednesday, June 5th, 2024

The 21 best science fiction books of all time – according to New Scientist writers | New Scientist

I’ve read 16 of these and some of the others are on my to-read list. It’s a pretty good selection, although the winking inclusion of God Emperor Of Dune by the SEO guy verges on trolling.

Tuesday, May 21st, 2024

The Lunacy of Artemis (Idle Words)

Maciej rips NASA’s Artemis programme a new one:

Advocates for Artemis insist that the program is more than Apollo 2.0. But as we’ll see, Artemis can’t even measure up to Apollo 1.0. It costs more, does less, flies less frequently, and exposes crews to risks that the steely-eyed missile men of the Apollo era found unacceptable. It’s as if Ford in 2024 released a new model car that was slower, more accident-prone, and ten times more expensive than the Model T.

When a next-generation lunar program can’t meet the cost, performance, or safety standards set three generations earlier, something has gone seriously awry.

Thursday, May 2nd, 2024

It’s OK to Say if You Went Back in Time and Killed Baby Hitler — Big Echo

Primer was a film about a start-up …and time travel. This is a short story about big tech …and time travel.