Tags: automation

Tuesday, April 16th, 2024

The dancing bear, part 1

I don’t believe the greatest societal risk is that a sentient artificial intelligence is going to kill us all. I think our undoing is simpler than that. I think that most of our lives are going to be shorter and more miserable than they could have been, thanks to the unchecked greed that’s fed this rally. (Okay, this and crypto.)

I like this analogy:

AI is like a dancing bear. This was a profitable sideshow dating back to the Middle Ages: all it takes is a bear, some time, and a complete lack of ethics. Today, our carnival barkers are the AI startups and their CEOs. They’re trying to convince you that if they can show you a bear that can dance, then you’ll believe it can draw, write coherent sentences, and help you with your app’s marketing strategy.

Part of the curiosity of a dancing bear is the implicit risk that it’ll remember at some point that it’s a bear, and maul whoever is nearby. The fear is a selling point. Likewise, some AI vendors have even learned that the product is more compelling if it’s perceived as dangerous. It’s common for AI startup execs to say things like, “of course there’s a real risk that an army of dancing bears will eventually kill us all. Anyway, here’s what we’re working on…” How brave of them.

Tuesday, March 19th, 2024

The growing backlash against AI

You are not creative and then create something; you become creative by working on something. Creativity is a byproduct of work.

In this way “AI” is deeply dehumanizing: Making the spaces and opportunities for people to grow and be human smaller and smaller. Applying a straitjacket of past mediocrity to our minds and spirits.

And that is what is being booed: The salespeople of mediocrity who’ve made it their mission to speak lies from power. The lie that only tech can and will save us. The lie that a bit of statistics and colonial, mostly white, mostly western data is gonna create a brilliant future. The lie that we have no choice, no alternatives.

Sunday, March 3rd, 2024

On Nielsen’s ideas about generative UI for resolving accessibility

Per Axbom quite rightly tears Jakob Nielsen a new one.

I particularly like his suggestion that you re-read Nielsen’s argument but replace the word “accessibility” with “usability”:

Assessed this way, the usability movement has been a miserable failure.

Usability is too expensive for most companies to be able to afford everything that’s needed with the current, clumsy implementation.

Thursday, February 8th, 2024

How independent writers are turning to AI

I missed this article when it was first published, but I have to say this is some truly web-native art direction: bravo!

Thursday, January 25th, 2024

MastoFeed - Send your RSS Feeds to Mastodon

This looks like a handy RSS-to-Mastodon service.
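
The core of such a service is simple enough: read a feed, post new items as statuses. Here’s a rough sketch in Python of the general idea, with placeholder values for the feed, instance, and access token; MastoFeed’s actual implementation may well differ:

```python
# A minimal sketch of an RSS-to-Mastodon bridge (not MastoFeed's code).
import feedparser  # third-party: pip install feedparser
import requests    # third-party: pip install requests

FEED_URL = "https://example.com/feed.xml"  # placeholder feed
INSTANCE = "https://mastodon.example"      # placeholder instance
ACCESS_TOKEN = "..."                       # placeholder token

feed = feedparser.parse(FEED_URL)
for entry in feed.entries[:1]:  # just the newest item
    status = f"{entry.title}\n\n{entry.link}"
    requests.post(
        f"{INSTANCE}/api/v1/statuses",
        data={"status": status},
        headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
        timeout=30,
    )
```

A real service would also need to remember which items it has already posted, so it doesn’t toot duplicates on every run.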

Saturday, January 13th, 2024

Why Would I Buy This Useless, Evil Thing? - Aftermath

To be honest, you can skip the “review”, but I just had to link to this for the perfection of the opening three sentences, which sum up my feelings exactly:

I resent AI. Not AI itself–that’s just code, despite what tech guys with flashlights under their chins tell you. I resent the imposition, the idea that since LLMs exist, it follows that they should exist in every facet in my life.

Sunday, January 7th, 2024

Clippy returned (as an unnecessary “AI”) | hidde.blog

Personally, I want software to push me not towards reusing what exists, but away from that (and that’s harder). Whether I’m producing a plan or hefty biography, push me towards thinking critically about the work, rather than offering a quick way out.

Wednesday, January 3rd, 2024

LLMs and Programming in the first days of 2024

What strikes me about my personal experience with LLMs is that I have learned precisely when to use them and when their use would only slow me down. I have also learned that LLMs are a bit like Wikipedia and all the video courses scattered on YouTube: they help those with the will, ability, and discipline, but they are of marginal benefit to those who have fallen behind. I fear that at least initially, they will only benefit those who already have an advantage.

Tuesday, December 19th, 2023

Don’t Let the Robots Get You Down

If you do work that is hard, kind of a grind sometimes, and involves lots of little and small decisions, I think you’re pretty safe for a while. As a computer person who has spent a lot of this year messing with AI, and someone who has kept an eye on AI promises for decades, the things they’re saying about the future seem really far away. There’s tons of progress ahead, but it’s not a mistake to get a mortgage or plan a vacation.

Wednesday, November 29th, 2023

Losing the imitation game

The hard part of programming is building and maintaining a useful mental model of a complex system. The easy part is writing code. They’re positioning this tool as a universal solution, but it’s only capable of doing the easy part. And even then, it’s not able to do that part reliably. Human engineers will still have to evaluate and review the code that an AI writes. But they’ll now have to do it without the benefit of having anyone who understands it. No one can explain it. No one can explain what they were thinking when they wrote it. No one can explain what they expect it to do. Every choice made in writing software is a choice not to do things in a different way. And there will be no one who can explain why they made this choice, and not those others. In part because it wasn’t even a decision that was made. It was a probability that was realized.

This post also has a really good explanation of how large language models work.

There may be real, productive uses for these kinds of tools. There may be ways to build and deploy them ethically and sustainably. But that’s not the situation with the instances we have. AI, as it’s been built today, is a tool to sell out our collective futures in order to enrich already wealthy people. They like to frame it as being akin to nuclear science. But we should really see it as being more like fossil fuels.

Tuesday, November 14th, 2023

Benjamin Parry~ Writing ~ Marking the homework of a twelve year old ~ @benjaminparry

Don’t get me wrong, there are some features under the mislabeled bracket of AI that have made a huge impact and improvement to my process. Audio transcription has been an absolute game-changer for research analysis, reimbursing me hours of time to focus on the deep thinking work. This is a perfect example of a problem seeking a solution, not the other way around. The latest wave of features feels a lot like because we can rather than because we should.

A Coder Considers the Waning Days of the Craft | The New Yorker

GPT-4 is impressive, but a layperson can’t wield it the way a programmer can. I still feel secure in my profession. In fact, I feel somewhat more secure than before. As software gets easier to make, it’ll proliferate; programmers will be tasked with its design, its configuration, and its maintenance. And though I’ve always found the fiddly parts of programming the most calming, and the most essential, I’m not especially good at them. I’ve failed many classic coding interview tests of the kind you find at Big Tech companies. The thing I’m relatively good at is knowing what’s worth building, what users like, how to communicate both technically and humanely. A friend of mine has called this A.I. moment “the revenge of the so-so programmer.” As coding per se begins to matter less, maybe softer skills will shine.

Sunday, November 12th, 2023

CSS { In Real Life } | Stop Using AI-Generated Images

I have yet to meet anyone who wants to hang AI art on their walls (although I fully expect to see it in hotel chains).

Monday, October 23rd, 2023

The map-reduce is not the territory

Unlike many people, I’m not particularly worried about AI replacing peoples’ jobs, although employers will certainly try and use it to reduce their headcount. I’m more worried about it transforming jobs into roles without agency or space to be human. Imagine a world where performance reviews are conducted by software; where deviance from the norm is flagged electronically, and where hiring and firing can be performed without input from a human. Imagine models that can predict when unionization is about to occur in a workplace. All of this exists today, but in relatively experimental form. Capital needs predictability and scale; for most jobs, the incentives are not in favor of human diversity and intuition.

Tuesday, October 17th, 2023

Decision time

I’ve always associated good design with thoughtfulness. Like, I should be able to point to any element in an interface and the designer should be able to tell me the reasons it’s there. Those reasons may be rooted in user needs or aesthetics or some other consideration, but the point is that there’s a justification for it. Justify every pixel!

But I’ve come to realise that this is a bit reductionist. Now when I point at an interface element, I still expect the designer to be able to justify its inclusion, but I’d also like to know the trade-offs that were made.

Suppose there’s a large hero image. I’m sure the designer would have no problem justifying its inclusion on the basis of impact and the emotional heft it delivers. But did they also understand the potential downsides? Were they aware of the performance implications of including a large image?

I hope the answer to both questions is yes. They understood the costs, but they decided that, on balance, the positives outweighed the negatives.

When it comes to the positives, universal principles of design often apply. Colour theory, typography, proximity, and so on. But the downsides tend to be specific to the medium that the design is delivered in.

Let’s say you’re designing for print. You want to include an extra typeface just for footnotes. No problem. There isn’t really a downside. In print, you can use all the typefaces you want. But if this were for the web, then the calculation would be different. Every extra typeface comes with a performance penalty. A decision that might be justified in one medium might not work in another medium.
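
To put a number on that penalty, you could weigh the candidate font files in bytes before committing to them. Here’s a quick sketch in Python; the URLs are placeholders:

```python
# Ask the server how big each font file is: a HEAD request fetches
# the response headers without downloading the body.
import requests  # third-party: pip install requests

fonts = {
    "body text": "https://example.com/fonts/body.woff2",       # placeholder
    "footnotes": "https://example.com/fonts/footnotes.woff2",  # the extra face
}

for role, url in fonts.items():
    # requests doesn't follow redirects on HEAD by default, so opt in.
    response = requests.head(url, allow_redirects=True, timeout=10)
    size_kb = int(response.headers.get("Content-Length", 0)) / 1024
    print(f"{role}: {size_kb:.1f} kB")
```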

It works both ways; on the web you can use all the colours you want, without incurring any penalties, but in print—depending on the process you’re using—you might have to weigh up that decision very differently.

From this perspective, every design decision is like a balance sheet. A good web designer understands the benefits and the costs behind each decision they make.

It’s a similar story when it comes to web development. Heck, we even have the term “tech debt” to describe decisions that we know aren’t for the best in the long term.

In fact, I’d say that consideration of the long-term effects is something that should play a bigger part in technical decisions.

When we’re weighing up the pros and cons of using a particular tool, we have a tendency to think in the here and now. How might this help me right now? How might this hinder me right now?

But often a decision that delivers short-term gain may well end up delivering long-term pain.

Alexander Petros describes this succinctly:

Reopen a node repository after 3 months and you’ll find that your project is mired in a flurry of security warnings, backwards-incompatible library “upgrades,” and a frontend framework whose cultural peak was the exact moment you started the project and is now widely considered tech debt.

When I wrote about making the Patterns Day website I described my process as doing it “the long hard stupid way”—a term that Frank coined in a talk he gave a few years back. But perhaps my hands-on approach is only long, hard and stupid in the short term. With each passing year, the codebase will retain a degree of readability and accessibility that I would’ve sacrificed had I depended on automated build processes.

Robin Berjon puts this into the historical perspective of Taylorism and Luddism:

Whenever something is automated, you lose some control over it. Sometimes that loss of control improves your life because exerting control is work, and sometimes it worsens your life because it reduces your autonomy.

Or as Marshall McLuhan put it:

Every extension is also an amputation.

…which is fine as long as the benefits of the extension outweigh the costs of the amputation. My worry is that, when it comes to evaluating technology for building on the web, we aren’t considering the longer-term costs.

Maintenance matters. With the passing of time, maintenance matters more and more.

Maybe we avoid thinking about the long-term costs because it would lead to decision paralysis. That’s understandable. But I take comfort from some words of wisdom on the web from the 1990s. Tim Berners-Lee’s style guide for hypertext:

Because hypertext is potentially unconstrained you are a little daunted. Do not be. You can write a document as simply as you like. In many ways, the simpler the better.

Saturday, October 14th, 2023

I Just Really Don’t Like Automated Phone Systems – cabel.com

Notice how this crappy assemblage of if/else statements repeatedly claims to be “artificial intelligence”—that term really has lost all meaning now.

Wednesday, August 9th, 2023

Automation

I just described prototype code as code to be thrown away. On that topic…

I’ve been observing how people are programming with large language models and I’ve seen a few trends.

The first thing that just about everyone agrees on is that the code produced by a generative tool is not fit for public consumption. At least not straight away. It definitely needs to be checked and tested. If you enjoy debugging and doing code reviews, this might be right up your street.

The other option is to not use these tools for production code at all. Instead use them for throwaway code. That could be prototyping. But it could also be the code for those annoying admin tasks that you don’t do very often.

Take content migration. Say you need to grab a data dump, do some operations on the data to transform it in some way, and then pipe the results into a new content management system.

That’s almost certainly something you’d want to automate with bespoke code. Once the content migration is done, the code can be thrown away.
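
To make that concrete, here’s the rough shape such a throwaway script might take, sketched in Python. The export format, the endpoint, and the field names are all invented for illustration:

```python
# A throwaway content-migration script: read an export, reshape each
# record, and push it into the new system's (hypothetical) REST API.
import json

import requests  # third-party: pip install requests

NEW_CMS_API = "https://new-cms.example.com/api/posts"  # hypothetical endpoint
API_TOKEN = "..."  # hypothetical token

with open("export.json") as f:
    old_posts = json.load(f)

for post in old_posts:
    # Map the old schema onto whatever the new CMS expects.
    payload = {
        "title": post["heading"],
        "body": post["content_html"],
        "published": post["date"],
    }
    response = requests.post(
        NEW_CMS_API,
        json=payload,
        headers={"Authorization": f"Bearer {API_TOKEN}"},
        timeout=30,
    )
    response.raise_for_status()
```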

Read Matt’s account of coding up his Braggoscope. The code needed to spider a thousand web pages, extract data from those pages, find similarities, and output the newly-structured data in a different format.

I’ve noticed that these are just the kind of tasks that large language models are pretty good at. In effect you’re training the tool on your own very specific data and getting it to do your drudge work for you.
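
For instance, the kind of throwaway spider you might get one of these tools to draft for a Braggoscope-style job could look something like this; the URLs and the extraction rule are invented for illustration:

```python
# A drudge-work script: fetch a batch of pages, pull out one field,
# and write the results as structured data.
import csv
import re
from urllib.request import urlopen

urls = [f"https://example.com/episode/{n}" for n in range(1, 1001)]  # placeholders

with open("episodes.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["url", "title"])
    for url in urls:
        html = urlopen(url).read().decode("utf-8", errors="replace")
        match = re.search(r"<title>(.*?)</title>", html, re.DOTALL | re.IGNORECASE)
        title = match.group(1).strip() if match else ""
        writer.writerow([url, title])
```

It’s unremarkable code, which is exactly the point: it’s quick to check, and it only has to run once.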

To me, it feels right that the usefulness happens on your own machine. You don’t put the machine-generated code in front of other humans.

Saturday, August 5th, 2023

“If It Sounds Like Sci-Fi, It Probably Is”

Emily M. Bender:

I dislike the term because “artificial intelligence” suggests that there’s more going on than there is, that these things are autonomous thinking entities rather than tools and simply kinds of automation. If we focus on them as autonomous thinking entities or we spin out that fantasy, it is easier to lose track of the people in the picture, both the people who should be accountable for what the systems are doing and the people whose labor and data are being exploited to create them in the first place.

Alternative terms:

  • Stochastic parrots
  • Spicy autocomplete
  • Mad Libs
  • Magic Eight Ball

And this is worth shouting from the rooftops:

The threat is not the generative “AI” itself. It’s the way that management might choose to use it.

Tuesday, June 13th, 2023

When I lost my job, I learned to code. Now AI doom mongers are trying to scare me all over again | Tristan Cross | The Guardian

Ingesting every piece of art ever into a machine which lovelessly boils them down to some approximated median result isn’t artistic expression. It may be a neat parlour trick, a fun novelty, but an AI is only able to produce semi-convincing knock-offs of our creations precisely because real, actual people once had the thought, skill and will to create them.