6 Comments

I agree with much of what you write. I'm also very concerned about what happens when we automate away entry-level jobs. This is already happening to copywriters, and it's going to happen across a wide variety of industries. We're going to end up with a skills gap, because you can't progress in your career if you never get the chance to start it. There's actually a precedent for this, to wit, the UK geosciences industry in the early 1990s, which ran a five-year hiring freeze and then discovered a decade later that it had no one ready for middle management.

Another part of the problem is that, in these lower-paid roles that are about to be automated away, AI just can't be trusted to get things right, but humans will trust it anyway; see, for example, https://www.bbc.co.uk/news/world-us-canada-65735769

However, I can't help but point out a couple of things. Y2K wasn't a disaster because the tech world saw it coming and did something about it. It absolutely would have been a disaster if they hadn't, but in fact people poured vast amounts of resources into making sure it wasn't.

The AI killing its drone operator in a simulation didn't happen; it was a thought experiment, not a simulation: https://www.businessinsider.com/ai-powered-drone-tried-killing-its-operator-in-military-simulation-2023-6

You make the important point that AI isn't conscious but that its lack of consciousness doesn't make it any less of a threat, and then you go on to say "it simply has no empathy". An AI has no empathy in the same way that a brick has no empathy. This might be nit-picking, but given that so many people are using anthropomorphic language that implies sentience and emotion, I think it's really important to avoid that altogether if we're to get AI into perspective. AI can't understand social norms because it can't understand anything. It has no theory of mind, ergo it cannot understand or know or realise or decide anything; it just spits out what it has calculated to be the most likely answer.
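To be concrete about "calculated to be the most likely answer", here is a deliberately toy sketch in Python of what next-word selection amounts to. The probability table is invented purely for illustration (a real model computes such numbers over billions of parameters), but the principle is the same: a ranking over numbers, not understanding.

    # Toy next-word prediction: invented probabilities, not a real model.
    next_word_probs = {
        "the cat sat on the": {"mat": 0.62, "floor": 0.21, "moon": 0.01},
    }

    def continue_text(prompt: str) -> str:
        # Pick whichever word the table says is most likely; nothing is
        # "understood" or "decided", it is just a lookup and a ranking.
        probs = next_word_probs.get(prompt)
        if not probs:
            return prompt
        return prompt + " " + max(probs, key=probs.get)

    print(continue_text("the cat sat on the"))  # -> "the cat sat on the mat"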

AI – though I'd rather call it something like generative computing because one thing it's not is intelligent – can do significant harm, but I doubt it's going to do it directly to us. More likely (in fact, almost certainly), it's going to upend economies and societies in ways we don't yet understand but which will have far-reaching and deep implications for how we live. And it's going to hit hardest in late-stage capitalistic societies where more businesses are run on a more exploitative model. Watch out, America.

author

Thank you for the counterpoints! I'm personally very suspicious of "clarifying" denials by the US military, but you're absolutely right that I should reference it. I will add a note.

And your point about people mistakenly relying on AI in their work is timely. I've had to deal with that recently at work. In fact, part of this piece came from a short slide deck I created to bring some folks up to speed on best practices.

I'm surprised you don't want to call these machines intelligent. They're not conscious. They're not creative. The colloquial definition of intelligence definitely implies something these machines don't have. But numeracy is a kind of intelligence, as is the emotional intelligence of dogs. There are people with extreme forms of autism who are practically shut in their minds and worse at reading emotions than both dogs and the facial recognition software on your phone. I'm not sure we'd say they weren't intelligent.

We need a word that captures the difference between a calculator and a cotton gin, just as we need one that captures the difference between a calculator and a neural network. I have no attachment to the word intelligent. Perhaps it carries too much other baggage. But if so, we would need something with a very similar meaning.


Yeah, I too retain some suspicion of clarifying statements from any military, but that particular story stank to high heaven, so I'm minded to believe the clarification.

I utterly despise the term "artificial intelligence". I think a lot of people take intelligence to mean sentience, and tbh, when you try to come up with watertight definitions of either of those words, you just can't. So why use them? And it's a complete red herring to compare LLMs or other generative computing software to dogs, and frankly it's a bit insulting to compare them to people with autism. That's not apples to oranges, it's bricks to oranges.

What we don't have here are intelligent machines; what we do have is generative software. The machine itself is irrelevant – you can run an LLM on a phone but it doesn't make that phone intelligent. But when we talk about machines being intelligent we immediately anthropomorphise and start thinking of them as akin to the androids, sentient or otherwise, we're familiar with from fiction. That leads us astray.

We need to think very, very carefully about the language we use to discuss these systems, because decades of fiction have primed us to think about these things in quite constrained and specific ways. And that risks us completely missing the bigger picture because we've artificially and needlessly limited our thinking. So let's call a spade a spade. Terms that are realistic and don't carry connotations of intelligence or sentience are preferable to more anthropomorphic ones: large language models, generative computing, image generation software, etc. Because so, so much of the utter bilge that has been published about these programs has its roots in misunderstanding encouraged by sloppy language.

author

Edits complete. Thank you.


If I helped a little, I'm happy!

author

I started in the biomedical field, and everyone uses similar language about genes, which are of course a different kind of code. As I said, I have no particular attachment to a label. I think there is some equivocation going on in your reply, but perhaps that's another reason to push for linguistic orthodoxy. I wish you the best of luck!
