
I agree with much of what you write. I'm also very concerned about what happens when we automate away entry-level jobs. This is already happening to copywriters, and it's going to happen across a wide variety of industries. We're going to end up with a skills gap, because you can't progress in your career if you never get a chance to start it. There's actually precedent for this, to wit, the UK geosciences industry in the early 1990s, which ran a five-year hiring freeze and then discovered ten years later that it had no one ready for middle management.

Another part of the problem is that in the lower-paid roles about to be automated away, AI simply can't be trusted to get things right, yet humans will trust it anyway; see, for example, https://www.bbc.co.uk/news/world-us-canada-65735769

However, I can't help but point out a couple of things. First, Y2K wasn't a disaster because the tech world saw it coming and did something about it. It absolutely would have been a disaster if they hadn't, but in fact people poured vast amounts of resources into making sure it wasn't.

Second, the AI killing its drone operator in a simulation didn't happen; it was a thought experiment, not a simulation: https://www.businessinsider.com/ai-powered-drone-tried-killing-its-operator-in-military-simulation-2023-6

You make the important point that AI isn't conscious, and that its lack of consciousness doesn't make it any less of a threat, but then you go on to say "it simply has no empathy". An AI has no empathy in the same way that a brick has no empathy. This might be nit-picking, but given that so many people are using anthropomorphic language that implies sentience and emotion, I think it's really important to avoid that altogether if we're to get AI into perspective. AI can't understand social norms because it can't understand anything. It has no theory of mind; ergo, it cannot understand, know, realise, or decide anything. It just spits out what it has calculated to be the most likely answer.

AI (though I'd rather call it something like generative computing, because one thing it's not is intelligent) can do significant harm, but I doubt it's going to do it directly to us. More likely (in fact, almost certainly), it's going to upend economies and societies in ways we don't yet understand but which will have far-reaching and deep implications for how we live. And it's going to hit hardest in late-stage capitalist societies, where more businesses are run on more exploitative models. Watch out, America.
