Is AI a threat to humanity?

Tech thinkers warn of AI dangers at Bletchley Park Summit

The Bletchley Park AI Summit kicked off on Wednesday with Elon Musk calling AI a threat to humanity and Mustafa Suleyman, a co-founder of DeepMind, saying we might need to ‘pause’ AI development. Even dear old King Charles had his tuppence-worth. But it’s a relief to hear ChatGPT doesn’t yet pose a risk of catastrophic harm… Phew! I guess we can keep calm and carry on playing PastMaster.

One of the things we ask guests on PastMaster is where they sit on the AI doom spectrum: at one end, full Skynet Judgement Day, with robot Arnie chasing down humanity’s saviour; at the other, a utopia where machines solve our medical and environmental problems far more quickly and efficiently than humans ever could.

I listened to an interview with Mustafa Suleyman recently in which he warned that in the wrong hands AI could be used to create chemical weapons.

In another interview Dr Louis Rosenberg, chief scientist of Unanimous AI, explained that technology can now capture our movements, for example from data recorded by virtual reality software, and analyse it to predict not just health conditions but our likely political leanings. This is something people are just not aware of, he said. Too right.

On the radio this morning Amol Rajan was probing away, asking how AI is already affecting us. The answer often lies in algorithms that decide, for example, when benefits should be cut off or credit declined.
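To make that concrete, here’s a deliberately crude sketch of the sort of rule-based scoring that sits behind many of those automated decisions. Every field, weight and threshold here is invented for illustration; no real lender or benefits system works from these exact rules.

```python
# A toy credit-decision scorer. The fields, weights and cut-off
# are made up purely to show the shape of such systems.
def credit_score(applicant: dict) -> int:
    score = 0
    score += 30 if applicant["years_at_address"] >= 3 else 0
    score += 40 if applicant["on_electoral_roll"] else 0
    score -= 50 * applicant["missed_payments"]  # penalties dominate
    return score

applicant = {"years_at_address": 5, "on_electoral_roll": True, "missed_payments": 2}
decision = "approve" if credit_score(applicant) >= 20 else "decline"
print(decision)  # "decline": two missed payments outweigh everything else
```

The unsettling part is how blunt it is: a couple of bad data points and the machine says no, with nobody in the loop.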

It feels to me as though PastMaster sits at neither end of the AI doom spectrum. Having a laugh with a silly programme is more fun than worrying. And if ChatGPT struggles even to run a light-hearted roleplaying game, I’m dubious about its ability to wreak chaos in the world of humans. It must be other AI that people like Musk and Suleyman are talking about…

That said, I would love to have a better view of what’s happening under the bonnet of GPT. Surely it’s more than aggregating and distilling which word is most likely to follow another? (From what I’ve read, that next-word prediction really is the core; the ‘morality’ is a layer of fine-tuning on human feedback added afterwards, which is why clever prompting can sometimes route around it.) By mucking around with our instructions to GPT we’ve got it to swear. To break its moral code. How easy or hard can it be to get it to break that code on proper bad stuff, like writing an effective plan to cause terror or damage?
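For the similarly curious, here’s a toy sketch of the ‘which word comes next’ idea. The probability table is entirely made up; a real model learns billions of such weightings from text, and this shows only the bare sampling mechanism, not anything resembling OpenAI’s actual code.

```python
import random

# Made-up probabilities for which word follows which.
# A real language model learns these from vast amounts of text.
next_word_probs = {
    "keep": {"calm": 0.7, "going": 0.2, "quiet": 0.1},
    "calm": {"and": 0.8, ".": 0.2},
    "and": {"carry": 0.6, "the": 0.4},
    "carry": {"on": 0.9, "out": 0.1},
}

def next_word(word: str) -> str:
    """Sample the next word from the (made-up) probability table."""
    candidates = next_word_probs.get(word)
    if not candidates:
        return "."  # no known continuation: stop the sentence
    words = list(candidates)
    weights = [candidates[w] for w in words]
    return random.choices(words, weights=weights)[0]

sentence = ["keep"]
while sentence[-1] != "." and len(sentence) < 10:
    sentence.append(next_word(sentence[-1]))
print(" ".join(sentence))  # e.g. "keep calm and carry on ."
```

Scale that table up unimaginably and you get fluent text; the guardrails are bolted on separately, which is presumably where the mucking-around gets in.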

Hey ho. Our next episode of PastMaster drops on Monday 6 November and it stars the very funny Nick Horseman. So I’m signing off this post with a swing back into the positive end of the AI doom spectrum.