We have spent decades considering the possibility that artificial intelligence might pose a threat to humanity and to our survival as a species.
I always dismissed these fears of a real-life Skynet with a very simple comparison: did we exterminate chimpanzees? No, we didn't, simply because they never posed a threat to us, so we decided to just let them be.
A truly intelligent AI, past the so-called "singularity", more intelligent than us and probably capable of evolving faster than us, would likely regard humanity as no threat at all, much the way we regard chimpanzees.
In the last few years, as AI research has evolved, it has become clear, however, that long before we achieve a truly intelligent AI we will be able to create AS: artificially stupid machines that may actually be a danger to human survival.
A machine with enough sensors to detect what happens at a distance, programmed to kill anything warm that moves on two legs, would not require the kind of self-awareness we saw in the Terminator saga or in I, Robot. It would require a degree of intelligence only marginally more advanced than what we already install in a self-driving car.
It would not need to tell what language you are speaking, what colour your eyes are, or whether you are smiling; it would not need to know a pear from a banana, what music is, or how to cook an egg…
This machine doesn't really need to be artificially intelligent: if it were, it would probably be able to make intelligent decisions. It is only artificially stupid, and I think that's what should really concern us right now!