We call the singularity the moment when A.I. outperforms human intelligence. The concept matters because, if human intelligence is capable of creating A.I., then anything that outperforms human intelligence should, by definition, be able to improve A.I.; this would trigger an iterative chain reaction that would elevate A.I. to a demigod-like status in a relatively short time.
There is one catch, though: to define outperformance, or improvement over something, we must first be able to measure and compare it. I am not wondering whether we can; I am arguing, instead, that it is completely impossible.
Let me explain: anyone can instinctively tell who’s more intelligent between a butterfly and a dog, or between the average 5th grader and Albert Einstein. But if we had to choose between the world’s best architect and the world’s best psychologist, we would probably stir up quite a few disagreements along the way.
So it could happen that one day a computer outperforms humans in one category but not in another… actually, it has already happened: the world’s best chess player, Garry Kasparov, lost to IBM’s Deep Blue in 1997, and he probably wouldn’t hold a candle to today’s chess engines.
So how can we realistically talk about one single singularity?
Even if we narrow the “ultimate” singularity down to a machine’s ability to outperform humans at programming and/or engineering, so as to improve itself better than any human could, things would only be marginally simpler:
- First, programming and engineering are particularly difficult tasks that few humans fully master in a lifetime anyway. Before that skill is outperformed, there will probably be a multitude of micro-singularities in which machines outperform, one by one, cashiers, drivers, cooks, and soldiers, just to name a few…
- Second, how can we be sure that a better-than-any-engineer machine would ever be capable of outperforming, say, a doctor? There is no guarantee that the skill of self-programming alone will make a machine capable of outperforming humans at other tasks too. Simply put, sometimes there is no way to get a bowl of rice by endlessly refining a bowl of wood chips.
What seems important is that, until now, we have dismissed each micro-singularity that emerged as “just another calculator” or “just another car”: the fact that a calculator is faster at arithmetic, or a car is faster at moving, doesn’t change their nature of being dumb as a brick without an intelligent operator.
The emergence of a truly independent self-driving car is nothing like a calculator, and an anthropomorphic, self-reliant cook (or soldier) would be even less so.
I believe introducing the idea of a multi-singularity is a handy conceptual tool at this point in our history: it allows us to measure the progress of A.I. while keeping it real and relevant, rather than hidden in a far-off future.
It also allows us to enumerate which skills are being outperformed and made redundant over time, and ultimately to be ready for the day our own selves (yes, me, and yes, you too) are in that position.
Domesticating A.I. is just one more step for our civilization, one that, if done well, will keep the promise of freeing humanity from work, one redundancy at a time.
It likely won’t be the end of humanity, though… finding a new purpose is going to be an engaging task, both at the individual level and as a society.