No system exists in a vacuum; any individual intelligence will always be both defined and limited by the context of its existence, by its environment.
- Currently, our environment, not our brain, is acting as the bottleneck to our intelligence.
- The expansion of intelligence can only come from a co-evolution of brains (biological or digital), sensorimotor affordances, environment, and culture — not from merely tuning the gears of some brain in a jar, in isolation. Such a co-evolution has already been happening for eons and will continue as intelligence moves to an increasingly digital substrate. No “intelligence explosion” will occur, as this process advances at a roughly linear pace.
According to Prof. Yuval Noah Harari, a brain is just a piece of biological tissue; there is nothing intrinsically intelligent about it.
In his latest book, he asks whether the superhuman AIs of the future, developed collectively over centuries, will have the capability to develop AI greater than themselves.
I say no: no more than any of us can.
Answering “yes” would fly in the face of everything we know — again, remember that no human, nor any intelligent entity that we know of, has ever designed anything smarter than itself.
Prof. Harari (in his book Sapiens) describes how wheat, with zero intelligence, came to con humanity into providing for its needs, as though humans themselves had zero intelligence.
However, I say that you cannot dissociate intelligence from the context in which it expresses itself. The intelligence of an octopus is specialized in the problem of being an octopus. The intelligence of humans is specialized in the problem of being human.
In his latest book and lectures, he explores the possibility of AI combining with big data and the genome to create the first ultra-intelligent machine, leading to digital dictatorship.
The basic premise is that, in the near future, a first “seed AI” will be created, with general problem-solving abilities slightly surpassing that of humans. This seed AI would start designing better AIs, initiating a recursive self-improvement loop that would immediately leave human intelligence in the dust, overtaking it by orders of magnitude in a short time.
I say, echoing I. J. Good, that it will be the last invention that man need ever make, provided that the machine is docile enough to tell us how to keep it under control.
He also states that AI is a major risk, greater than nuclear war or climate change.
This narrative, however, considers "intelligence" in a completely abstract way, disconnected from its context, and ignores the available evidence about both intelligent systems and recursively self-improving systems.
This narrative contributes to the dangerously misleading public debate that is ongoing about the risks of AI and the need for AI regulation.
What are we talking about when we talk about intelligence?
Precisely defining intelligence is in itself a challenge.
The intelligence explosion narrative equates intelligence with the general problem-solving ability displayed by individual intelligent agents — by current human brains, or future electronic brains.
Intelligence expansion can only come from a co-evolution of the mind, its sensorimotor modalities, and its environment.
Intelligence is not a superpower; exceptional intelligence does not, on its own, confer on you proportionally exceptional power over your circumstances.
Our environment, which determines how our intelligence manifests itself, puts a hard limit on what we can do with our brains — on how intelligent we can grow up to be, on how effectively we can leverage the intelligence that we develop, on what problems we can solve.
Our biological brains are just a small part of our whole intelligence.
These days cognitive prosthetics surround us, plugging into our brains and extending their problem-solving capabilities. Your smartphone. Your laptop. Google search. The cognitive tools you were gifted in school. Books. Other people. Mathematical notation. Programming.
However, the most fundamental of all cognitive prosthetics is of course language itself — essentially an operating system for cognition, without which we couldn't think very far.
These things are not merely knowledge to be fed to the brain and used by it, they are literally external cognitive processes, non-biological ways to run threads of thought and problem-solving algorithms — across time, space, and importantly, across individuality.
It is civilization as a whole that will create superhuman AI, not you, nor me, nor any individual. A process involving countless humans, over timescales we can barely comprehend. Transcending what we are now, much like it has transcended what we were 10,000 years ago. It’s a gradual process, not a sudden shift.
Civilization will develop AI, and just march on to be ruled by an oligarchy of two or three large, general-purpose, cloud-based commercial pieces of software.
This is why we need to be sure that the decision logic we programme into these systems is what we perceive to be ethical; if not, we will have a world full of schizophrenia.
Of course, the sensors will have to actually detect the world as it is.
Cognitive prosthetics, not our brains, will be where most of our cognitive abilities reside.
However, man cannot get rid of his body even if he throws it away. There can be no absolute transcendence of the species role while man lives.
In this case, you may ask, isn’t civilization itself the runaway self-improving brain?
Is our civilizational intelligence exploding? No.
Unless we are talking here about immortality, one is merely talking about an intensification of the character defenses and superstitions of man.
These artificially intelligent systems never perform the same way twice, even under identical conditions, so how do we test them? How do we know there are any guarantees of safety? This is going to become a thornier issue as we go forward.
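One partial answer, sketched below with hypothetical names and a toy stand-in for a real AI system, is statistical testing: rather than asserting an exact output, run the nondeterministic system many times under a fixed random seed and assert that its behaviour stays within a stated tolerance. This is a minimal sketch of the idea, not a safety guarantee.

```python
import random

def noisy_classifier(x, rng):
    """Toy stand-in (hypothetical) for a nondeterministic AI system:
    returns 1 for positive inputs, 0 otherwise, but flips its answer
    5% of the time to simulate nondeterminism."""
    answer = 1 if x > 0 else 0
    return answer if rng.random() >= 0.05 else 1 - answer

def statistical_test(system, x, expected, trials=1000,
                     min_accuracy=0.9, seed=42):
    """Pass if the expected answer appears in at least `min_accuracy`
    of `trials` runs. The fixed seed makes the test itself reproducible
    even though the system under test is stochastic."""
    rng = random.Random(seed)
    hits = sum(1 for _ in range(trials) if system(x, rng) == expected)
    return hits / trials >= min_accuracy

# Usage: with a ~5% error rate and a 90% accuracy threshold over
# 1000 trials, this check passes comfortably.
print(statistical_test(noisy_classifier, 2.0, expected=1))
```

The design choice here is to test a distribution of behaviours instead of a single trace; the tolerance (`min_accuracy`) is itself a judgement call, which is exactly where the thorny part lives.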
All human comments appreciated. All like clicks chucked in the bin.