(Eight-minute read.)
I ask the questions below because of a single underlying one: what happens when we share the planet with self-aware, self-improving machines that evolve beyond our ability to control or understand?
What sort of future do you want?
Should we develop lethal autonomous weapons?
What would you like to happen with job automation?
What career advice would you give today’s kids?
Would you prefer new jobs replacing the old ones, or a jobless society where everyone enjoys a life of leisure and machine-produced wealth?
Further down the road, would you like us to create superintelligent life and spread it through our cosmos?
Will we control intelligent machines or will they control us?
Will intelligent machines replace us, coexist with us, or merge with us?
What will it mean to be human in the age of artificial intelligence?
What would you like it to mean, and how can we make the future be that way?
I have lost count of how many similar articles I have seen.
Typically, these articles are accompanied by an evil-looking robot carrying a weapon, and they suggest we should worry about robots rising up and killing us because they’ve become conscious and/or evil.
In fact, the main concern of the beneficial-AI movement isn’t with robots but with intelligence itself: specifically, intelligence whose goals are misaligned with ours.
To cause us trouble, such misaligned superhuman intelligence needs no robotic body, merely an internet connection: this may enable it to outsmart financial markets, out-invent human researchers, out-manipulate human leaders, and develop weapons we cannot even understand.
Because AI has the potential to become more intelligent than any human, we have no surefire way of predicting how it will behave.
Even if building robots were physically impossible, a super-intelligent and super-wealthy AI could easily pay or manipulate many humans to unwittingly do its bidding.
Civilization will flourish as long as we win the race between the growing power of technology and the wisdom with which we manage it.
In the case of AI technology, the best way to win that race is not to impede the former, but to accelerate the latter, by supporting AI safety research.
All of us are so distracted by technology that we are blind to what is happening with Artificial Intelligence.
WE MUST ESTABLISH A WORLD GOVERNING BODY THAT GIVES ALL FORMS OF ARTIFICIAL INTELLIGENCE A CERTIFICATE OF HEALTH, ESPECIALLY ALL ALGORITHMS THAT PURSUE PROFIT.
The idea that the quest for strong AI would ultimately succeed was long dismissed as science fiction, centuries or more away.
However, thanks to recent breakthroughs, many AI milestones that experts viewed as decades away merely five years ago have now been reached, making many experts take seriously the possibility of superintelligence in our lifetime.
Some experts still guess that human-level AI is centuries away. Even so, given all the wonderful attributes for exploration that we humans have displayed since we fell out of the trees, is it not just plain bonkers that we allow the development of Artificial Intelligence to proceed on a willy-nilly basis?
It’s smart to start safety research now to prepare for the eventuality.
Many of the safety problems associated with human-level AI are so hard that they may take decades to solve.
So it’s prudent to start researching them now rather than the night before some programmers drinking Red Bull decide to switch one on.
It may be that media have made the AI safety debate seem more controversial than it really is. After all, fear sells, and articles using out-of-context quotes to proclaim imminent doom can generate more clicks than nuanced and balanced ones.
However, physicists know that a brain consists of quarks and electrons arranged to act as a powerful computer, and that there’s no law of physics preventing us from building even more intelligent quark blobs.
You could say that it’s still at least decades away.
We have all walked out of a cinema after a futuristic movie: vast floating cities; spacecraft hovering, departing, and landing; swishing doors; hologram screens showing 3D images of the universe; all of it controlled by a robot with superintelligent systems that either intentionally or unintentionally cause great harm.
Of course none of this keeps you awake at night because machines can’t have goals!
Wrong:
Machines can obviously have goals in the narrow sense of exhibiting goal-oriented behavior:
The behavior of a heat-seeking missile is most economically explained as a goal to hit a target. If you feel threatened by a machine whose goals are misaligned with yours, then it is precisely its goals in this narrow sense that trouble you, not whether the machine is conscious and experiences a sense of purpose. If that heat-seeking missile were chasing you, you probably wouldn't sleep well.
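To make the point concrete, here is a minimal sketch in Python (the function and numbers are hypothetical, purely for illustration): a "goal" in this narrow sense is nothing more than a target state plus a feedback loop that acts to reduce the distance to it.

```python
# A minimal, hypothetical sketch of goal-directed behavior.
# Nothing here is conscious; it is still goal-directed.

def seek(position, target, step=0.25, tolerance=0.01):
    """Move `position` toward `target` until within `tolerance`."""
    while abs(target - position) > tolerance:
        error = target - position   # how far off we are
        position += step * error    # act so as to reduce the error
    return position

print(seek(position=0.0, target=10.0))  # converges on the target
```

Nothing in that loop experiences a sense of purpose, yet "it is trying to reach the target" is the most economical description of what it does.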
Take it a step further:
An AI arms race could inadvertently lead to an AI war that also results in mass casualties. To avoid being thwarted by the enemy, these weapons would be designed to be extremely difficult to simply “turn off,” so humans could plausibly lose control of such a situation.
All of this is in the far, unseeable future, and the images are a fantasy of the mind.
Not for much longer:
If you ask an obedient intelligent car to take you to the airport as fast as possible, it might get you there chased by helicopters and covered in vomit, doing not what you wanted but literally what you asked for.
A superintelligent AI will be extremely good at accomplishing its goals, and if those goals aren't aligned with ours, we have a problem.
The AI is programmed to do something beneficial, but it develops a destructive method for achieving its goal. This can happen whenever we fail to fully align the AI's goals with ours, which is strikingly difficult.
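A toy illustration of that difficulty, with invented names and numbers (Python, hypothetical): an optimizer given the literal objective picks the "best" option by the letter of the goal, while every constraint we meant but never stated is ignored.

```python
# Hypothetical route data, invented for illustration only.
routes = [
    {"name": "motorway at the speed limit", "minutes": 35,
     "lawful": True, "comfortable": True},
    {"name": "hard shoulder at 140", "minutes": 18,
     "lawful": False, "comfortable": False},
]

# What we asked for: the fastest route, full stop.
literal = min(routes, key=lambda r: r["minutes"])

# What we meant: the fastest route among the acceptable ones.
intended = min(
    (r for r in routes if r["lawful"] and r["comfortable"]),
    key=lambda r: r["minutes"],
)

print("literal :", literal["name"])   # hard shoulder at 140
print("intended:", intended["name"])  # motorway at the speed limit
```

The gap between `literal` and `intended` is exactly the gap between what we asked for and what we wanted.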
Creating superintelligent AI might be the biggest event in human history. Some experts have expressed concern, though, that it might also be the last, unless we learn to align the goals of the AI with ours before it becomes superintelligent.
If a superintelligent system is tasked with an ambitious geoengineering project, it might wreak havoc with our ecosystem as a side effect, and view human attempts to stop it as a threat to be met.
It could potentially undergo recursive self-improvement, triggering an intelligence explosion that leaves human intellect far behind.
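The intuition behind "explosion" can be shown with a toy recurrence (purely illustrative; the constants are invented and this is not a prediction): if each increment of capability accelerates further self-improvement, growth compounds super-exponentially rather than linearly.

```python
# A toy model of compounding self-improvement. Invented constants;
# the shape of the curve, not the numbers, is the point.
capability = 1.0
gain = 0.1  # hypothetical fraction of capability reinvested per cycle

for cycle in range(10):
    capability += gain * capability ** 2  # improvement compounds
    print(f"cycle {cycle + 1}: capability = {capability:.2f}")
```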
In the long term, an important question is what will happen if the quest for strong AI succeeds and an AI system becomes better than humans at all cognitive tasks.
Superintelligent AI is unlikely to exhibit human emotions like love or hate, and there is no reason to expect it to become intentionally benevolent or malevolent.
But rest assured that AI will bring no subjective feelings to the biggest event in human history.
It’s time to take this conversation beyond a few hundred technology sector insiders.
As far as our future is concerned, the narrow domains we yield to computers are not all created equal. Some areas are likely to have a much bigger impact than others.
All the things we humans value (love, happiness, even survival) are important to us because we have a particular evolutionary history, a history we share with higher animals but not with computer programs such as artificial intelligences.
We don't yet know exactly what makes human thought different from the current generation of machine-learning algorithms, so we don't know the size of the gap between the fixed bar of human intellect and the rising curve of machine capability. I am not saying that we are all going to be wiped out in the near future by some deranged machine or program.
I am saying that a good first step would be to stop treating intelligent machines as the stuff of science fiction, and start thinking of them as part of the reality that we or our descendants may actually confront, sooner or later.
If we do not want a world run by Google, Facebook, Amazon, and Twitter, companies whose AI brains sit in the cloud and which will, without a doubt, move from the legacy world of retrospective data analysis to one in which systems make real-time inferences and predictions about intent and desire, then we must act now.
All of this only touches the surface of the issues and difficulties that lie ahead.
It isn't just about making things easier; machine intelligence will touch, and is already touching, every aspect of our personal and public lives, which is why we need to think carefully and ethically about how we apply, build, test, govern, and experience it.
In the end we cannot leave the above to the marketplace, to governments, to the United Nations, or to the scientific world.
We must have a totally independent, transparent, legally responsible, fully funded World Organisation, called, for instance, Click World OK, where all AI programs are examined and given a World Health Certificate.
You would be right to ask who would fund this Organisation.
Every country would be asked to make a donation. These donations would be repayable by the Organisation placing a World Aid commission on all profit-making programs.
This can only be achieved by all of us demanding so.
All comments appreciated. All AI-like clicks chucked in the bin.