(A seven to eight minute read)
There are a lot of things that can go wrong, and have gone wrong, throughout history: earthquakes, wars, plagues and so on.
I don't need to rehearse the present state of our planet in this post, but a major change is coming, over unknown timescales and across every segment of society, and the people playing a part in that transition have a huge responsibility, and a huge opportunity, to shape it for the best.
What is triggering this change? Artificial intelligence.
Although most of us are unaware of it, AI systems are everywhere, from bank apps that let us deposit checks with a picture, to everyone’s favorite Snapchat filter, to our handheld mobile assistants.
Many countries' laws are deficient when it comes to artificial intelligence ("AI"), defined here as the simulation of human intelligence processes by computer systems and other machines. But should we ignore the risks of a technology, and take no precautions, just because the law has not caught up?
Until we have concrete evidence to confirm what an AI can someday achieve, it’s safer to assume that there are no upper limits – that is, for now, anything is possible and we need to plan accordingly.
How do we prepare for an AI more intelligent than we can imagine?
We can imagine all sorts of catastrophic risks from AI or robotics or genetic engineering. The task of developing this technology therefore calls for extraordinary care.
Since machine learning is at the core of pretty much every AI success story, it’s really important for us to be able to understand *what* it is that the machine learned.
I think it's really important for us to develop techniques so that machines can explain what they learned, and humans can validate that understanding. …
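To make that concrete, here is a minimal sketch of one such technique, permutation importance, which asks which inputs a trained model actually relies on. Python and scikit-learn are my assumptions here; the post names no tools, and the dataset is just a stock example.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Train an opaque model on a small tabular dataset.
data = load_breast_cancer()
X_train, X_test, y_train, y_test = train_test_split(
    data.data, data.target, random_state=0)
model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)

# Probe what the model learned: shuffle each feature in turn
# and measure how much held-out accuracy drops as a result.
result = permutation_importance(model, X_test, y_test,
                                n_repeats=10, random_state=0)
for i in result.importances_mean.argsort()[::-1][:5]:
    print(f"{data.feature_names[i]}: {result.importances_mean[i]:.3f}")
```

If shuffling a feature barely changes accuracy, the model never really learned to use it; a human can then check whether the features it does lean on make sense for the domain.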
Of course, we are all so preoccupied with our own lives that we turn a blind eye to the world of technology. Given the expectation that advanced AI will far surpass any technology seen to date, and possibly even human intelligence, we do this at our peril.
Even if these things are still far off, and even if we're not sure we'll ever reach them, a small probability of a very high consequence is reason enough to take these issues seriously.
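The logic here is ordinary expected value. As a rough illustration (the numbers are invented for the example): if $p$ is the probability of catastrophe and $C$ its cost, the expected loss is

$$\mathbb{E}[\text{loss}] = p \times C,$$

so even $p = 0.001$ paired with a $C$ on the scale of civilisation outweighs many harms that are certain but bounded. "It probably won't happen" is not, by itself, a reason to ignore it.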
Staving off future catastrophes (assuming that is possible) would bring far more benefit to far greater numbers of people than solving present-day problems such as cancer or extreme poverty.
Regulation and control instruments need to be designed and established beforehand, not after the invention of a powerful AGI.
There is no governing world body whose goal is to keep AI's impact on society beneficial, to vet those who create the software, and to hold them responsible.
Human history is rife with learning from mistakes, but in the case of the catastrophic and existential risks that AI could present, we can’t allow for error – but how can we plan for problems we don’t know how to anticipate? AI safety research is critical to identifying unknown unknowns, but is there more the AI community or the rest of society can do to help mitigate potential risks?
When AI becomes very general and very powerful, aligning it with human interests will be challenging. If we fail, AI could plausibly become an existential risk for humanity.
Automation threatens millions of jobs and this is only the beginning. It’s important to remember that AI is a tool and, as such, not inherently good or bad. As with any other technology or tool, there could be unintended consequences.
If I were ranking the existential threats facing us, runaway 'superintelligence' would not even be in the top 10. It is a second-half-of-the-21st-century problem, though its seeds are being sown now.
We don't know what the future of artificial intelligence will look like. However, if we allow it to exploit us and the planet for greed and profit, we are no better than Og standing in front of his cave.
Almost every sector of society is feeling the headwinds of the digital revolution, and it is hard to find a sector where robots or technology cannot take humans' jobs.
The immaturity of our conduct is mind-boggling. AI is not just a cool gadget or a nifty little thing: a smartphone, an iPad, Google, Facebook, Twitter, or an Internet of Everything run by systems with no autonomy.
The mismatch between the power of our playthings and the benefits they may well impart, now and in the future, should not be up for negotiation. As long as we manage to keep the technology beneficial to all rather than to itself, learning human values and doing things humans would consider good upon sufficient reflection, we might avoid a world not worth living in. It's not profitable to dwell on what might have been.
Now is the time to make all technology answerable to our core values.
The extraordinary promise of machine intelligence will be worthless if we do not understand what it learned.
The real threat with advances in AI will stem from our failure to create a policy framework for emerging technology.
AI is simply an extension of our culture and values. "The problems and solutions are us. AI enhances human power — it's just a way of making us smarter, of letting us know more things sooner."
AGI systems that are not task-directed, that operate without a defined goal, are a nightmare scenario.
One can imagine [AI] outsmarting financial markets, out-inventing human researchers, out-manipulating human leaders, and developing weapons we cannot even understand, that choose their own targets and cannot be recalled.
It is unlikely that we will enter a dystopian future where AI is held responsible for its own actions: given personhood and hauled into court.
Oversight is called for because, over the coming decades, AI-equipped ('smart') machines will increasingly acquire two unique attributes: some degree of autonomous decision-making and the ability to learn from experience. As a result, over time, smart machines could stray further from their programmers' instructions than they do at present.
AI use will need some kind of oversight but hardly a regulatory regime.
Why?
Because AI is by design artificial, and so ideas such as liability or a jury of peers appear meaningless.
It will be impossible to control.
Because reinforcement-learning AI is being integrated into ever more hardware and software, and no legal system knows how to treat reinforcement learning. Whether and when a machine can have intent is more a metaphysical question than a legal or scientific one, and it is difficult to define "goal" in a manner that avoids requirements pertaining to intent and self-awareness without creating an over-inclusive definition.
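To see why "goal" is slippery, consider a minimal reinforcement-learning sketch (hypothetical Python; the toy world and reward function are invented for illustration). The agent's entire "goal" is a number returned by a reward function; nothing in the code resembles intent or self-awareness, yet the agent's behaviour is organized entirely around maximizing that number:

```python
import random

# A toy one-dimensional world: the agent starts at position 0
# and earns reward only when it reaches position 4.
N_STATES, GOAL = 5, 4
ACTIONS = [-1, +1]  # step left or step right

def reward(state):
    """The agent's entire 'goal', expressed as a single number."""
    return 1.0 if state == GOAL else 0.0

# Tabular Q-learning: estimate the value of each (state, action) pair.
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
alpha, gamma, epsilon = 0.5, 0.9, 0.3

for episode in range(500):
    s = 0
    while s != GOAL:
        # Mostly exploit the best known action, sometimes explore.
        a = (random.choice(ACTIONS) if random.random() < epsilon
             else max(ACTIONS, key=lambda act: Q[(s, act)]))
        s_next = min(max(s + a, 0), N_STATES - 1)
        # Nudge the estimate toward reward plus discounted future value.
        best_next = max(Q[(s_next, b)] for b in ACTIONS)
        Q[(s, a)] += alpha * (reward(s_next) + gamma * best_next - Q[(s, a)])
        s = s_next

# The learned policy walks right toward the reward, with no "intent".
print([max(ACTIONS, key=lambda act: Q[(s, act)]) for s in range(N_STATES)])
```

For the legal point above: the `reward` function is a perfectly precise "goal" in the engineering sense, yet it maps onto none of the law's usual markers of intent.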
The world will need to adopt a standard under which AI manufacturers and developers agree to abide by general ethical guidelines, for instance through a technical standard mandated by treaty or international regulation. Such a standard would apply only when it is foreseeable that the algorithms and data could cause harm.
Meanwhile, the complexity behind the creation of an AI, paired with the system's automation and machine learning, could make it difficult to determine who is at fault if something catastrophic goes wrong.
However, if we create a world organization that bans the production of uncertified AI systems, we will give AI developers a strong incentive to incorporate safety features and to internalize the external costs that AI systems generate.
For such an organisation to actually have any power, we will most likely need some sort of government involvement.
It is vital that careful scrutiny of the ethical, legal and societal dimensions of artificially intelligent systems begins now.
Our smartphones are increasingly giving us advice and directions based on their best Internet searches.
Regardless, we are entering an era where we will rely upon autonomous and learning machines to perform an ever-increasing variety of tasks. At some point, the legal system will have to decide what to do when those machines cause harm and whether direct regulation would be a desirable way to reduce such harm.
It will be very interesting to see the position taken by the insurance industry in relation to AI and robotics as both the technology and the law develop.
(Once the parties bearing the ultimate responsibility have been identified, their liability should be proportional to the actual level of instructions given to the robot and to its degree of autonomy.)
Increasingly capable and ubiquitous AI systems will have a huge effect on society over the coming decades.
Dealing comprehensively with AI cannot be left to take care of itself: not to the free marketplace, not to any one set of values, any one country, any one of the tech monopolies, any one obsolete world organisation, any one algorithm, or any one post.
There being no consensus, we should avoid strong assumptions regarding upper limits on future AI capabilities.
I for one have no ambition to live in a world run by Google.
All comments and suggestions welcome. All 'like' clicks will be chucked in the bin.
Unfortunately, as with most problems in the world, greed drives just about everything.