(A thirty-minute read)
Applications for robot technology patents have tripled within a decade. Last year nearly a quarter of a million robots were sold worldwide, a record according to the International Federation of Robotics.
There is apparently no end in sight to this growth: worldwide, it could mean as many as 2.3 million robots in operation by 2018, twice as many as there were in 2009.
As with many changes driven by technology, the question is not if but when we will see the first applications in our daily lives.
It is easy to focus only on our material success rather than the deeper aspects of what makes us human. The open question is just what the Technological Revolution is doing to all of us.
If we don’t know ourselves, how will machines know what we value?
If we don’t find good defenses against exploitative algorithms, there will come a time when machine-learning algorithms make not just us but the whole world weep.
Yes, the world could and should strive to develop technology that takes hundreds of millions out of poverty, reduces our reliance on cheap carbon-based fossil fuels, reverses climate change, conquers cancer, and so on.
However, the ultimate barrier to achieving a decent life for all is neither technological nor environmental; it is our unwillingness to share.
As I have argued in many previous posts, there is only one way to achieve sharing: we must place a world aid commission of 0.05% on all items that seek profit for profit’s sake. (See previous posts.)
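To make the arithmetic concrete, here is a minimal sketch of how such a levy might be computed. It is purely illustrative: the transaction values and the name WORLD_AID_RATE are my own assumptions, not part of any existing scheme.

# Minimal sketch, assuming a flat 0.05% commission on each profit-seeking transaction.
WORLD_AID_RATE = 0.0005  # 0.05% expressed as a fraction

def world_aid_commission(transaction_value: float) -> float:
    """Return the commission due on a single transaction."""
    return transaction_value * WORLD_AID_RATE

# Hypothetical transactions, in any single currency.
transactions = [1_000.00, 250_000.00, 3_500_000.00]
for value in transactions:
    print(f"transaction {value:>12,.2f} -> commission {world_aid_commission(value):>10,.2f}")
print(f"total raised: {sum(map(world_aid_commission, transactions)):,.2f}")

On these made-up figures the levy is tiny for any single participant (0.50 on a 1,000 transaction) but aggregates quickly across millions of transactions, which is the point of the proposal.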
Unfortunately, although this is possible to achieve with technology, it will never happen thanks to our “I’m all right, Jack” world.
One way or the other, it is time we started asking these questions.
What are the respective roles of public research and private research?
What kinds of cooperation exist between the two sectors?
What are the priorities for investment in artificial intelligence research?
What ethical, legal, and policy principles should guide these new technologies?
And finally, should regulation take place at the national, EU, or international level?
Why should we be asking these questions?
Because: We don’t realize (given the woeful state of geopolitics, and of lawmakers and politicians) what damage social media and its profit algorithms are currently inflicting on society.
Because: AI is the CATALYST FOR A MASSIVE PANDORA’S BOX, and we will need to come to terms with it.
Because: Social media platforms allow individuals to reach thousands of people via a single post, making their views readily accessible to a potentially vast audience.
Because: The computer revolution is over.
Because: Now is a good time to start paying attention.
For now, there are many more questions than answers.
For instance:
WE ARE ONLY BEGINNING TO SCRATCH THE SURFACE OF THE PROBLEMS AND OPPORTUNITIES AI POSES TO ALL OF US.
So what is the legal definition of “smart autonomous robots”?
Is it industrial robots installed on factory floors, carrying out repetitive tasks?
Is it professional service robots used outside traditional manufacturing, like surgical robots in hospitals or milking robots on farms?
Is it consumer robots such as vacuum cleaners and lawn mowers?
Is it software-based AI that helps doctors improve their diagnoses, or the recommendation systems on shopping websites?
Is it the sophisticated sensors and AI-based software increasingly used to make all kinds of devices and objects around us intelligent?
None of these visions, for all their impact on our society and economy, addresses the very profound ethical questions.
FOR INSTANCE:
THE LEGAL CHALLENGES.
The next major way in which social media will change the court system relates to its impact on court procedure and the law: the impact of the Internet on traditional legal principles, legal research and case management.
WHO IS GOING TO MAKE A CONTRACT WITH A MACHINE THAT IS DRIVEN BY A SELF-LEARNING ALGORITHM?
WHO IS GOING TO BE RESPONSIBLE WHEN A SELF-DRIVING CAR KILLS SOMEONE, OR IN A MYRIAD OF OTHER LEGAL SCENARIOS?
It is clear, however, that the European Parliament is making inroads towards taking an AI-centric future seriously. Its Legal Affairs Committee recently presented a report on civil law rules on robotics, and MEPs have suggested a mandatory insurance scheme under which the manufacturer of an autonomous robot would have to arrange insurance against any ill effects of its creations.
Last month, in a 17-2 vote, the committee voted to begin drafting a set of regulations to govern the development and use of artificial intelligence and robotics: a European agency for AI and robotics, a registration system for the most advanced robots, and a mandatory insurance scheme for companies to cover damage and harm caused by robots.
(The report is very timely and points at some crucial issues that need to be addressed, e.g. how to enforce ethical standards or establish liability for accidents involving driverless cars.)
The EU has also set up SPARC, the public-private partnership for robotics in Europe, to develop a European robotics strategy. With €700 million of EU funding and, once private investment is added, an overall investment of €2.8 billion, SPARC is by far the biggest civilian research program in this area in the world.
To my mind, the regulation and registration of profit algorithms is essential before we find ethical theories turning into decision procedures, even algorithms.
The prospect of reducing ethics to a logically consistent principle or set of laws is suspect, given the complex intuitions people have about right and wrong.
Trust and cooperation cannot be built by the dogmatic imposition of one framework over another or through the rigid application of one view of what is ethically “correct.” Rather, they require the capacity to see the other’s point of view.
Perhaps one might have come to a similar conclusion through just thinking about the moral decision-making of humans, irrespective of autonomous machines.
However, reflection on a comprehensive approach toward teaching robots right from wrong has demanded attention to aspects of moral decision-making that people normally take for granted in their daily, frequently less-than-perfect attempts to behave ethically toward each other.
Humans have always looked around for company in the universe.
Their long fascination with nonhuman animals derives from the fact that animals are the things most similar to them. The similarities and the differences tell humans much about who and what they are.
As AMAs (artificial moral agents) become more sophisticated, they will come to play a corresponding role as they reflect humans’ values. For humanity’s understanding of ethics, there can be no more important development.
It seems to me that over the past forty years or so, as technology has advanced exponentially, people in general do not feel better about their lives, and may even feel worse because they are not reaching the levels they had hoped to achieve.
Even if you discount the utopian and dystopian hyperbole, the 21st century will be defined not just by advances in artificial intelligence, robotics, computing and cognitive neuroscience, but by how we manage them.
With each new advancement in AI and robotics, we are brought closer to a reckoning not just with ourselves, but over whether our laws, legal concepts, and the historical, cultural, social and economic foundations on which they are premised are truly suited to addressing the world as it will be, not as it once was.
The conclusion is that up to now humans have enjoyed the exclusive claim to biological intelligence and all future intelligence must be judged against that benchmark.
Indeed, our religious and philosophical beliefs revolve around the idea that we are special.
It is incumbent upon all of us to engage with what is going on, to understand its implications and to begin to reflect on whether efforts such as the European Parliament’s are nothing more than pouring new wine into old wine skins.
There is no science of futurology, but we can better see the future, and understand where we might end up in it, by focusing more intently on the present and the decisions we have made as a society when it comes to technology.
As a society we have made no real democratic decisions about technology, but have more or less been forced to accept that certain things enter our world and that we must learn to harness their benefits or get left behind, and, of course, that we must deal with their fallout.
Indeed, AI has over-promised in the past, and therefore any decision should be based on factual information rather than unrealistic expectations of the technology.
These are only some of the issues that AI Algorithms present.
Prioritizing Human well-being in the Age of Artificial Intelligence is for me what it is all about.
In a world that is heading rapidly towards one in which we can’t think for ourselves, a world already plagued by tweets that are both malicious and false, should robotic copies of humans have human rights?
SOME WILL SAY YES: IF THEY ARE AS INTELLIGENT AS US, THEY HAVE A LEGAL RIGHT.
But who is responsible for robotic devices capable of killing? Should the laws of war change?
WHO should be allowed to vote? If a robot is the property of its “owner”, does it have any greater moral claim to a vote than, say, your cat?
WHO IS GOING TO BE RESPONSIBLE FOR ALGORITHMS THAT ARE SOLELY PROFIT-SEEKING OR RACIALLY BIASED?
OUR LEGAL SYSTEMS ARE ALREADY STRUGGLING WITH SOCIAL MEDIA.
The use of social media is having an adverse impact on the administration of justice in relation to the fairness of criminal trials, the right to anonymity and the integrity of judicial orders in criminal proceedings.
The principal problem for courts is not the technology of social media, but (i) how the powerful tools it offers are redefining interactive communications between courts and the public, and (ii) how most courts, apart from those few on the cutting edge, are being compelled to respond to this constantly evolving electronic interactive communications platform, sometimes against their will.
Electronically-based communication will not only affect how proceedings are case managed and run; it also will have an impact on judgment style and publishing, as judgments become available to a global social media audience.
Social media also may foster changes in certain legal principles and causes of action. There will be new crimes and torts, discovery and court management issues, and new courtroom set-ups – perhaps even “virtual” ones.
Social media’s impact on the court is not simply as a new means for publishing judgments and information, but also on how judges and courts perform their activities in an electronically-connected community where the users of the system can, and will, respond directly to how justice is being administered.
The fundamental right to a fair trial does not change in the face of any new means of communication.
Rules can and must, however, reflect this new reality.
Although social media use is commonplace in business and homes, it raises questions about its impact on judicial independence and about the desirability of judicial or court use of this informal, public form of communication.
Contempt of court laws are designed to prevent trial by media; are they able to protect against trial by social media?
We are definitely in a different world when social networks are affecting justice.
We also have to contemplate the possibility that responsible jurors, not looking for anything about a case, might simply stumble upon commentary in their normal social media use if it is widespread enough. That is the world in which we now live and the world we have to deal with.
Social media is a great tool for the mass dissemination of information, but it is also a tool for spreading false information and false claims.
We need to strike a balance between the rights of the individual to express their views via social media and the protection of fairness in criminal proceedings.
- ‘Who, when, what’ guidelines to be developed for using social media in courtrooms.
- The justice system must “catch up with the modern world”.
In Australia and New Zealand, courts have set up social media accounts, allowed social media reports of court proceedings and dealt with the tender of social media evidence in a wide range of civil and criminal proceedings.
Setting up a court Twitter or Facebook account seems straightforward; what sort of organization would refuse to be part of a means of communication used by everyone else? But it leads to the next issue courts must determine, namely whether managerial techniques appropriate to other parts of the public sector are appropriate for courts.
Are the judgments of courts part of the community’s business and social activities in which the service user has a say, or is the court’s role “part of a broader discourse by which a society and polity affirm its core values, apply them and adapt them to changing circumstances” in a manner which is without parallel to other parts of the public sector?
Add to this the related technologies already reaching the courts:
• Mobile computing and wireless technology.
• Interconnectivity, notably ‘the Internet of Things’ and cloud computing.
• “Big data” analysis (e.g. the use of “predictive coding” in discovery).
• Electronic records management systems (“ERMS”) for retention of electronically stored information (“ESI”).
It is unlikely that the search and social media giants are going to change their indexing and ranking procedures anytime soon.
It is easy to see how people may be confused into thinking that robots express emotions, when they are actually machines and have no feelings.
If we can’t stop its progress, we had better be involved in it, to ensure it is not shaped on the terms of others, based on their values.
Somebody is paying for the development of robotics, so the system must be one that gives them legal certainty.
“Is everything that is feasible also desirable, and how can we avoid unintended consequences of robotics and artificial intelligence?”
The sooner we require all AI programs to be vetted and registered, the better. Alongside this, the notion of liability must evolve to define accountability between a robot, its operator and its software algorithms.
The shady (indeed illegal) nature of the businesses which created social media (as well as most other 20th-century communications developments), the security risks, and the interactive nature of social media render its use by courts, and in particular by judges, a two-edged sword.
A search for “global warming,” for example, may reveal different results for different users depending on which websites are bookmarked, which political blogs are visited, or even what groups the users belong to on Facebook.
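As a rough illustration of how that kind of personalization can reorder what two people see, here is a minimal sketch in which hypothetical user signals (bookmarked sites, group topics) re-rank otherwise identical results. The scoring weights, names and data are my own assumptions for illustration only; this is not any search engine’s actual algorithm.

# Minimal sketch of personalized result ranking; purely illustrative.
from dataclasses import dataclass, field

@dataclass
class UserProfile:
    bookmarked_sites: set = field(default_factory=set)
    group_topics: set = field(default_factory=set)

def personalized_score(result_url: str, result_topics: set, base_score: float,
                       user: UserProfile) -> float:
    """Boost a result if it matches the user's bookmarks or group interests."""
    score = base_score
    if any(site in result_url for site in user.bookmarked_sites):
        score += 0.5  # assumed bookmark boost
    score += 0.2 * len(result_topics & user.group_topics)  # assumed topic boost
    return score

# Two hypothetical users issue the same "global warming" query over the same results.
results = [
    ("https://sceptic-blog.example/global-warming", {"scepticism"}, 1.0),
    ("https://climate-science.example/report", {"climate science"}, 1.0),
]
for user in (UserProfile({"sceptic-blog.example"}, {"scepticism"}),
             UserProfile(set(), {"climate science"})):
    ranked = sorted(results, key=lambda r: personalized_score(r[0], r[1], r[2], user),
                    reverse=True)
    print([url for url, _, _ in ranked])

The same query, scored against two different profiles, comes back in two different orders; that is the filter-bubble effect in miniature.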
Robotics and artificial intelligence are the cornerstone technologies for Google, Amazon and Facebook; everyone is jumping onto artificial intelligence at the moment. There used to be a line between what you could say and what you couldn’t; not any more, in the full glare of the new social media world.
Google’s enormous legal resources, and its well-documented scepticism in response to court-imposed judgments, are a case in point.
Justice by algorithm.
Robots can now interact with humans in different roles, with their programmed empathy. As they say, “information is power”; this is why transparency is something so many seek. The biggest roadblocks will come from those who have created, and benefited from, these systems.
I am not a technologist, nor a keeper of the law.
Algorithms for profit are creating an imbalance in society, where each person begins to seek justice individually, according to their personal understanding, instead of according to shared values and beliefs. That will be a dangerous society to live in.
All human comments appreciated. All like clicks chucked in the bin.