
(A twelve-minute read for all programmers and code writers.)

I think most people are worrying about the wrong things when they worry about robots and AI.

However, with AI and robotics positioned to impact all areas of society, we would be remiss not to set things in motion now to prepare for a very different world in the future.

The danger is not AI itself but rather what people do with it. The repercussions of AI technology are going to be profound, and, limited as we are by biological evolution, we will be unable to keep up.

So we have all been making a very basic mistake when it comes to artificial intelligence.

Like every advance in technology, AI has the potential to do amazing things. On the other hand, it also has the potential to do dangerous things, and there is little that can be done to stop or rectify the damage once it is unleashed. Its use in weapons is one example.

(Recently I read that it is now almost possible to physically build a computer out of DNA molecules: a computer that can be programmed to compute anything any other device can compute.

Electronic computers are one form of universal Turing machine (UTM). No quantum UTM has yet been built; if one were, it would outperform all standard computers on significant practical problems. A DNA computer's 'magical' property comes from a different source: its processors are made of DNA molecules rather than silicon chips, whereas every electronic computer has a fixed number of chips.

So what?

As DNA molecules are very small, a desktop-sized machine could potentially utilize more processors than all the electronic computers in the world combined, and therefore outperform the world's current fastest supercomputer while consuming a tiny fraction of its energy.

It will certainly raise moral and philosophical issues that we should be concerned about right now.)
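
Before moving on, here is a rough order-of-magnitude sketch of the scale claim in that aside (one gram of DNA versus all the world's electronic computers). The strand length, per-nucleotide mass and device count below are loose illustrative assumptions I have chosen for this sketch, not measurements.

```python
# Back-of-envelope: 100-nucleotide DNA strands ("processors") per gram of DNA,
# versus a loosely assumed number of electronic computers in the world.
AVOGADRO = 6.022e23          # molecules per mole
NT_MASS_G_PER_MOL = 330.0    # rough average mass of one nucleotide (g/mol)
STRAND_LENGTH_NT = 100       # assumed strand length

strands_per_gram = AVOGADRO / (STRAND_LENGTH_NT * NT_MASS_G_PER_MOL)
computers_in_world = 1e10    # loose assumption: roughly ten billion devices

print(f"DNA strands per gram of DNA : {strands_per_gram:.1e}")   # ~1.8e19
print(f"Assumed electronic computers: {computers_in_world:.1e}")
print(f"Ratio                       : {strands_per_gram / computers_in_world:.1e}")  # ~1e9
```

Even with these crude numbers, a single gram of DNA holds on the order of a billion times more potential "processors" than there are electronic computers, which is why the claim is at least plausible on paper.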

Back to today:

It's no longer a question of whether or when artificial intelligence will change our lives, but of how it will change them and who is going to be held responsible.

We are at a crossroads. We need to make decisions. We must re-invent our future.

It is the role of AI in future, truly hybrid societies (socio-cognitive-technical systems) that will be the real game changer.

The real potential of AI includes not only the development of intelligent machines and learning robots, but also how these systems influence our social and even biological habits, leading to new forms of organization, perception and interaction.

In other words, AI will extend and therefore change our minds.

Robots are things we build, and so we can pick their goals and behaviours.  Both buyers and builders ought to pick those goals sensibly, but people who will use and buy AI should know what the risks really are.

Understanding human behaviour may be the greatest benefit of artificial intelligence if it helps us find ways to reduce conflict and live sustainably.

However, knowing full well what an individual person is likely to do in a particular situation is obviously a very great power. Bad applications of this power include deliberately addicting customers to a product or service, skewing vote outcomes by disenfranchising some classes of voters (convincing them their votes don't matter), and even just old-fashioned stalking.

Machines might learn to predict our every move or purchase, and governments might try to put the blame on robots for their own unethical policy decisions.

It's pretty easy these days to guess where someone will be, and when.

Robots, artificial intelligence programs, machine learning, you name it: all of them seem to be treated as if they were responsible for themselves.

However, our control of machines and devices is increasingly delegated rather than direct. That fact needs to be at least transparent enough that we can handle the cases when components of the systems our lives depend on go wrong.

In fact, robots belong to us. People, governments and companies build, own and program robots. Whoever owns and operates a robot should be responsible for what it does. AI systems must do what we want them to do.

In humans, consciousness and ethics are associated with our morality, but that is because of our evolutionary and cultural history. In artefacts, moral obligation is not tied by either logical or mechanical necessity to awareness or feelings. This is one of the reasons we shouldn't make AI responsible: we can't punish it in a meaningful way, because good AI systems are designed to be modular, so the "pain" of punishment could always be excised, unlike in nature.

We must get over our over-identification with AI systems and start demanding that all technology be designed for the betterment of humanity and the world we live in, that it be verified as AI-safe, and that companies make visible the AI they are embedding in their products.

We need a world organisation, totally transparent and accountable, to vet all technology in order to minimise social disruption and maximise social utility. It should ensure that:

  • Robots should not be designed as weapons, except for national security reasons.
  • Robots should be designed and operated to comply with existing law, including privacy.
  • Robots are products: as with other products, they should be designed to be safe and secure.
  • Robots are manufactured artefacts: the illusion of emotions and intent should not be used to exploit vulnerable users.
  • It should be possible to find out who is responsible for any robot.
  • Robots should not be human-like because they will necessarily be owned.
  • Robots do not need to have a gender. We should consider how our technology reflects our expectations of gender. Who are the users, and who gets used?
  • We should not create a legal status for robots that would dub them "electronic persons," implying that machines have legal rights and obligations to fulfil; that would mean robots having to take responsibility for the decisions they make, especially if they have autonomy.
  • We should insist on a kill switch for all robots that would shut down all functions if necessary (a minimal sketch of such a switch follows this list).
  • We should have restrictions on robots to ensure they obey all commands, unless those commands would force them to do physical harm to humans or to themselves, through action or inaction.
  • We should not use robots to reason about what it means to be human; calling them "human" dehumanizes real people. Worse, it gives people an excuse to blame robots for their actions, when really anything a robot does is entirely our own responsibility.
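
On the kill-switch point, here is a minimal sketch of what such an emergency stop could look like in software. Everything in it (the RobotController class, the actuator names, the loop timing) is hypothetical and purely illustrative; the idea is only that every control loop checks one shared stop flag, so a single call halts all functions.

```python
import threading
import time

class RobotController:
    """Illustrative controller: every control loop checks a single shared e-stop flag."""

    def __init__(self):
        self._estop = threading.Event()   # the "kill switch"
        self._threads = []

    def _actuator_loop(self, name):
        # Hypothetical control loop; a real robot would command motors here.
        while not self._estop.is_set():
            print(f"{name}: running")
            time.sleep(0.5)
        print(f"{name}: halted by kill switch")

    def start(self, actuators=("arm", "wheels")):
        for name in actuators:
            t = threading.Thread(target=self._actuator_loop, args=(name,), daemon=True)
            t.start()
            self._threads.append(t)

    def kill(self):
        """Shut down all functions: one call stops every loop."""
        self._estop.set()
        for t in self._threads:
            t.join()

if __name__ == "__main__":
    robot = RobotController()
    robot.start()
    time.sleep(2)    # let it run briefly
    robot.kill()     # emergency stop
```

In a real system the same flag would presumably also cut actuator power in hardware, since a software-only switch can fail along with the software it is meant to stop.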

There are also ethical issues with AI, but they are all the same issues we have with other artefacts we build and value or rely on, such as fine art or sewage plants.

  • Yesterday, the European Parliament’s legal affairs committee voted to pass a report urging the drafting of a set of regulations to govern the use and creation of robots and AI.
  • A robot's legal liability may need to be proportionate to its level of autonomy and "education," with the owners of robots with longer training periods held more responsible for those robots' actions.
  • A big part of the responsibility also rests on the designers behind these sophisticated machines, with the report suggesting more careful monitoring and transparency. This could be done by providing access to source code, by registering machines, and by forming an ethics committee to which creators might be required to present their designs before building them (a toy registry along these lines is sketched after this list).
  • We should have a league of programmers dedicated to opposing the misuse of AI technology to exploit people's natural emotional empathy.
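
To illustrate the registration idea above, and the earlier principle that it should be possible to find out who is responsible for any robot, here is a toy sketch. The RobotRecord fields, the registry, the serial numbers and the URL are all hypothetical; the point is simply that a public mapping from machine to responsible owner and manufacturer makes accountability a lookup rather than a mystery.

```python
from dataclasses import dataclass

@dataclass
class RobotRecord:
    """Hypothetical registry entry tying a machine to the people answerable for it."""
    serial: str
    manufacturer: str
    owner: str
    source_code_url: str   # where a regulator could inspect the code

# Toy public registry: serial number -> record.
REGISTRY = {}

def register(record):
    """Manufacturers and owners file a record before the robot is deployed."""
    REGISTRY[record.serial] = record

def responsible_party(serial):
    """Anyone affected by a robot can look up who is answerable for it."""
    record = REGISTRY.get(serial)
    return record.owner if record else "unregistered: liability falls on the deployer"

register(RobotRecord("RX-0042", "Acme Robotics", "Metro Delivery Ltd",
                     "https://example.org/rx-0042/source"))
print(responsible_party("RX-0042"))   # Metro Delivery Ltd
print(responsible_party("ZZ-9999"))   # unregistered: liability falls on the deployer
```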

As AI gets better, these issues become more serious.

So, to wrap up this blog:

First, there are many reasons not to worry. However, it is not enough for experts to understand the role of AI in society; it is also imperative to communicate this understanding to non-experts.

Secondly, we shouldn’t ever be seen as selling our own data, just leasing it for a particular purpose.

This is the model software companies already use for their products; we should just apply the same legal reasoning to us humans. Then, if we have any reason to suspect our data has been used in a way we didn't approve, we should be able to prosecute. That is, the applications of our data should be subject to regulations that protect ordinary citizens from the intrusions of governments, corporations and even friends.
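
As a thought experiment, the leasing model might be expressed roughly like the sketch below. The DataLease record, its fields and the example parties are hypothetical; the point is only that access to personal data would be bound to an explicit lessee, purpose and expiry, so any use outside those terms is an identifiable breach one could prosecute.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass(frozen=True)
class DataLease:
    """A hypothetical purpose-bound lease on a piece of personal data."""
    subject: str        # whose data it is
    lessee: str         # who may use it
    purpose: str        # the single purpose the lease covers
    expires: datetime   # when the lease lapses

    def permits(self, requester, purpose, when):
        """True only if the requester, purpose and time all match the lease terms."""
        return (
            requester == self.lessee
            and purpose == self.purpose
            and when < self.expires
        )

# Example: data leased for ad personalisation cannot be reused for vote targeting.
lease = DataLease(
    subject="alice",
    lessee="acme_corp",
    purpose="ad_personalisation",
    expires=datetime.now() + timedelta(days=30),
)

print(lease.permits("acme_corp", "ad_personalisation", datetime.now()))  # True
print(lease.permits("acme_corp", "vote_targeting", datetime.now()))      # False: a breach
```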

These problems are so hard, they might actually be impossible to solve.

But building and using AI is one way we might figure out some answers. If we have tools to help us think, they might make us smarter. And if we have tools that help us understand how we think, that might help us find ways to be happier.

The idea that robots, being authored by us, would ever be anything other than owned is completely bonkers. It is the duty of all of us to make sure AI researchers ensure that the future impact is beneficial: not making robots into others, but accepting them as part of ourselves, as artefacts of our culture rather than as members of our in-group.

Unfortunately, it’s easier to get famous and sell robots if you go around pretending that your robot really needs to be loved, or otherwise really is human – or superhuman!

Just because a robot was shaped like a human, and because they had watched Star Wars, passers-by thought it deserved more ethical consideration than they gave homeless people, who were actually people.

Because we build and own robots, we shouldn’t ever want them to be persons.

I can hear you saying that our society faces many hard problems far more pressing than the advance of artificial intelligence. But AI is here now, and even without it, our hyperconnected socio-technical culture already creates radically new dynamics and challenges for both human society and our environment.

AI and computer science, particularly machine learning but also HCI, are increasingly able to support research in the social sciences. Fields that are benefiting include political science, economics, psychology, anthropology and business/marketing. All true, but automation also causes economic inequality.

Blaming robots is insane, and taxing the robots themselves is insane.

This is insane because no robot comes spontaneously into being. Robots are all constructed, and the ones that have an impact on the economy are constructed by the rich, which is creating a fundamental shift in the power and availability of artificial intelligence and in its impact on everyday lives. It creates a moral hazard to dump responsibility into a pit that you cannot sue or punish.

Some people really expected AI to replace humans. These people don't have enough direct, personal experience of AI to really understand whether or not it was ever human in the first place.

There is no going back on this, but that isn’t to say society is doomed.

The word "robot" is derived from the Czech robota, meaning forced labour: in effect, slave labour.

Let's keep it that way: I am all for technological self-reproduction – slaves.

Unless we can recalibrate our tendency to exploit each other, the question may not be whether the human race can survive the machine age, but whether it deserves to.
