(A three-minute read)

I AM SURE THERE IS NO NEED HERE FOR ME TO REMIND YOU THAT TECHNOLOGY IS NOT ONLY CHANGING THE WAY WE CONDUCT OUR LIVES BUT THE WAY WE WILL EXIST IN THE FUTURE. 

There is a wonderful aspiration introduced by the writer Isaac Asimov in 1942:

A robot may not injure a human being or, through inaction, allow a human being to come to harm; a robot must obey the orders given it by human beings, except when such orders would conflict with the previous law; and a robot must protect its own existence as long as such protection does not conflict with the previous two laws.

I call the above an aspiration because, as we all know, it will never happen.

We now have AI learning from AI, and AI engaging in cyberbullying, stock manipulation and terrorist threats. We also have AI surveillance, both private and public, collecting data with or without permission.

A.I. systems don’t just produce fake tweets; they also produce fake news videos.

What, exactly, constitutes harm when it comes to A.I.?  No one knows.

AI systems are already no longer limited to a single set of tasks.

On the other hand, we need AI to tackle the world’s future problems, such as climate change, threats from space, immigration, and sustainability itself.

There is no argument here: regulation of what I call essential AI should be avoided.

However, “my A.I. did it” should not excuse illegal behavior by AI built for profit or exploitation.

An A.I. system must clearly disclose that it is not human. As we have seen in the case of bots — computer programs that can engage in increasingly sophisticated dialogue with real people — society needs assurances that A.I. systems are clearly labeled as such.

We must ensure that people know when a bot is impersonating someone.

We must ensure that an A.I. system cannot retain or disclose confidential information without explicit approval from the source of that information. Because of their exceptional ability to automatically elicit, record and analyze information, A.I. systems are in a prime position to acquire confidential information.

We must ensure that an A.I. system must be subject to the full gamut of laws that apply to its human operator. This rule would cover private, corporate and government systems.

Unfortunately none of the above is possible.

This is why I favor (in the interest of caution) establishing a World Technological Strong Room, just like the world seed bank, where all software programmes are held and available to everyone.

Of course, it would be totally naive to think that all AI could be subject to scrutiny.

It is not military or intelligence AI that I am talking about; it is AI created for exploitation and profit.

Rather than regulating what AI systems can and cannot do, which would only make AI more expensive to develop, the strong room would hold the founding programme.

Artificial intelligence systems are now learning from each other and have the potential to change how humans do just about everything. That is why we must ensure all AI has an impregnable “off switch.”

How can this be achieved, and by whom?

This is where I am open to suggestions.

It could be a United Nations Cloud Strong Room, run by a world people’s algorithm that copies all existing software and algorithms.

Companies making and selling AI software will need to be held responsible for potential harm caused by “unreasonable practices.” Any sufficiently transformative technology is going to require new laws, and that new legislation is not imminent.

All human comments appreciated. All like clicks chucked in the bin.
