
(Seven-minute read)

Who programs the programmers?

Soon enough, it might not be people behind the development of advanced machine learning and artificial intelligence, but other AI.

This will drastically reduce the human input required.

We must not be blinded by science, nor held captive by unfounded or fantastic fears. I have previously posted blogs putting the case that all technology (whether it be atomic energy or nanotechnology, bioengineering or DNA manipulation, or artificial intelligence) should be subject to examination by a new world organisation that is totally independent and transparent.

(It is imperative that we do not leave such examinations to the whims of the marketplace, nor to the cost-benefit calculations of a given quarter, lest artificial intelligence lull us into complacency.)

I have also stated that I am pro all technology that benefits mankind as a whole. However, it is critical that the individuals on the front lines of research think about the implications of their work.

The other day, on arrival at Gatwick, I was admitted into the UK by an algorithm.

Since this algorithm was, by definition, focused on a narrowly defined problem, it got me thinking: who or what wrote the software in the first place?

The ethics of artificial intelligence are non-existent.

Whether we are aware of it or not, we are already moving into the era of AI, where IBM’s Watson, Google’s AI, Apple’s Siri and Amazon’s Echo will be your new companions.

Once AI can analyze a person’s affective state it will be able to influence it.

Humans are driven by emotions, which makes emotion a crucial component of perception, decision-making, learning, and more.

Artificial intelligence is not yet emotional in the ways that humans are, but with all the data being collected it won’t be long before machines can prompt certain responses and induce desired emotions.

Creepy, or worse, predatory.

What happens when one of the human negotiators has an emotionally aware assistant in its corner?

Every decision that mankind makes is going to be informed by a cognitive system like Watson. That future is actually much closer than you think.

To be or not to be. “Are you a robot?” “What?! No, I am a real person.”


For example:

Militaries are among the most intense users of high technology, and the adoption of that equipment has transformed decision-making throughout the chain of command, removing human beings from the act of killing and from war itself.

There must be a way to ensure that artificial intelligence, whatever field of technology it is introduced into, is not dominated by those who have a stake in expanding AI for profit’s sake.

There is no excuse for not being aware of the risks that such AI carries for all of us.

These questions have been with us for a long time:

Alan Turing asked in 1950 whether machines could think, and that same year the writer Isaac Asimov contemplated what might happen if they could in “I, Robot.” (In truth, thinking machines can be found in ancient cultures, including those of the Greeks and the Egyptians.)

About 30 years ago, James Cameron served up one dystopia created by AI in “The Terminator.” Science fiction became fact in 1997 when IBM’s chess-playing Deep Blue computer beat world champion Garry Kasparov.

As the Internet and digital systems penetrate further each day into our daily lives, concerns about artificial intelligence (AI) are intensifying.

It is difficult to get exercised about connections between the “Internet of Things” and AI when the most visible indications are Siri (Apple’s digital assistant), Google translate and smart houses, but a growing number of people, including many with a reputation for peering over the horizon, are worried.

Nevertheless, a debate about prospects and possibilities is worthwhile.

We need to ensure that boundaries are set, not just for research but for all the applications of artificial intelligence.


Recently, there has been a growing chorus of concern about the potential for AI.

It began last year when inventor Elon Musk, a man who spends considerable time on the cutting edge of technology, warned that with AI “we’re summoning the demon. In all those stories with the guy with the pentagram and the holy water, he’s sure he can control the demon. It doesn’t work out.” For him, AI is an existential threat to humanity, more dangerous than nuclear weapons.

The possibilities created by “big data” are driving increasing automation and in some cases AI in the office environment.  Legal and administrative frameworks to deal with the proliferation of these technologies and AI have not kept pace with their application. Ethical questions are often not even part of the discussion.

And since researchers’ focus tends to be on narrowly defined problems, others who can address larger issues should join the discussion. This process should occur for all such technologies.

A month later, distinguished scientist Stephen Hawking told the BBC that he feared that the development of “full artificial intelligence” could bring an end to the human race. Not today, of course, but over time, machines could become both more intelligent and physically stronger than human beings. Last month, Microsoft founder Bill Gates joined the group, saying that he did not understand people who were not troubled by the prospect of AI escaping human control.

More recently, Google’s AlphaGo software beat South Korean Go champion Lee Sedol in a series of matches pitting human against software, in a board game that apparently has more possible positions than there are atoms in the universe.

What is most remarkable about AlphaGo, unlike Deep Blue before it, is that it was not specifically programmed to play Go: it learned the game using a general-purpose algorithm.

The big question is what, if anything, can be done. Or is it too late?

None of the darker visions have deterred researchers and entrepreneurs from pursuing the field. It is hard to fear AI when the simplest demonstrations are more humorous than hair-raising.

The prevailing view among software engineers, who are writing the programs that make AI possible, is that they remain in control of what they program.

But are they really? I think not.


Even if true AI is a far-off prospect, ethical issues are emerging every day.

Artificial intelligence is now getting a foothold in people’s homes, starting with Amazon devices like the Echo speaker, which links to the personal assistant “Alexa” to answer questions and control connected devices such as appliances or light bulbs. Echo’s main advantage is that it connects to Amazon’s range of products and services, tending to tasks such as ordering goods, checking traffic, making restaurant reservations or searching for information. It also connects to various third-party services like Uber and Domino’s Pizza, so you can call for a car or a pizza delivery just by telling Echo what you want.

IBM’s Watson supercomputer systems, for example, offer “cognitive health” programs that can analyze a person’s genome and propose personalized treatment for cancer.

Google recently announced it had developed an algorithm which can detect diabetic retinopathy, a cause of blindness, by analyzing retina images.

Amazon is seeking to put AI to work in the supermarket—testing a system without cash registers or lines, where consumers simply grab their products and go, and have a bill tallied by artificial intelligence.

Facebook recently introduced its AI-based DeepText analytics engine, which is said to scan and understand the textual content of thousands of posts per second in more than 20 languages, all with near-human accuracy.

Machine learning is already used extensively on the social network to make sense of, and translate, some two billion News Feed items per day, and the company plans to use AI to recognise images and let users search for photos based on their content.

The artificial intelligence component in these programs aims to create a world in which everyone can have a virtual aide that gets to know them better with each interaction.

AI also lends smartphones its prowess: Google’s Allo messenger can, for example, suggest a meeting or deliver relevant information during a conversation. The goal is to infuse smartphones and other internet-linked devices with software smarts that help them think like people.

The prospect of AI escaping human control is advancing day by day.

Researchers most deeply engaged in this work are more sanguine. The head of Microsoft Research dismissed Gates’ concern, saying he does not think that humankind will lose control of “certain kinds of intelligences.” He instead is focused on ways that AI will increase human productivity and better lives.

At what cost?

No algorithm understands the unwritten social behaviours used in daily life, which can vary from one culture to another. More work needs to be done to improve “social intelligence”: understanding the subtleties of our everyday decisions.

However, the real question on everybody’s mind is this: is the rush to get to true AI another step towards Skynet, Terminators and HAL 9000?

Just ponder on this for a moment – if a computer could truly be “smart”, it would soon see that humans are basically the cause of most environmental problems and would come up with an extinction solution that would solve all issues in one fell swoop.

Humans are limited by slow biological evolution and would not be able to compete with software that can redesign itself and evolve faster than any human could.

So what is there to prevent AI from gaining sentience and killing us all?

How one can manage something that is sentient is another question altogether.

As industrial robots already replace us in tiresome and repetitive jobs, we might ask ourselves whether they are going to replace us in all domains.

The population of mobile devices now outnumbers humans and is multiplying faster than we are.

AI and automation provide an opportunity to move beyond business as usual. The global affective computing market is estimated at $9.3 billion a year; by 2020 it will be in the region of $50 billion.

It’s no wonder that the darker visions have not deterred researchers and entrepreneurs from pursuing the field.

We need to remain vigilant about the uses of and changes in AI, and perhaps even prepare ourselves for a new world in which a good part of ordinary information work will die out.

Let’s hope that, should this happen, it will be to the benefit of creative arts which remain entirely ours.

We might already be in the midst of creating a conscious entity of a whole new “utterly inhuman” kind. Now that would be scary.

Perhaps the only solution is a whistle-blower Algorithm.

All comments welcome; all like clicks chucked in the bin.
