(A fifteen-minute read)
John McCarthy, inventor of the programming language LISP, coined the term “artificial intelligence” in 1955. The notion of intelligent automata, as friend or foe, dates back to ancient times.
Given the state of the world we live in, you might think this a somewhat naive subject. If you are like me, you have little or no understanding of algorithms beyond the fact that they are beginning to reshape your daily life.
Ironically, in the age of the internet and unparalleled access to information, the most critical questions are out-of-bounds.
The web has broken down the boundaries between nations, so you can read a blog by anybody, anywhere in the world; yet our laws and governments remain confined within national borders. Beyond those borders we have very little effective governance, collaboration, co-operation or understanding.
Moreover, while we are clearly good at producing knowledge, using that knowledge – separating the wheat from the chaff and integrating it into something useful – is a big problem, particularly in fields such as global sustainability.
One of the things we ought not to do is to press full steam ahead on building super intelligence without giving thought to the potential risks. Even if the odds of a super intelligence arising are very long, perhaps it’s irresponsible to take the chance.
As far as I am aware, there are currently no regulations or laws governing the use of AI. It is penetrating every nook and cranny, de-privatizing us, turning us into data points at job interviews, with algorithms replacing the loan officer.
They are fundamentally reshaping the nature of work.
So what will happen when a computer becomes capable of independently devising ways to achieve goals? It would very likely be capable of introspection – and thus able to modify its own software and make itself more intelligent. In short order, such a computer would be able to design its own hardware, unconstrained by any laws, ethics or human morality.
A case in point is the area of autonomous weapon systems, i.e. drones.
While I am fully aware that the world faces many problems that Artificial Intelligence could help solve, we must, before it is too late, give AI a set of values. And not just any values, but those that are in the best interest of humanity. This is the essential task of our age, and since humans will never fully agree on anything, we will sometimes need AI to decide for us – to make the best decisions for humanity as a whole.
How, then, do we program those values into our (potential) super intelligences? What sort of mathematics can define them? These are a few of the problems.
We’re basically telling a god how we’d like to be treated. How to proceed?
“It’s tempting to dismiss the notion of highly intelligent machines as mere science fiction,” Hawking and others wrote in a recent article, “but this would be a mistake, and potentially our worst mistake ever.”
There is no doubt that, in many ways, AI innovations could simply help scientists do their jobs more efficiently – thereby cutting the crippling time lag between science and society. Such machines would have the insight and patience (measured in picoseconds) to solve the outstanding problems of nanotechnology and spaceflight; they would improve the human condition and might even let us upload our consciousness into an immortal digital form.
Algorithms that ‘learn’ from past examples relieve engineers of the need to write out every command.
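To make that idea concrete, here is a minimal sketch of “learning from past examples” instead of writing out every command: a one-nearest-neighbour classifier that labels a new point by copying the label of the closest example it has already seen. The data and labels are made up purely for illustration.

```python
def predict(examples, point):
    """Label a new point by copying the label of its closest past example."""
    nearest = min(
        examples,
        key=lambda ex: sum((a - b) ** 2 for a, b in zip(ex[0], point)),
    )
    return nearest[1]

# Past examples the algorithm "learns" from: (features, label).
# No rule for "spam" is ever written down; it is implicit in the examples.
examples = [
    ((1.0, 1.0), "spam"),
    ((1.2, 0.8), "spam"),
    ((8.0, 9.0), "not spam"),
    ((9.0, 8.5), "not spam"),
]

print(predict(examples, (1.1, 0.9)))  # a point near the first cluster
print(predict(examples, (8.5, 9.2)))  # a point near the second cluster
```

The engineer supplies examples rather than rules; the “command” for classifying new inputs emerges from the data itself.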
Indeed if humanity has to leave earth there will be a need for such machines.
For example, could machine learning algorithms delve deep into the previous five assessment reports of the Intergovernmental Panel on Climate Change and, based on research published since the last report, provide rudimentary conclusions of the sixth report?
Potential future uses of AI programs like AlphaGo could include improving smartphone assistants such as Apple’s Siri, medical diagnostics, and possibly even working with human scientists in research.
AI could have many benefits, such as helping to eradicate war, disease and poverty.
But if we want unlimited intelligence, we had better figure out how to align computers with human needs before the intelligence of machines exceeds that of humans – a moment that futurists call the singularity. It is vital that humans programme robots to understand the “full spectrum of human values”, because the stakes are very high. After all, if we develop an artificial intelligence that doesn’t share the best human values, it will mean we weren’t smart enough to control our own creations.
Technology is taking on increasingly personal roles in people’s daily lives, learning human habits and predicting people’s needs. Anyone with an iPhone is probably familiar with Apple’s digital assistant Siri.
For example, AI could make it easier for the company to deliver targeted advertising, which some users already find unpalatable. And AI-based image recognition software could make it harder for users to maintain anonymity online.
Looking at the current state of affairs, a 2013 study by Oxford University estimated that Artificial Intelligence could take over nearly half of all jobs in the United States in the near future. Automation has become an increasingly common sight: the number of robots in factories across the world rose by 225,000 last year and will rise even further in the coming years – and it is not just in manufacturing.
AI is only getting better, as computational intelligence techniques keep on improving, becoming more accurate and faster due to giant leaps in processor speeds.
Perhaps we should first ask, does science need disrupting? Yes.
Access to reliable knowledge – the academic literature – is becoming a fundamental bottleneck for humanity. There are now over 50 million research papers and this is growing at a rate of over one million a year. Over 70,000 papers have been published on a single protein – the tumor suppressor p53.
How can any academic keep up? And how can anyone outside academia – the public, policy makers, business people, doctors or teachers – make sense of it all? Well, most academics struggle, and the public can’t: most research is locked behind paywalls.
Techniques like deep learning allow a computer to recognize patterns in massive amounts of data. For example, in June 2012 Google created a neural network of 16,000 computers that trained itself to recognize a cat by looking at millions of cat images. Even so, when a computer recognizes a picture of a cat, the machine has no volition, no sense of what cat-ness is or what else is happening in the picture, and none of the countless other insights that humans have. Such techniques are nonetheless laying the groundwork for computers that can automatically increase their understanding of the world around them.
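The core mechanism behind such pattern recognition can be illustrated at toy scale: a single artificial neuron nudged by gradient descent until it separates two clusters of points. This is a deliberately tiny sketch of the principle – real deep learning stacks millions of such units – and all the numbers and data below are invented for the example.

```python
import math

def sigmoid(z):
    """Squash a number into the range (0, 1) - the neuron's 'confidence'."""
    return 1.0 / (1.0 + math.exp(-z))

# Training data: points labelled 0 or 1 depending on which cluster they sit in.
data = [((0.1, 0.2), 0), ((0.2, 0.1), 0), ((0.9, 0.8), 1), ((0.8, 0.9), 1)]

w = [0.0, 0.0]  # weights, adjusted by training
b = 0.0         # bias
lr = 1.0        # learning rate

for _ in range(1000):                             # repeated exposure to examples
    for (x1, x2), y in data:
        p = sigmoid(w[0] * x1 + w[1] * x2 + b)    # current guess
        err = p - y                               # how wrong the guess is
        w[0] -= lr * err * x1                     # nudge weights to reduce error
        w[1] -= lr * err * x2
        b -= lr * err

def classify(x1, x2):
    return 1 if sigmoid(w[0] * x1 + w[1] * x2 + b) > 0.5 else 0

print(classify(0.15, 0.15))  # a point near the 0-cluster
print(classify(0.85, 0.85))  # a point near the 1-cluster
```

The neuron never receives a rule for the two clusters; the repeated error-driven nudges to `w` and `b` carve the rule out of the data, which is the essence of what the cat-recognizing network did at vastly larger scale.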
However, possessing human-like intelligence remains a long way off, and what is called the “singularity” – when machine intelligence exceeds human intelligence – is still in the realms of science fiction.
That said, Stephen Hawking has warned that because people would be unable to compete with an advanced AI, it “could spell the end of the human race.”
We misunderstand what computers are doing when we say they’re thinking or getting smart.
Considering that the singularity may be the best or worst thing to happen to humanity, not enough research is being devoted to understanding its impacts.
In some areas, AI is no more advanced than a toddler.
Yet, when asked, many AI researchers admit that the day when machines rival human intelligence will ultimately come. The question is, are people ready for it?
Regardless of how artificial intelligence develops in the years ahead, almost all pundits agree that the world will forever change as a result of advances in AI.
The AI genie has already been released from the bottle and there is no way to get it back in.
No one is suggesting that anything like super intelligence exists now. In fact, we still have nothing approaching a general-purpose artificial intelligence or even a clear path to how it could be achieved. Recent advances in AI, from automated assistants such as Apple’s Siri to Google’s driverless cars, also reveal the technology’s severe limitations.
The problem is that a true AI would give any one of these companies (Microsoft, Apple, Google, Facebook, you name it) an unbelievable advantage.
For example, Google has the Google app, available for Android phones or iPhones, which bills itself as providing “the information you want, when you need it.”
Google now can show traffic information during your daily commute, or give you shopping list reminders while you’re at the store. You can ask the app questions, such as “should I wear a sweater tomorrow?” and it will give you the weather forecast. Given how much personal data from users Google stores in the form of emails, search histories and cloud storage, the company’s deep investments in artificial intelligence may seem disconcerting.
Advances in technology will push more and more companies to favour capital over labour, leaving the majority behind.
That may be about to change. Here below are five ways AI looks set to disrupt science.
1. Science mining #1: Iris.AI
2. Science mining #2: Semantic Scholar
3. From miner to scientist
4. Science media: Science Surveyor
5. Open Access AI: Open.ai
“The short-term impact of AI depends on who controls it; the long-term impact depends on whether it can be controlled at all.”
After all, AI systems aren’t consumers and consumers are the sine qua non of economic growth. Hairdressers are judged to be less likely to be out of a job in 20 years than economists.
Perhaps the problem lies in the term itself (Artificial Intelligence): AI intelligence will not necessarily lead to sentience.
But what if intelligent machines are really just a new branch on the tree of evolution that has led us from the original protists to where we are today?
A species to be aided in its evolutionary process by another species: us.
Then there is the idea that computers will eventually develop the ability to speak and think with a consciousness of their own.
It’s a race between technology and education.
The mindset of governments and people has not adjusted to this future, even though technology is exploding this decade into a world of the Internet of Things and the leap into artificial intelligence.
No one gains if the world’s intelligence ends up in the hands of a few.
As artificial intelligence becomes a much more “dominant” force in the future, it will pose “commercial and ethical questions”.
What, after all, is an android but a puppet with a computer program pulling its strings?
When I tell my phone I’m hungry and feel like eating Chinese, it raises a really interesting question: who is Siri working for? Is Siri working for me? Is it Siri’s job to find me the best Chinese meal, or is Siri working for Apple, trying to make as much money as possible by auctioning off the fact that it has a hungry consumer attached to it, desperate for food? The ethical debate is about who AI works for.
Every time you open a new social media site you can create completely new rules of the road and I think we’ll move beyond some of the things we have today.
One of the big challenges will be preserving those existing identities while creating a global culture.
We need a global culture to be able to talk about refugees and finance and tackle issues like global warming and science, and cure cancer. For these huge challenges we need to use the web to work as a whole planet, like one team.
What will make a massive difference is if we manage to design democratic, scientific and collaborative systems which allow us to function as a planet.
David Levy believes that by around 2050 humans and robots will be able to marry each other, and that such marriages will be legal in many countries. But that is only one person’s opinion, not a theory grounded in any existing law.
Why are most AI assistants female?
What is hard is imagining how we humans will fit into a robot-filled future.
Finally, there is no end to the ways that humans can productively work with one another if they are no longer driven by the conflicts of scarcity. Perhaps we will learn to love our robots.
An afterthought, 6 October 2016.
There is extraordinary potential for AI in the future.
But it’s not the future that I wish to address rather the present.
AI is already making problematic judgements that are producing significant social, cultural and economic impacts in people’s everyday lives. AI and decision-support systems are embedded in a wide array of social institutions, from influencing who is released from jail to shaping the news we see.
These results and impacts are hard to see. It is critical to find rigorous ways to make them visible and accountable. We need to know when automated decisions are materially affecting our lives and, if necessary, to contest them.
This won’t be achievable by the United Nations, or National Governments.
Will there be enough good jobs to keep the global economy growing?
This is not the same as acting as a foodstuff, where the existence of an earlier species acts as the food or fuel that allows those higher up the chain to exist and evolve.
Another analogy is selective breeding (unnatural selection), where human intervention is used to produce a desired characteristic.
“The first option [is] the evolution of some very clever tools, weapons, and body parts that become an integral part of the human species’ tree; the second option, a new branch on the tree of evolution; or the third option, an extension of the human branch.”
The greatest worry is the number of jobs that artificial intelligence systems are poised to take over.
Most of the best jobs that will emerge will require close collaboration between humans and computers.
As some professions become obsolete, more knowledge may not lead to higher pay either, because everyone will be bidding for the same work, which could drive wages down.
Some propose remedies such as a guaranteed income to ensure people do not fall through the cracks. Others argue that a negative income tax would be better because it incentivises work.