
(Ten-minute read) 

When it comes to the topic of artificial intelligence, it seems that everyone has an opinion, but are we asking the right questions?

We cannot avoid the development of AI, but we should all agree to guide it towards innovations that benefit all of humanity, with government regulation in place to ensure the technology does not go rogue.

Regulations will need to be flexible enough to accommodate different requirements, data types, and new uses of algorithms as they emerge.

In fact, considering some of the human political players in the world today, it’s probably not the AI revolution that we need to be worried about. 

Just as there are dangers with anything powerful, we cannot help but imagine, and should prepare ourselves for, a life decided by algorithms. Now that AI has designed a quantum physics experiment beyond anything a human has conceived, we should all perhaps start thinking about how to regulate its development.

Experts have expressed similar concerns about quantum computers, lasers and nuclear weapons: applications of such technologies can be both harmful and helpful.

                                      ————————

Is AI now something that represents a fundamental risk to the existence of civilization?

At some point, AI will become incredibly sophisticated, and this is where the worrying starts.

“Humans, who are limited by slow biological evolution, couldn’t compete, and would be superseded,” warned theoretical physicist Stephen Hawking. “The one who becomes the leader in this sphere will be the ruler of the world,” added Russian President Vladimir Putin.

“The first generation [of AI] is just going to do what you tell them; however, by the third generation, then they will have their own agenda,” Seth Shostak said in an interview with Futurism. In other words, humans will simply become immaterial to these hyper-intelligent machines.

These words warn all and sundry about the potential dangers of AI.

In 2010, Swiss neuroscientist Pascal Kaufmann founded Starmind, a company that plans to use self-learning algorithms to create a “superorganism” made of thousands of experts’ brains.

Not everyone believes the rise of AI will be detrimental to humans; some are convinced that technology has the potential to make our lives better.

However, it is beyond argument that AI is going to make massive changes in our lives.

What is not agreed on, however, is whether these changes will be good or bad.

                                 ————————

AI might be neutral, but in the foreseeable future it will become pervasive in most, if not all, aspects of decision-making, with the potential to transform the world.

On the surface we are only at the beginning of this change, but once AI becomes capable of complex decision-making that translates into real-world tasks, those decisions will be made by self-learning algorithms.

Predicting the future is a delicate game.

Decision-making processes based on algorithms and machine learning still have a long way to go, but decision-making is already no longer a human exclusive.

Unless we eliminate the biases in these technologies, biases that are also inherent in human decision-making, they won’t prove much more useful than the emotional responses that currently drive human decision-making.

We don’t yet know whether AI will usher in a golden age of human existence, or if it will all end in the destruction of everything humans cherish.

“People’s deepening dependence on machine-driven networks will erode their abilities to think for themselves.”

Today, humans primarily use AI for insights, but AI’s skills could surpass human abilities at every step in the process. AI is already improving at predictive analytics and is steadily moving along the analytics spectrum toward prescriptive outcomes that recommend specific options.
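
To make the predictive-to-prescriptive distinction concrete, here is a minimal, hypothetical sketch in Python; the customer fields, weights and thresholds are invented purely for illustration and are not taken from any real system:

    # Hypothetical sketch: predictive vs. prescriptive analytics.
    # All fields, weights and thresholds below are invented for illustration.

    def predict_churn_probability(customer: dict) -> float:
        """Predictive step: estimate the likelihood of an outcome."""
        score = 0.0
        if customer["months_inactive"] > 2:
            score += 0.4
        if customer["support_tickets"] > 3:
            score += 0.3
        if not customer["on_contract"]:
            score += 0.2
        return min(score, 1.0)

    def recommend_action(churn_probability: float) -> str:
        """Prescriptive step: turn the estimate into a specific recommended option."""
        if churn_probability >= 0.7:
            return "offer a retention discount"
        if churn_probability >= 0.4:
            return "schedule a courtesy call"
        return "no action needed"

    customer = {"months_inactive": 3, "support_tickets": 4, "on_contract": False}
    p = predict_churn_probability(customer)
    print(f"churn probability {p:.1f} -> {recommend_action(p)}")

The point of the sketch is the hand-off: the first function merely predicts, while the second turns that prediction into a recommended course of action, which is exactly the step AI is now encroaching on.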

AI is currently tasked with decision-assistance, not autonomous strategic decision-making.

Why? 

Because neither humans nor AI currently perform well in complex systems.

In such systems it is only retrospectively that one can establish cause and effect, and our current mental frameworks may not be versatile enough to navigate and manage constant, unpredictable change as AI evolves at speed.

However, this raises the question of choice: do we proactively decide our position in the value chain, or find ourselves assigned to a given spot?

If we do not fundamentally redesign our education and strategic frameworks to create more AAA leaders, we may see that choice made for us.

However, AI will certainly keep learning, moving even beyond the merely complicated, as algorithms will no longer rely only on a range of right answers:

They do not have to reach artificial general intelligence or become exceptional at handling complex systems; they just have to be better than us.

How algorithms are used by government, business and public bodies will ultimately determine the level of regulation required for this technology.

We can only extrapolate from what we already have, and yet it’s impossible to rule anything out. What is clear, though, is that thanks to AI, the world of the future could bear little resemblance to the one we inhabit today.

If you have a smartphone, you’re already using AI and it is fast becoming a major economic force.

There’s little question that AI has the potential to be revolutionary.

Automation could transform the way we work by replacing humans with machines and software.

 

                                      ————————

Artificial intelligence is not a thoroughly modern concept.

The concept is old, but we’ve only recently been able to produce the tech to back it up.

We are now embarked on something novel and uncertain, shaped by forces so vast and powerful as to be almost unfathomable. Yet fathom them we must.

A better understanding of our own brains would not only lead to AI sophisticated enough to rival human intelligence, but also to better brain-computer interfaces to enable a dialogue between the two.

Musk has founded a company called Neuralink intended to create a brain-computer interface. Linking the brain to a computer would, in theory, augment the brain’s processing power to keep pace with AI systems.

The pandemic-induced acceleration of technology will also prove a flagbearer for sustainable technologies, and will be used to fight climate change by reducing pollution levels and encouraging green AI research.

In the years ahead, we need to develop an ethical AI ecosystem free of human biases, which might alleviate the potential risks of AI in the future. If we do not find a way to regulate AI and the tech giants, we risk becoming slaves to their algorithms.

At the moment we have neither the law nor the language to comprehend the revolutions we are living through, let alone those to come.

The very technology we have unleashed is only going to speed up.

AI is already more intelligent than humanity, that is, better at solving some problems. It holds immense potential, especially in medicine, and immense peril: if computers learn so quickly that they pursue ends for themselves rather than for their human masters, the implication is dystopian and terrifying, namely, that AI could master humans.

We have all been hacked by computers that threaten to know us better than we know ourselves.

Technology gave us dominion over nature, which saints and poets had hitherto attributed to God. But AI is a new kind of tool which, if we are not careful, will turn all of us into tools.

The age of algorithms has enriched all our lives and made a few people astonishingly rich.

It is now inviting us along a new road to serfdom. It almost appears as if technology has taken over our time and priorities, and, as with climate change, it is already crystal clear that the apocalypse comes without warning.

For now, let us be glad about how far we have come with AI; a solution is on the cards, and let us hope that human beings take charge of technology just before technology takes charge of us.

All human comments are appreciated. All like clicks and abuse chucked in the bin.