Artificial Intelligence is an umbrella term for a collection of concepts that allow computer systems to work, vaguely, like a brain. However, the use of numbers to represent complex social reality is flawed.
AI might seem factual and precise when it isn't, because the results it produces depend on how it was designed and what data it uses.
At the moment, AI performs narrow tasks in our everyday world, such as facial recognition, natural language processing, or internet search, but the pace of its progress is exponential.
Regardless of its benefits, the impact it is having is hard to ignore, with more and more of the world's commerce becoming automated and trading moving online.
It’s transforming our world and will impact all facets of society, economy, living, working, healthcare, and technology in the not-so-distant future. It’s poised to have a major effect on sustainability, climate change, and environmental issues.
Anybody who assumes that the capabilities of intelligent software will cap out at some point is mistaken.
AI now has applications in industry, healthcare and medical diagnostics, transport, agriculture, education, and economics; machine and deep learning, data analytics, knowledge reasoning and discovery, natural language processing, computer vision, and robotics, as well as the social sciences, ethics, legal issues, and regulation, all have implementations in modern society.
And that is only a drop in the ocean.
It's in automated reasoning and inference, autonomous agents and multi-agent systems, artificial consciousness, case-based reasoning, knowledge representation, neuro-inspired computing, process improvement and planning, robotic process automation, and symbolic reasoning.
It appears in AI-augmented immersive reality (VR, AR, MR, and XR), AI-enabled customized manufacturing, AI-enabled data-driven techniques, AI in cyber-physical systems, AI in image analysis and video processing, AI in perception and multimedia sensing, AI-supported sensors, IoT and smart cities, AI in education, autonomous vehicles, business and legal applications of AI, cognitive automation and hyper-automation, digital twins, healthcare, medical diagnosis and rehabilitation, robotics and robot learning, human-robot and human-machine interaction, industrial AI and optimization, and symbiotic autonomous systems, among many other fields relating to AI.
Unless they have direct exposure to groups like DeepMind, most people have no idea where it is leading us, and the possibility of something seriously dangerous happening lies within a five-year timeframe, ten years at most.
The time to determine what dangers artificial intelligence poses is now.
Already the legal, political, societal, financial, and regulatory issues are so complex and wide-reaching that they must be examined today, not tomorrow.
AI’s role is now so widely accepted that most people are completely unaware of it.
If we don't, its usage will lead to separation and polarisation in the public sphere and to the manipulation of elections.
It has permeated the key sectors of most developed economies with profit-seeking algorithms that are promoted as if they were flawless but are built on human bias.
It is eroding our ability to make judgments and our sense of responsibility.
It cannot explain or justify how it reaches a decision or action in the first place.
It is creating impersonal bureaucracies and leaders.
It feeds on historical data.
Russia’s President Vladimir Putin said: “Whoever becomes the leader in this sphere will become the ruler of the world.”
Apart from autonomous weapons gaining "minds of their own," AI's power for social manipulation has already proven itself: Brexit, the 2016 U.S. presidential election, the Arab Spring, China's social credit system, and an invasion of privacy that is quickly turning into social oppression.
At present, we are not designing accident-free artificial intelligence, nor are we aligning current systems' behavior with our goals or core values.
Mitigating risk and achieving the global benefits of AI will present unique governance challenges, and will require global cooperation and representation.
More advanced and powerful AI systems will be developed and deployed in the coming years; these systems could be transformative, with negative as well as positive consequences.
As AI systems become more powerful and more general, they may surpass human performance in many domains. If this occurs, it could be a transition as transformative economically, socially, and politically as the Fourth Industrial Revolution, though perhaps better called a Revolution of Monopolies.
It's now possible to track and analyze an individual's every move online, and if a COVID-19 passport comes into existence, it will be issued by AI.
AI is capable of generating misinformation at massive scale, and soon, if not already, we won't be able to tell what is true or real online and what is fake, COVID passports included.
If we aren't clear about the goals we set for AI machines, a machine that isn't armed with the same goals we have could be dangerous.
It’s not hard to imagine an insurance company telling you you’re not insurable based on the number of times you were caught on camera talking on your phone.
While there are many uncertainties, we should dedicate serious effort to laying the foundations for future systems’ safety and better understanding the implications of such advances.
The international governance of artificial intelligence (AI) is at a crossroads: should it remain fragmented or be centralized?
Fragmentation will likely persist for now.
Society’s collective governance efforts may need to be put on a different footing.
An important challenge is to determine who is responsible for damage caused by an AI-operated device or service.
It is undesirable from a human rights perspective that there are powerful, publicly relevant algorithmic systems that lack a meaningful form of public scrutiny.
Without proper regulation and self-imposed limitations, critics argue, the situation will get even worse. AI is gobbling up everything it can learn about you and trying to monetize it.
There is a real risk that commercial and state use has a detrimental impact on human rights.
Our situation with technology is complicated, but the big picture is rather simple.
All AI should be required under law to have a transparency switch.
The human brain is a magnificent thing that is capable of enjoying the simple pleasures of being alive. Ironically, it’s also capable of creating machines that, for better or worse, become smarter and more and more lifelike every day.
AI will affect what it means to be human, to be productive, and to exercise free will. People will become even more dependent on networked AI in complex digital systems.
Every time we program our environments, we end up programming ourselves and our interactions. AI has massive short-term benefits, along with long-term negatives that can take decades to become recognizable. AI is a tool that will be used by humans for all sorts of purposes, including the pursuit of power. At stake is nothing less than what sort of society we want to live in and how we experience our humanity. We already face an unwarranted assumption when we are asked to imagine human-machine "collaboration."
We cannot expect our AI systems to be ethical on our behalf – they won’t be,
as they will be designed to kill efficiently, not thoughtfully.
For now, AI will continue to concentrate power and wealth in the hands of a few big monopolies based in the U.S. and China. Most people, and most parts of the world, will be worse off.
Unfortunately, we are not yet ripe for the unity of humanity.
We require further development until we develop a sincere desire for
humanity’s unity, as well as the realization that it is impossible to achieve
that goal on our own. If we just bumble into this world of AI unprepared, it
will probably be the biggest mistake in human history.
The COVID-19 virus will one day be all but forgotten, but the dystopian
systems that the New World Order is right now putting in place will not.
All human comments are appreciated. All like-clicks and abuse will be chucked in the bin.