(A thirty-minute read)
Do you have a right to believe what you want?
Yes, of course, but we now live in an algorithm-driven world that is blurring the boundaries and amplifying the social tensions festering under the surface.
The problem is that we are allowing the building of technologies that make consequential decisions about people’s lives.
AI is shaping people’s lives on a daily basis, but it’s an open question whether AI will become a trusted advisor or even a corrupting force.
It’s not COVID-19 that will kill us all; it’s profit-seeking algorithms.
However, my main concern in this post is whether AI techniques will develop into quantum algorithms that are totally out of control.
If artificial general intelligence is on the not-too-distant horizon, surely we should be ensuring that it is not owned by any one corporation and that, at its core, it respects our core values.
To achieve this, we surely cannot let wealth be concentrated in fewer and fewer hands, or leave AGI to the marketplace, or to any world organization that is not totally transparent and self-financing.
We therefore need, as a matter of grave urgency, a new world organization that vets all technology and algorithms. (See previous posts.)
Saying that the algorithms are safe as long as they don’t go to war with each other, causing something even more difficult to diagnose than a crash on the stock markets, is as naive as saying “It’s going to be great.”
Algorithms are increasingly in charge of a world that is precious to us all.
Basically, we’re entering the era of machines controlling everything.
If we want to create new different societies with human dignity for all we need to do something about it.
The difficulty of predicting the future is not just a cliché; it’s a basic fact of our existence. Part of the Singularity hypothesis is that this difficulty is just going to get worse and worse. Yes, creating AGI (Artificial General Intelligence) is a big and difficult goal, but according to known science it is almost surely an achievable one.
There are sound, though not conclusive, arguments that it may well be achievable within our lifetimes.
If we think in months, we focus on immediate problems: present-day wars, the Covid crisis, the Donald Trumps, the economy. If we think in decades, climate change, growing inequality, and the loss of jobs to automation are the pressing dangers. But if we look at life in total, science is converging on data processing and on AI that develops itself through algorithms.
When intelligence is approached in an incremental manner, with strict reliance on interfacing to the real world through perception and action, reliance on representation disappears.
It won’t be long before we are unable to distinguish the real world from the virtual world.
Since there is only one real world and there can be infinitely many virtual worlds, the probability that you inhabit the sole real world approaches zero.
So it won’t matter whether computers will be conscious or not.
It is starting to feel like it’s every man for himself. It is possible that a global crisis is upon us right now without us even knowing, and that the virus may not be the biggest threat but rather the crisis that follows, when the everyday goods that keep us alive are gone: food, fresh water, medicine, clothes, fuel. Intelligence is decoupling from consciousness, and sooner rather than later decisions that are not possible to reverse will be consigned to Google, Facebook, Twitter, smartphones, and the like.
You might think the above is stupid, but it won’t be long before we are witnessing the most unequal societies in history.
——————————
We humans will soon be living with robots that process data without any subjective experience, consciousness, or moral sense.
As we watch robots, autonomous vehicles, artificial intelligence machines, and the like slowly (and sometimes rapidly) permeate our world, it’s not hard to imagine them going from permeating to taking over.
Algorithms are increasingly determining our collective future.
It will only matter what they think about you.
We are already halfway towards a world where algorithms run everything.
This is why many of the issues raised in this post will require close monitoring, to ensure that the oversight of machine learning-driven algorithms continues to strike an appropriate and safe balance between recognizing the benefits (for healthcare and other public services, for example, and for innovation in the private sector) and the risks (for privacy and consent, data security and any unacceptable impacts on individuals).
—————————— WHAT CAN GOVERNMENTS DO?
Please regulate AI; this is too dangerous.
Given the international nature of digital innovation, governments should establish audits of algorithms, introduce certification of algorithms, and charge ethics boards with oversight of algorithmic decisions.
Why?
Algorithms are bringing big changes in their wake: from better medical diagnoses to driverless cars, and within central government, where there are opportunities to make public services more effective and achieve long-term cost savings.
However, governments should produce, publish, and maintain a list of where algorithms with significant impacts are being used within central government, along with projects underway or planned for public-service algorithms, to aid not just private-sector involvement but also transparency.
Governments should not just simply accept what the developers of algorithms offer in return for data access.
To this end, Governments should be at the forefront of the creation of a “statutory building code”, which describes mandatory safety and quality requirements for digital platforms.
Social networks should be required by law to release details of their algorithms and core functions to trusted researchers, in order for the technology to be vetted.
This law should enable the enforcement of:
forcing social networks to disclose in the news feed why content has been recommended to a user.
limiting the use of micro-targeted advertising messages.
making it illegal to exclude people from content on the basis of race or religion, such as hiding a spare room advert from people of color.
banning the use of so-called dark patterns – user interfaces designed to confuse or frustrate the user, such as making it hard to delete your account.
labeling the accounts of state-controlled news organizations.
limiting how many times messages can be forwarded to large groups, as Facebook does on WhatsApp.
If we took the premise that people should have a lawful right to be manipulated and deceived, we wouldn’t have rules on fraud or undue influence.
———————————– Today’s algorithms and where we are.
As data accumulates, even more so now with Covid-19 track and trace and with working from home, we have more centralized data repositories and large centralized AI models that work off centralized or decentralized data.
How does this concentration of power affect the balance with individual liberty?
Our democratic institutions and public discourse are underpinned by an assumption that we can at least agree on things that are true.
Facebook, Twitter, and YouTube create algorithms that promote and highlight information. That is an active engineering decision. Regardless of whether Facebook and Twitter profit from hate or not, it is a harmful by-product of the current design, and there are social harms that come from this business model.
Platforms that monetize user engagement have a duty to their users to make at least a minimum effort to prevent clearly identified harms.
We have to focus on the responsibility of platforms.
Because people are being manipulated with objectively false information, there has to be some kind of accountability for platforms.
Currently, these platforms are not neutral environments: algorithms make decisions about what people see or do not see, with no common understanding that certain things are manifestly true.
In most Western democracies, you do have the freedom of speech.
But freedom of speech is not an entitlement to reach. You are free to say what you want, within the confines of hate speech, libel law, and so on. But you are not entitled to have your voice artificially amplified by technology.
The way Facebook and other platforms approach this problem is:
We’ll wait and see and figure out a problem when it emerges. Every other industry has to have minimum safety standards and consider the risks that could be posed to people, through risk mitigation and prevention.
There are right now some objectively disprovable things spreading quite rapidly on Facebook. For example, that Covid does not exist and that the vaccine is actually intended to control people’s minds. These are all things that are manifestly untrue, and you can prove that.
However, algorithms are much more prevalent than that: the Apple Face ID algorithm decides whether you are who you say you are.
Algorithms limit people’s worldview, which can allow large population groups to be easily controlled. Social media algorithms tuned to your desires and wants ensure that everything on your feed will be of interest to you, without you knowing what data these algorithms use or what they aim for.
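The mechanism described above can be made concrete with a deliberately simplified sketch: a feed ranker that scores posts purely by how well their topics match a user’s inferred interests, so content outside those interests never surfaces. All names, fields, and weights here are hypothetical, not any platform’s actual system.

```python
# Toy engagement-maximizing feed ranker. Each post is scored only by how
# closely its topics match the user's inferred interest weights, so posts
# that challenge those interests sink to the bottom of the feed.

def rank_feed(posts, interest_weights):
    """Sort posts by summed interest weight of their topics, highest first."""
    def score(post):
        return sum(interest_weights.get(topic, 0.0) for topic in post["topics"])
    return sorted(posts, key=score, reverse=True)

posts = [
    {"id": 1, "topics": ["cats", "humour"]},
    {"id": 2, "topics": ["politics"]},
    {"id": 3, "topics": ["cats", "politics"]},
]
# Interest profile inferred from past behaviour; never shown to the user.
user_interests = {"cats": 0.9, "humour": 0.5}

print([p["id"] for p in rank_feed(posts, user_interests)])  # [1, 3, 2]
```

Even this ten-line toy exhibits the filter-bubble effect the post describes: the purely political post is always ranked last, without the user ever being told why.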
Conclusion.
We are already living with large AI platforms that are monopolizing the fruits of globalization with billions being left behind.
And we accept this as if it were natural.
It will be too late when we are asking ourselves: what is more valuable, intelligence or consciousness? Then ask yourselves what happens to society, politics, and daily life when non-conscious but highly intelligent algorithms know us better than we know ourselves.
Whatever view one takes on artificial intelligence ethics, you can rest assured that we will see far more fanatics blowing themselves up and far more wars over finite resources, with vast movements of people.
We have to remember that self-regulation is not the same as having no regulation.
Of course, the loudest arguments for and against something often have one thing in common. They are often made by people with no desire to compromise or understand the other side.
I think self-regulation, in and of itself, means people in power deciding how they will act.
We have to accept from history that we cannot possibly predict all the adverse consequences of a technology, because it is not just the technology itself that has adverse consequences but the context in which it is applied.
It is impossible to regulate AI while thinking about all of its potential adverse consequences. The seeds of harm may be sown at the design stage, the development stage, or the deployment stage.
We don’t have to wait for the technology to become an application before we think of regulating it effectively.
There is a need to strengthen specific provisions to safeguard individual liberty and community rights when it comes to inferred data. There is a need to balance the trade-offs between the utility of AI and protecting privacy and data.
Self-regulation within the AI industry may not be enough, since it may not solve the massive differential between the people developing the technology and the people affected by it. Machine learning is the next step they are aiming for, with the algorithms deciding the input and output completely.
Inherent political and economic power hierarchies between the state and citizens and within the private sector need to be addressed because the promise of globalization is a lie when it comes to AI and prosperity for all.
Algorithms are being used in an ever-growing number of areas, in ever-increasing ways; however, like humans, they can produce biased results, even if unintentionally. We are all becoming redundant, with biotechnology becoming available only to the richest of us.
I don’t think that AI per se can be regulated because today it is AI, tomorrow it will be Augmented Reality or Virtual Reality, and the day after tomorrow it may be something that we can’t even think of right now.
So it is important to have checks and balances in the use and access to AI that go beyond just technological means.
Why?
Because they are also moving into areas where the benefits to those applying them may not be matched by the benefits to those subject to their ‘decisions’—in some aspects of the criminal justice system, for example.
However, technology companies are not all the same, and nor is technology the only part of the media ecosystem.
It is essential to ensure a whole society response to tackle these important issues.
You could require algorithms to have a trigger to shut off, to stop misinformation or to stop terrorist groups using social media as a recruiting platform.
BUT who defines what counts as misinformation?
It is no longer possible for humans to fact-check at scale, so the only course of action is a world Independent Universal Algorithm designed to establish fairness.
While “fairness” is much vaguer than “life or death,” I believe it can, and should, be built into all AI algorithms.
Therefore, every social network should display a correction to every single person who was exposed to misinformation whenever independent fact-checkers identify a story as false.
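The correction rule proposed above can be sketched in a few lines: keep a log of which users were exposed to which stories, and when fact-checkers flag a story, queue a correction for every exposed user. The data structures and names below are hypothetical, purely to illustrate the mechanism.

```python
# Sketch of "correct everyone who saw it": an exposure log maps each story
# to the set of users whose feeds included it. When fact-checkers flag a
# story as false, every exposed user gets that story queued for correction.

exposure_log = {
    "story-42": {"alice", "bob"},
    "story-77": {"bob", "carol"},
}

def corrections_for(flagged_story_ids):
    """Return a mapping of user -> set of flagged stories they were shown."""
    queue = {}
    for story_id in flagged_story_ids:
        for user in exposure_log.get(story_id, set()):
            queue.setdefault(user, set()).add(story_id)
    return queue

print(corrections_for(["story-42"]))
```

The hard part in practice is not this bookkeeping but keeping an exposure log at all, which is precisely the kind of transparency obligation the post argues platforms should be placed under.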
(Google’s search algorithm is more closely guarded than classified secret documents, and Google now owns some of the largest data sets in the world, stored in its cloud.)
——————–
We now have algorithms fighting each other for supremacy in the market, preying on other algorithms to plunder the world’s exchanges for profit, to such an extent that they are now effectively in control of capitalism. When someone says “algorithmic trading,” it covers a vast subject, not just buying and selling large volumes of shares automatically at very high speed by unsupervised learning algorithms.
There are four major types of trading algorithms.
They are:
Execution algorithms
Behavior exploitative algorithms
Scalping algorithms
Predictive algorithms
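To make the first category concrete, here is a minimal sketch of a classic execution algorithm, TWAP (time-weighted average price), which slices a large parent order into near-equal child orders spread over time to reduce market impact. The function name and parameters are hypothetical; real execution engines are vastly more sophisticated.

```python
# Minimal TWAP (time-weighted average price) execution sketch.
# An execution algorithm does not decide *whether* to trade; it decides
# how to slice a large order so it moves the market as little as possible.

def twap_slices(total_shares: int, num_slices: int) -> list[int]:
    """Split a parent order into near-equal child orders, one per time slice."""
    base, remainder = divmod(total_shares, num_slices)
    # Spread any remainder over the first few slices so the totals match.
    return [base + (1 if i < remainder else 0) for i in range(num_slices)]

# Example: work a 10,000-share order over 8 slices (say, one every 5 minutes).
print(twap_slices(10_000, 8))  # eight equal slices of 1250 shares
```

The other three categories differ mainly in what they react to: behaviour-exploitative algorithms hunt for the predictable footprints that slicers like this one leave behind, which is one reason these systems end up, as the post puts it, fighting each other.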
Transparency must be a key underpinning for algorithm accountability.
Why?
Because it will make it easier for the decisions produced by algorithms to be explained.
(The ‘right to explanation’ is a key part of achieving accountability and tackling the ethical implications around AI.)
We are only on the outskirts of mind science, which presently knows little about how the mind works, never mind consciousness. We have no idea how a collection of electrical brain signals creates subjective experience, yet we are conscious of our dreams.
99% of our bodily activities take place without any conscious feelings.
As neuroscientists acquire more and more data about the workings of the brain, the stated purpose of the cognitive sciences is to combine data from numerous disciplines so as better to understand such diverse phenomena as perception, language, reasoning, and consciousness.
Even so, the subjective essence of “what it means” to be conscious remains an issue that is very difficult to address scientifically.
To really understand what is meant by the cognitive neurosciences, one must recall that until the late 1960s, the various fields of brain research were still tightly compartmentalized. Brain scientists specialized in fields such as neuroanatomy, neurohistology, neuroembryology, or neurochemistry.
Nobody was yet working with the full range of investigative methods available, but eventually the very complexity of the subject at hand made that a necessity.
The first problem that arises when examining consciousness is that a conscious experience is truly accessible only to the person who is experiencing it. Despite the vast knowledge we have gained in the field of mathematics and computer science, none of the data processing systems we have created needs subjective experiences in order to function.
None feel pain, pleasure, anger, or love.
These emotions are vanishing into algorithms that have, or will have, an effect not only on how we see the world but also on how we live in it.
If not addressed now, all moral and political values may disappear, turning consciousness into a kind of mental pollution. After all, computers have no minds.
Take images on Instagram: they can affect mental health and body image.
You might say: so what, that has always been the case. And you would be right, up to now; but because of Covid-19, governments have given themselves wide-ranging powers to collect and analyze data without adequate safeguards.
If we are not careful, these algorithms will have no notion of self, existing only in the present, unaware of past or future, and therefore unable to consciously plan for future eventualities. Unconscious algorithms in our brains rather than conscious images in a mind.
If you are using a smartphone, you are enjoying AI, knowingly or unknowingly. It cannot be modified unknowingly, nor can it get disfigured or break down in a hostile environment.
We should not be regulating technology in general but Artificial Intelligence in particular.
Its behavior is so complicated that we need to regulate it at the data level.
In lots of regulated domains there is a notion of post-market surveillance, where the developer bears responsibility for how the technology they developed is going to be used.
As William Shakespeare wrote in As You Like It:
“All the world’s a stage, and all the men and women merely players; they have their exits and their entrances.”
Sadly, with AI and machine-learning algorithms, no one knows, or for that matter will ever know, when they enter or exit.
Like AI, learning is an ongoing process that takes place throughout life. It is the process of moving information from out there to here. Unfortunately, the brain has its own set of rules by which it learns best, and unlike with AI, the information doesn’t always stick. Together, we have a lot to learn.
Humanity is in contact with humanity.
All human comments appreciated. All like clicks and abuse chucked in the cloud bin.
Otherwise we face a social media oligarchy where the richest participants are allowed to spread dangerous misinformation.