So should we be more wary of their power?
Their lack of accountability and complete opacity is frightening.
Some time ago I advocated a legal requirement that all software programmes/algorithms be subject to regulatory oversight that would assess their impact before they go live.
In other words, a fully transparent virtual world deposit bank where the original programmes are held and vetted to comply with the core principles of humanity.
That, by itself, is now a tall order that requires impartial experts backtracking through the technology development process to find the models and formulae that originated the algorithms.
Who is prepared to do this? Who has the time, the budget and resources to investigate and recommend useful courses of action?
This is a 21st-century job description – and market niche – in search of real people and companies outside political manipulation. To make algorithms more transparent, products and product-information circulars might include an outline of algorithmic assumptions, akin to the nutritional sidebar now found on many packaged foods, informing users of how algorithms drive intelligence in a given product and outlining the implications inherent in those assumptions.
At the moment they perform seemingly miraculous tasks humans cannot, and they will continue to greatly augment human intelligence and assist in accomplishing great things. Our accelerating code-dependency will also continue to drive the spread of algorithms; as with all great technological revolutions, however, this trend has a dark side.
There is no argument that the efficiencies of algorithms will lead to more creativity and self-expression.
However, today’s algorithms are primarily written to optimize efficiency and profitability, without much thought for the possible societal impacts of the data modelling and analysis.
Humans are considered to be an “input” to the process and they are not seen as real, thinking, feeling, changing beings.
This is creating a flawed, logic-driven society; as the process evolves – that is, as algorithms begin to write the algorithms – humans may get left out of the loop, letting “the robots decide”.
Algorithms will capitalize on convenience and profit, thereby discriminating against certain populations while also eroding the experience of everyone else. The goal of today’s algorithms is to fit some of our preferences, but not necessarily all of them: they essentially present a caricature of our tastes and preferences.
The fear is that, unless we tune our algorithms for self-actualization, it will be simply too convenient for people to follow the advice of an algorithm (or, too difficult to go beyond such advice), turning these algorithms into self-fulfilling prophecies, and users into zombies who exclusively consume easy-to-consume items.
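A minimal, hypothetical sketch of this feedback loop (the recommender rule, the 90% follow rate and the category counts are all illustrative assumptions, not anything from a real platform): a recommender that always serves a user’s most-clicked category quickly turns varied tastes into a single rut.

```python
import random

def simulate_feedback_loop(steps=200, categories=5, seed=0):
    """Toy model: a recommender that always suggests the user's
    most-clicked category so far. Illustrates how convenience-driven
    recommendations can become a self-fulfilling prophecy."""
    rng = random.Random(seed)
    clicks = [1] * categories            # start with uniform tastes
    history = []
    for _ in range(steps):
        # Recommender: serve the currently most-clicked category.
        recommended = max(range(categories), key=lambda c: clicks[c])
        # Assume the user follows the recommendation 90% of the time.
        if rng.random() < 0.9:
            chosen = recommended
        else:
            chosen = rng.randrange(categories)
        clicks[chosen] += 1
        history.append(chosen)
    return clicks, history

clicks, history = simulate_feedback_loop()
# Early consumption is varied; late consumption is dominated by one category.
```

Under these toy assumptions, one category ends up absorbing the overwhelming majority of late clicks: the algorithm has not learned the user’s tastes so much as manufactured them.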
It is not possible to capture every data element that represents the vastness of a person and that person’s needs, wants, hopes, desires. When you remove the humanity from a system where people are included, they become victims.
Dehumanization by algorithms has now spread to our police forces, our legal systems, our health care and social services, and our politics – Brexit, Donald Trump.
So let’s ask a few questions.
Who is collecting what data points?
Do the human beings the data points reflect even know, or did they just agree to the terms of service because they had no real choice?
Who is making money from the data?
How is anyone to know how his/her data is being massaged and for what purposes to justify what ends?
Company platforms like Google, Facebook and Twitter seek to maximize profit, not societal good. Worse, they repackage profit-seeking as a societal good.
There is no transparency, and oversight is a farce.
What we already see today is that, in practice, schemes like ‘differential pricing’ do not help the consumer; they help the company doing the selling.
With all of it hidden from view, individual human beings will be herded around like cattle, with predictably destructive results for the rule of law, social justice and economics.
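Why differential pricing favours the seller can be shown with a deliberately crude, hypothetical pricing rule (the markup formula and the ‘willingness’ scores are invented for this sketch, not taken from any real system):

```python
def personalized_price(base_price, inferred_willingness):
    """Toy differential-pricing rule: quote each shopper a markup
    proportional to how price-insensitive the model believes they are.
    inferred_willingness is a score in [0, 1] mined from their data."""
    return round(base_price * (1 + 0.5 * inferred_willingness), 2)

# Two shoppers, same item, different prices:
casual_price  = personalized_price(100.0, 0.1)   # modest markup
captive_price = personalized_price(100.0, 0.9)   # steep markup
```

Neither shopper ever sees the other’s price, so the hidden markup – the whole point of the exercise – flows entirely to the seller.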
At the moment there is only an incentive to further obfuscate the presence and operations of algorithmic shaping of communications processes. The fact that the internet can, through algorithms, be used to almost read our minds means that those who have access to the algorithms and their databases have a vast opportunity to manipulate large population groups.
Our economies, increasingly dominated by a tiny, very privileged and insulated portion of the population, largely reproduce inequality for its benefit. Criticism will be belittled and dismissed because of the veneer of digital ‘logic’ over the process.
I will always remain convinced the data will be used to enrich and/or protect others and not the individual. It’s the basic nature of the economic system in which we live.
Algorithms have the capability to shape individuals’ decisions without them even knowing it, giving those who have control of the algorithms an unfair position of power.
The overall impact of ubiquitous algorithms is presently incalculable because the presence of algorithms in everyday processes and transactions is now so great, and is mostly hidden from public view. Our algorithms are now redefining what we think, how we think and what we know.
We need to ask their creators to think about their thinking – to look out for pitfalls and inherent biases before those are baked in and harder to remove.
Should we be allowing ourselves to become so reliant on them – and who, if anyone, is policing their use?
Will the net overall effect of algorithms be positive for individuals and society or negative for individuals and society?
If every algorithm suddenly stopped working, it would be the end of the world as we know it.
We have already turned our world over to machine learning and algorithms.
The question now is, how to better understand and manage what we have done?
What are the implications of allowing commercial interests and governments to use algorithms to analyse our habits?
The main negative changes come down to a simple but now quite difficult question:
How can we see, and fully understand the implications of, the algorithms programmed into everyday actions and decisions?
The rub is this: Whose intelligence is it, anyway?
Algorithms are aimed at optimizing everything. Our lives will be increasingly affected by their inherent conclusions and the narratives they spawn.
By expanding collection and analysis of data and the resulting application of this information, a layer of intelligence or thinking manipulation is added to processes and objects that previously did not have that layer.
The internet runs on algorithms and all online searching is accomplished through them.
Email knows where to go thanks to algorithms. Smartphone apps are nothing but algorithms. Computer and video games are algorithmic storytelling. Online dating and book-recommendation and travel websites would not function without algorithms. GPS mapping systems get people from point A to point B via algorithms.
Artificial intelligence (AI) is nought but algorithms.
The material people see on social media is brought to them by algorithms.
In fact, everything people see and do on the web is a product of algorithms.
Every time someone sorts a column in a spreadsheet, algorithms are at play, and most financial transactions today are accomplished by algorithms. Algorithms help gadgets respond to voice commands, recognize faces, sort photos and build and drive cars. Hacking, cyberattacks and cryptographic code-breaking exploit algorithms.
In the future algorithms will write many if not most algorithms.
The rise of increasingly complex algorithms calls for critical thought about how best to prevent, deter and compensate for the harms that they cause. Algorithmic regulation will require uniformity across governments, expert judgment, political independence and pre-market review to prevent – without stifling innovation – the introduction of unacceptably dangerous algorithms into the market.
The usage of algorithms and analytics in society is exploding: from machine-learning recommender systems in commerce, to credit-scoring methods outside standard regulatory practice, to self-driving cars.
We now spend so much of our time online that we are creating huge data-mining opportunities for algorithms programmed to look for “indirect, non-obvious” correlations in data; over time, if not already, these will create or exacerbate societal divides.
Algorithms are increasingly determining our collective futures: bank approvals, store cards, job matches and more. Google’s search algorithm is now a more closely guarded commercial secret than the recipe for Coca-Cola.
The problem is how the rules are set: it’s impossible to do this perfectly.
The questions being raised about algorithms at the moment are not about algorithms per se, but about the way society is structured with regard to data use and data privacy.
Humans see causation when an algorithm has merely identified a correlation in vast swaths of data.
This transformation presents an entirely new menace: penalties based on propensities – the possibility of using big-data predictions about people to judge and punish them even before they have acted. Doing this negates ideas of fairness, justice and free will.
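The correlation trap can be shown with a tiny, self-contained example (the ‘ice cream’ and ‘drownings’ series are the classic textbook stand-ins, generated here from a shared time trend rather than real data): two series that merely share a trend correlate almost perfectly, yet neither causes the other.

```python
def pearson(xs, ys):
    """Pearson correlation coefficient, from the textbook formula."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Both series are driven by the same hidden variable (time of year),
# so they correlate almost perfectly without either causing the other.
months = range(12)
ice_cream_sales = [100 + 10 * t for t in months]
drownings = [5 + 0.5 * t for t in months]

r = pearson(ice_cream_sales, drownings)   # r is (numerically) 1.0
```

An algorithm that only sees `r` cannot distinguish “ice cream causes drowning” from “summer causes both” – that judgment has to come from outside the data.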
Parole boards in more than half of all US states use predictions founded on data analysis as a factor in deciding whether to release somebody from prison or to keep him incarcerated.
We risk falling victim to a dictatorship of data, whereby we fetishise the information, the output of our analyses, and end up misusing it.
They can and will become an instrument of the powerful, who may turn them into a source of repression, either by simply frustrating customers and employees or, worse, by harming citizens.
The idea that the world’s financial markets – and, hence, the wellbeing of our pensions, shareholdings, savings etc – are now largely determined by algorithmic vagaries is unsettling enough for some. In currency trading, an algorithm lasts for about two weeks before it is stopped because it is surpassed by a new one.
We’re already halfway towards a world where algorithms run nearly everything.
As their power intensifies, wealth will concentrate around those who control them.
Advances in quantum computing and the rapid evolution of AI and AI agents embedded in systems and devices in the Internet of Things will lead to hyper-stalking, influencing and shaping of voters, and hyper-personalized ads, and will create new ways to misrepresent reality and perpetuate falsehoods.
Climate change is becoming visible while the profits of the capitalist world go underground, thanks to algorithms.
All human comments appreciated. All like clicks and abuse chucked in the bin.