(Ten-minute read)
I am sure that, unless you have been living on another planet, it is becoming more and more obvious that the way you live your life is being manipulated and influenced by technologies.
So it's worth pausing to ask why the use of AI for algorithm-informed decision making is desirable, and hence worth our collective effort to think through and get right.
A huge amount of our lives – from what appears in our social media feeds to what route our sat-nav tells us to take – is influenced by algorithms. Email knows where to go thanks to algorithms. Smartphone apps are nothing but algorithms. Computer and video games are algorithmic storytelling. Online dating and book-recommendation and travel websites would not function without algorithms.
Artificial intelligence (AI) is naught but algorithms.
The material people see on social media is brought to them by algorithms. In fact, everything people see and do on the web is a product of algorithms. Algorithms are also at work in finance, with most financial transactions today executed by algorithms. Algorithms help gadgets respond to voice commands, recognize faces, sort photos and build and drive cars. Hacking, cyberattacks and cryptographic code-breaking exploit algorithms.
Algorithms are aimed at optimizing everything.
Self-learning and self-programming algorithms are now emerging, so it is possible that in the future algorithms will write many if not most algorithms.
Yes, they can save lives, make things easier and conquer chaos, but when it comes to both the commercial and the social world, there are many good reasons to question the use of algorithms.
Why?
They can put too much control in the hands of corporations and governments, perpetuate bias, create filter bubbles, and cut down choice, creativity and serendipity, all while exploiting not just you but the very resources of our planet for short-term profit. They can erode what is left of democratic societies, turn warfare into face recognition, deepen inequality, invade our private lives and determine our futures without any legal restrictions, transparency or recourse.
The rapid evolution of AI and AI agents embedded in systems and devices in the Internet of Things will lead to hyper-stalking, influencing and shaping of voters, and hyper-personalized ads, and will create new ways to misrepresent reality and perpetuate falsehoods.
———
As they are self-learning, the problem is who or what is creating them, who owns these algorithms, and whether there should be any controls on their usage.
Let's ask some questions about them that need to be asked now, not later:
1) The outcomes the algorithm is intended to make possible (and whether they are ethical).
2) The algorithm’s function.
3) The algorithm’s limitations and biases.
4) The actions that will be taken to mitigate the algorithm’s limitations and biases.
5) The layer of accountability and transparency that will be put in place around it.
There is no debate about the need for algorithms in scientific research – such as discovering new drugs to tackle new or old diseases and pandemics, space travel, and so on.
Outside of these needs, the promise of AI is that we could have evidence-based decision making in the field:
Helping frontline workers make more informed decisions in the moments when it matters most, based on an intelligent analysis of what is known to work. If used thoughtfully and with care, algorithms could provide evidence-based policymaking, but they will fail to achieve much if poor decisions are taken at the front.
However, it’s all well and good for politicians and policymakers to use evidence at a macro level when designing a policy but the real effectiveness of each public sector organisation is now the sum total of thousands of little decisions made by algorithms each and every day.
First (to repeat a point made above), with new technologies we may need to set a higher bar initially in order to build confidence and test the real risks and benefits before we adopt a more relaxed approach. Put simply, we need time to see in what ways using AI is, in fact, the same or different to traditional decision making processes.
The second concerns accountability. For reasons that may not be entirely rational, we tend to prefer a human-made decision. The process that a person follows in their head may be flawed and biased, but we feel we have a point of accountability and recourse which does not exist (at least not automatically) with a machine.
The third is that some forms of algorithmic decision making could end up being truly game-changing in terms of the complexity of the decision making process. Just as some financial analysts eventually failed to understand the CDOs they had collectively created before 2008, it might be too hard to trace back how a given decision was reached when unlimited amounts of data contribute to its output.
The fourth is the potential scale at which decisions could be deployed. One of the chief benefits of technology is its ability to roll out solutions at massive scale. By the same trait it can also cause damage at scale.
In all of this it's important to remember that progress isn't guaranteed. Transformational progress on a global scale normally takes time, generations even, to achieve; yet we pulled it off in less than a decade, and then spent another decade pushing the limits of what was possible with a computer and an Internet connection. Unfortunately, we are now beginning to run into those limits pretty quickly.
No one wants to accept that the incredible technological ride we’ve enjoyed for the past half-century is coming to an end, but unless algorithms are found that can provide a shortcut around this rate of growth, we have to look beyond the classical computer if we are to maintain our current pace of technological progress.
A silicon computer chip is a physical material, so it is governed by the laws of physics, chemistry, and engineering.
Having been miniaturized to a nanoscopic scale on an integrated circuit, transistors simply can't keep getting smaller every two years. With billions of electronic components etched into a solid square wafer of silicon no more than two inches wide, you could count the number of atoms that make up an individual transistor.
So the era of classical computing is coming to an end, and scientists anticipating the arrival of quantum computing are already designing ambitious quantum algorithms to tackle maths' greatest challenges: an algorithm for everything.
———–
Algorithms may be deployed without any human oversight, leading to actions that could cause harm and that lack any accountability.
The issues the public sector deals with tend to be messy and complicated, requiring ethical judgements as well as quantitative assessments. Those decisions in turn can have significant impacts on individuals’ lives. We should therefore primarily be aiming for intelligent use of algorithm-informed decision making by humans.
If we are to have a ‘human in the loop’, it’s not ok for the public sector to become littered with algorithmic black boxes whose operations are essentially unknowable to those expected to use them.
As with all ‘smart’ new technologies, we need to ensure algorithmic decision making tools are not deployed in dumb processes, or create any expectation that we diminish the professionalism with which they are used.
Used thoughtfully, algorithms could help remove or reduce the impact of the flaws and biases in human decision making.
So where are we?
At the moment, modern algorithms are some of the most important problem-solving tools we have, powering the world's most widely used systems.
Here are a few. They form the foundation on which data structures and more advanced algorithms are built.
Google’s PageRank algorithm is a great place to start, since it helped turn Google into the internet giant it is today.
The PageRank algorithm so thoroughly established Google’s dominance as the only search engine that mattered that the word Google officially became a verb less than eight years after the company was founded. Even though PageRank is now only one of about 200 measures Google uses to rank a web page for a given query, this algorithm is still an essential driving force behind its search engine.
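To make the idea concrete, here is a minimal sketch of the core PageRank calculation as a simple power iteration over a made-up four-page web. The 0.85 damping factor comes from the original paper; the link graph is purely illustrative and has nothing to do with the roughly 200 signals Google actually combines today.

```python
# Minimal PageRank sketch: power iteration over a tiny, invented link graph.
# Assumes every page links to at least one other page (no dangling nodes).

def pagerank(links, damping=0.85, iterations=50):
    pages = list(links)
    rank = {p: 1.0 / len(pages) for p in pages}        # start everyone with equal rank
    for _ in range(iterations):
        new_rank = {}
        for p in pages:
            # A page's new rank is a share of the rank of every page linking to it.
            incoming = sum(rank[q] / len(links[q]) for q in pages if p in links[q])
            new_rank[p] = (1 - damping) / len(pages) + damping * incoming
        rank = new_rank
    return rank

# Hypothetical four-page web: each page lists the pages it links to.
links = {"A": ["B", "C"], "B": ["C"], "C": ["A"], "D": ["C"]}
print(pagerank(links))   # "C" comes out on top: most of the links point to it
```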
The Key Exchange Encryption algorithm does the seemingly impossible: it lets two parties who have never met agree on a shared secret in full public view.
Backpropagation through a neural network is one of the most important algorithms invented in the last 50 years.
Neural networks operate by feeding input data into a network of nodes, each connected to the next layer of nodes, with different weights on those connections that determine whether the information a node receives is passed on to the next layer. When the information has passed through the various so-called “hidden” layers of the network and reaches the output layer, the outputs are usually the different choices about what the neural network believes the input was. If it was fed an image of a dog, it might have the options dog, cat, mouse and human infant. It assigns a probability to each of these, and the highest probability is chosen as the answer.
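Here is a rough sketch of that forward pass in code. The tiny network, its random weights and the four labels are all invented for illustration; in a real network the weights would have been learned by backpropagation rather than drawn at random.

```python
import numpy as np

# Toy forward pass through a one-hidden-layer network (illustrative only).
rng = np.random.default_rng(0)
labels = ["dog", "cat", "mouse", "human infant"]

x = rng.random(64)                # stand-in for a tiny flattened image
W1 = rng.normal(size=(64, 16))    # weights: input layer -> hidden layer
W2 = rng.normal(size=(16, 4))     # weights: hidden layer -> output layer

hidden = np.maximum(0, x @ W1)    # each hidden node passes its signal on only if positive (ReLU)
scores = hidden @ W2              # one raw score per possible label

probs = np.exp(scores - scores.max())
probs /= probs.sum()              # softmax turns the scores into probabilities

print(dict(zip(labels, probs.round(3))))
print("prediction:", labels[int(np.argmax(probs))])   # the highest probability wins
```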
Without backpropagation, deep-learning neural networks wouldn’t work, and without these neural networks, we wouldn’t have the rapid advances in artificial intelligence that we’ve seen in the last decade.
The two routing algorithms most widely used by the Internet, the Distance-Vector Routing Protocol Algorithm (DVRPA) and the Link-State Routing Protocol Algorithm (LSRPA), are among the most essential algorithms we use every day, as they efficiently route data traffic between the billions of connected networks that make up the Internet.
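As a rough illustration of the distance-vector idea behind DVRPA-style protocols, the sketch below has four invented routers repeatedly improving their tables from their neighbours' link costs until nothing changes; real protocols do this through periodic advertisements between separate machines rather than a single loop on one.

```python
# Toy distance-vector routing (the Bellman-Ford idea): each router only knows
# the cost of its direct links and keeps borrowing better routes from neighbours.
INF = float("inf")
routers = ["A", "B", "C", "D"]
cost = {                                  # symmetric link costs, purely illustrative
    ("A", "B"): 1, ("B", "A"): 1,
    ("B", "C"): 2, ("C", "B"): 2,
    ("C", "D"): 1, ("D", "C"): 1,
    ("A", "D"): 5, ("D", "A"): 5,
}

# dist[r][d] = best known cost from router r to destination d
dist = {r: {d: (0 if r == d else cost.get((r, d), INF)) for d in routers} for r in routers}

changed = True
while changed:                            # keep "exchanging tables" until nothing improves
    changed = False
    for r in routers:
        for n in routers:
            if (r, n) not in cost:
                continue                  # n is not a direct neighbour of r
            for d in routers:
                if cost[(r, n)] + dist[n][d] < dist[r][d]:
                    dist[r][d] = cost[(r, n)] + dist[n][d]
                    changed = True

print(dist["A"])   # A learns that going via B and C (cost 4) beats its direct link to D (cost 5)
```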
Compression is everywhere, and it is essential to the efficient transmission and storage of information.
Key Exchange Encryption is made possible by establishing a single, shared mathematical secret between the two parties, who don't even know each other; that secret is used to encrypt the data as well as decrypt it, all over a public network and without anyone else being able to figure out the secret.
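The classic way that shared secret gets established is the Diffie-Hellman key exchange. The toy version below uses the textbook numbers p = 23 and g = 5; real deployments use primes thousands of bits long (or elliptic curves), so treat this strictly as a sketch of the idea.

```python
import secrets

# Toy Diffie-Hellman key exchange with textbook-sized numbers (never use these for real).
p = 23      # a small public prime
g = 5       # a public generator

alice_private = secrets.randbelow(p - 2) + 2     # kept secret by Alice
bob_private = secrets.randbelow(p - 2) + 2       # kept secret by Bob

alice_public = pow(g, alice_private, p)          # sent openly across the network
bob_public = pow(g, bob_private, p)              # sent openly across the network

# Each side combines the other's public value with its own private one...
alice_secret = pow(bob_public, alice_private, p)
bob_secret = pow(alice_public, bob_private, p)

assert alice_secret == bob_secret                # ...and both land on the same secret
print("shared secret:", alice_secret)            # an eavesdropper saw only p, g and the public values
```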
Searches and Sorts are a special form of algorithm in that there are many very different techniques used to sort a data set or to search for a specific value within one, and no single one is better than another all of the time. The quicksort algorithm might be better than the merge sort algorithm if memory is a factor, but if memory is not an issue, merge sort can sometimes be faster;
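To see how differently the two approaches attack the same job, here are stripped-down versions of both. They are teaching sketches rather than tuned implementations: the memory trade-off mentioned above comes from running quicksort in place versus merge sort's extra buffer, which these simple list-building versions don't attempt to show.

```python
# Stripped-down quicksort and merge sort, for illustration only.

def quicksort(items):
    if len(items) <= 1:
        return items
    pivot = items[len(items) // 2]
    left = [x for x in items if x < pivot]       # everything smaller than the pivot
    middle = [x for x in items if x == pivot]
    right = [x for x in items if x > pivot]      # everything larger than the pivot
    return quicksort(left) + middle + quicksort(right)

def mergesort(items):
    if len(items) <= 1:
        return items
    mid = len(items) // 2
    left, right = mergesort(items[:mid]), mergesort(items[mid:])
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):      # weave the two sorted halves together
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    return merged + left[i:] + right[j:]

data = [7, 2, 9, 4, 4, 1]
print(quicksort(data), mergesort(data))          # both print [1, 2, 4, 4, 7, 9]
```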
Dijkstra's Shortest Path is one of the most widely used algorithms in the world, and in the twenty-odd minutes it took to devise in 1959, Dijkstra enabled everything from GPS routing on our phones, to signal routing through telecommunication networks, to any number of time-sensitive logistics challenges like shipping a package across the country. As a search algorithm, Dijkstra's Shortest Path stands out from the others simply for the enormity of the technology that relies on it.
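A compact version of Dijkstra's Shortest Path looks like the sketch below; the "road network" and its travel times are invented purely for illustration.

```python
import heapq

# Dijkstra's shortest-path algorithm over a small, made-up weighted graph.
def dijkstra(graph, start):
    dist = {start: 0}
    queue = [(0, start)]                          # (best distance found so far, node)
    while queue:
        d, node = heapq.heappop(queue)            # always expand the closest unsettled node
        if d > dist.get(node, float("inf")):
            continue                              # stale queue entry; a better route was already found
        for neighbour, weight in graph[node]:
            new_d = d + weight
            if new_d < dist.get(neighbour, float("inf")):
                dist[neighbour] = new_d
                heapq.heappush(queue, (new_d, neighbour))
    return dist

graph = {                                         # edge weights could be minutes of travel time
    "home": [("junction", 4), ("motorway", 9)],
    "junction": [("motorway", 3), ("office", 8)],
    "motorway": [("office", 2)],
    "office": [],
}
print(dijkstra(graph, "home"))   # shortest times from "home" to every other point
```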
——–
At the moment there are relatively few instances where algorithms should be deployed without any human oversight or ability to intervene before the action resulting from the algorithm is initiated.
The assumptions on which an algorithm is based may be broadly correct, but in areas of any complexity (and which public sector contexts aren’t complex?) they will at best be incomplete.
Why?
Because the code of algorithms may be unviewable in systems that are proprietary or outsourced.
Even if viewable, the code may be essentially uncheckable if it’s highly complex; where the code continuously changes based on live data; or where the use of neural networks means that there is no single ‘point of decision making’ to view.
Virtually all algorithms contain some limitations and biases, based on the limitations and biases of the data on which they are trained.
Though there is currently much debate about the biases and limitations of artificial intelligence, there are well known biases and limitations in human reasoning, too. The entire field of behavioural science exists precisely because humans are not perfectly rational creatures but have predictable biases in their thinking.
Some are calling this the Age of Algorithms and predicting that the future of algorithms is tied to machine learning and deep learning that will get better and better at an ever-faster pace. Whatever lies on the other side of the classical-post-classical divide is likely to be far more massive than it looks from over here, and any prediction about what we'll find once we pass through it is as good as anyone else's.
It is entirely possible that before we see any of this, humanity will end up bombing itself into a new dark age that takes thousands of years to recover from.
The entire field of theoretical computer science is about finding the most efficient algorithm for a given problem, and the most difficult of these problems aren't just academic; they are at the very core of some of the most challenging real-world scenarios that play out every day.
Quantum computing is a subject that a lot of people, myself included, have gotten wrong in the past and there are those who caution against putting too much faith in a quantum computer’s ability to free us from the computational dead end we’re stuck in.
The most critical of these is the problem of optimization:
How do we find the best solution to a problem when we have a seemingly infinite number of possible solutions?
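The travelling salesman problem is the classic way to feel the force of that question. The sketch below brute-forces the best route through five invented cities, which is trivial, and then shows how quickly the number of possible routes explodes beyond anything brute force can handle.

```python
import math
from itertools import permutations

# Brute-force travelling salesman over five made-up cities (coordinates are arbitrary).
cities = {"A": (0, 0), "B": (1, 5), "C": (4, 2), "D": (6, 6), "E": (3, 7)}

def route_length(route):
    # Total distance of visiting the cities in this order and returning to the start.
    return sum(math.dist(cities[a], cities[b]) for a, b in zip(route, route[1:] + route[:1]))

best = min(permutations(cities), key=route_length)    # tries every possible ordering
print(best, round(route_length(best), 2))

# 5 cities means 120 orderings; 20 cities means about 2.4 quintillion.
print(math.factorial(20))   # brute force stops being an option almost immediately
```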
While it can be fun to speculate about specific advances, what will ultimately matter much more than any one advance will be the synergies produced by these different advances working together.
Synergies are famously greater than the sum of their parts, but what does that mean when your parts are blockchain, 5G networks, quantum computers, and advanced artificial intelligence?
DNA computing, however, harnesses the ability of nucleotides, DNA's building blocks, to assemble themselves into long strands of DNA.
It's why we can say that quantum computing won't just be transformative: humanity is genuinely approaching nothing short of a technological event horizon.
Quantum computers will only give you a single output, either a value or a resulting quantum state, so their utility in solving problems with exponential or factorial time complexity will depend entirely on the algorithm used.
One inefficient algorithm could have kneecapped the Internet before it really got going.
It is now obvious that there is no going back.
The question now is whether there is any way of curtailing their power.
This can now only be achieved with the creation of an open-source platform where the users control their data rather than having it used and mined. (The users can sell their data if they want.)
This platform must be owned by the public and compete against existing platforms like Facebook, Twitter, WhatsApp and the rest, protected by an algorithm that safeguards the common values of all our lives – the truth.
Of course, it could be designed using existing algorithms, but that would defeat its purpose.
It would be an open network of people, a kind of planetary mind, one that always funds biosphere-friendly activities.
A safe harbour, perhaps called the New Horizon: a digital United Nations where the voices of cooperation could be heard.
So if by any chance there is a genius designer out there who could build such a platform, they might change the future of all our digitalized lives for the better.
All human comments appreciated. All like clicks and abuse chucked in the bin.
Contact: bobdillon33@gmail.com