(A seven-minute read)
I am sure you will agree when I state the obvious: the world has more than enough problems on its plate.
While we are all distracted, not a day goes by without some new app appearing.
(What lies behind our current rush to automate everything we can imagine? Perhaps it is an idea that has leaked out into the general culture from cognitive science and psychology over the past half-century: that our brains are imperfect computers. If so, surely replacing them with actual computers can have nothing but benefits. Yet even in fields where the algorithm's job is a relatively pure exercise in number-crunching, things can go alarmingly wrong.)
I HAVE JUST WATCHED ON TV THE FIRST PERSON WHO COULD BE DESCRIBED AS AN ALGORITHM SLAVE, AND IT MADE ME WONDER.
One day, the makers of an algorithm-driven psychotherapy app could be sued by the survivors of someone to whom it gave the worst possible advice.
When we seek to hand over our decision-making to automatic routines in areas that have concrete social and political consequences, the results might be troubling indeed.
However, we are where we are, with most of us unable to conduct our lives without our smartphones, the internet, and the algorithms behind them.
Is there still a place for human judgment?
Our age elevates the precision-tooled power of the algorithm over flawed human judgment.
From web search to marketing, stock-trading, and even education, policing, medical care, and credit rating, the power of computers that crunch data according to complex sets of if-then rules is promised to make our lives better in every way.
Automated retailers tell you which book you want to read next; dating websites compute your perfect life-partner; self-driving cars will reduce accidents; crime will be predicted and prevented algorithmically.
If only we minimize the input of messy human minds, we can all have better decisions made for us. So runs the hard sell of our current algorithm fetish, which is quietly eroding our free will.
Automatic analysis of our smartphone geolocation, internet-browsing and social-media data-trails grows ever more sophisticated, and so we can thin-slice demographic categories ever more precisely.
From such information, it is possible to infer personal details (such as sexual orientation or use of illegal drugs) that have not been explicitly supplied, and sometimes to identify unique individuals. Even when such information is simply used to target adverts more accurately, the consequences can be troubling.
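To see the kind of inference involved, here is a deliberately toy sketch. Every site name and weight below is invented for illustration; real systems use vastly richer signals, but the principle is the same: a simple tally over a browsing trail outputs a confident guess about something the user never stated.

```python
# Toy attribute inference from a browsing trail: each (hypothetical) site
# contributes weighted evidence for an attribute; we guess the attribute
# with the highest total score. All names and weights are invented.
SIGNALS = {
    "pharmacy-forum.example": {"health-condition": 2},
    "nightlife-guide.example": {"age-under-30": 1},
    "mortgage-calc.example":  {"age-over-30": 2, "homeowner": 1},
}

def infer(trail):
    """Return the highest-scoring inferred attribute, or None."""
    scores = {}
    for site in trail:
        for attr, weight in SIGNALS.get(site, {}).items():
            scores[attr] = scores.get(attr, 0) + weight
    return max(scores, key=scores.get) if scores else None

trail = ["mortgage-calc.example", "pharmacy-forum.example",
         "mortgage-calc.example"]
print(infer(trail))  # -> age-over-30
```

The point of the sketch is not the arithmetic but the asymmetry: the user supplied only a list of visits, yet the output is a labelled personal attribute.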
So let me ask you.
How do algorithms decide exactly what should count as ‘hate speech’ or obscenity?
No one knows, because the company, quite understandably, isn't going to give away its secrets. Rather than pursuing mere lexicographical analysis, such a system of automated pre-censorship is making moral judgments.
We need to create a class of ‘algorithmic auditors’ — trusted representatives of
the public who can peer into the code to see what kinds of implicit political and
ethical judgments are buried there and report their findings back to us. This is a
good idea, though it poses practical problems about how companies can retain
the commercial edge provided by their computerized secret sauce if they have
to open up their algorithms to quasi-official scrutiny.
It is very unlikely that this will happen. We are, however, in danger of app exploitation not only for profit but also in areas where there is no immediate cash peril: culture, education, and crime.
We are well on the road to becoming slaves to the algorithms, with computers taking more than a few tough choices out of our hands if we let them.
Such automated augury might be considered relatively harmless if its use is
confined to figuring out what products we might like to buy.
But it is not going to stop there.
There is so much out there that even the most popular human ‘curators’ cannot possibly keep on top of all of it.
If we erect algorithms as our ultimate judges and arbiters, we face the threat of difficulties not only in law-enforcement but also in culture.
Would it then be acceptable to deny people their freedom on such an algorithmic basis?
If you are feeling gloomy about the automation of higher education, the death of newspapers, and global warming, you might want to talk to someone, and there is an algorithm for that, too. A new wave of smartphone apps with eccentric titular orthography (iStress, myinstantCOACH, MoodKit, BreakkUp) promises a psychotherapist in your pocket. Thus far they are not very intelligent and require the user to do most of the work, though this second drawback could be said of many human counselors too. Such apps hark back to one of the legendary milestones of 'artificial intelligence', the 1960s computer program called ELIZA.
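ELIZA worked not by understanding but by pattern-matching: it matched the user's sentence against canned templates and reflected pronouns back. A toy sketch of the technique (the patterns below are illustrative, not Weizenbaum's original script):

```python
import re

# Toy ELIZA-style responder: match a pattern, reflect pronouns, fill a template.
REFLECTIONS = {"i": "you", "my": "your", "am": "are", "me": "you"}
RULES = [
    (re.compile(r"i feel (.*)", re.I), "Why do you feel {0}?"),
    (re.compile(r"i am (.*)", re.I),   "How long have you been {0}?"),
    (re.compile(r".*"),                "Please tell me more."),  # catch-all
]

def reflect(fragment: str) -> str:
    """Swap first-person words for second-person ones ('my' -> 'your')."""
    return " ".join(REFLECTIONS.get(w.lower(), w) for w in fragment.split())

def respond(statement: str) -> str:
    for pattern, template in RULES:
        match = pattern.match(statement.strip())
        if match:
            return template.format(*(reflect(g) for g in match.groups()))

print(respond("I feel trapped by my phone"))
# -> Why do you feel trapped by your phone?
```

As the sketch makes plain, the "therapist" does all its work with a handful of string substitutions, which is precisely why Weizenbaum was alarmed that users confided in it anyway.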
Indeed, a backlash to algorithmic fetishism is already underway — at least in those areas where a dysfunctional algorithm’s effect is not some gradual and hard-to-measure social or cultural deterioration but an immediate difference to the bottom line of powerful financial organizations.
Are we all so brain-dead that we are passively becoming technological slaves?
I know that there is little point in closing the gate when the cow has departed, but we have got to start somewhere, and soon, if the next generation is to function as intelligent, free people. It cannot be stopped.
At the moment there are a lot of dummy robots in existence, but if they acquire intentional desires, what then? What happens when they can adjust those desires?
It's too late. Science fiction will be a lie that tells the truth.
At the moment there is no set of values for AI. Just write a little program and wait and see what happens.
HISTORY IS LITTERED WITH THE ANSWER, AND IT’S NOT GOOD.
It is time for the United Nations to establish A CLOUD STRONGROOM, WHERE ALL AI PROGRAMMES ARE REQUIRED TO DEPOSIT A COPY OF THE ORIGINAL PROGRAM OR ALGORITHM, AVAILABLE TO ONE AND ALL. ON DOING SO, THE UN ISSUES, UNDER ITS FOUNDING CHARTER, A WORLD-VALIDATION-APPROVED LICENSE.
All human comments appreciated. All 'like' clicks chucked in the bin.
Global registration. Accuracy is essential.