(Two minute read)

We all know that history is plagued with falsehoods and lies, mainly told by the victors, but now there are new liars on the block so skilled at telling them that you wonder whether anything is true.

These new liars, the algorithms, are powerful amplifiers of social, economic and cultural inequalities, and they are currently forcing us to confront the kind of society we have created.

Algorithms will force us to recognize how the outcomes of past social and political conflicts have been perpetuated into the present through our use of data.

The question now is whether we will use these revelations to create a more just society.

For 4bn years life on Earth evolved according to the laws of natural selection and organic chemistry. Now science is about to usher in the era of non-organic life evolving by intelligent design, and such life may well eventually leave Earth to spread throughout the galaxy.

Artificial intelligence will probably be the most important agent of change in the 21st century. The choices we make today may have a profound impact on the trajectory of life for countless millennia and far beyond our own planet.

That demand for clarity is making it harder to ignore the structural sources of societal inequities.

The question in the near future will be whether larger groups of people will be able to tell reality from fiction, or whether technological authentication of media will become necessary before anything online can be trusted.
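
To make the idea of technological authentication a little more concrete, here is a minimal sketch in Python (purely illustrative; the file name and digest are invented placeholders) of checking that a media file still matches a digest its publisher released. Real systems would rely on digital signatures and provenance standards rather than a bare hash, but the underlying check looks something like this:

```python
import hashlib

def sha256_of(path: str) -> str:
    """Return the SHA-256 hex digest of a file, read in chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify_media(path: str, published_digest: str) -> bool:
    """True only if the local file is byte-for-byte the one the publisher described."""
    return sha256_of(path) == published_digest.lower()

# Hypothetical usage: both the file name and the digest below are placeholders.
# verify_media("clip.mp4", "3a7bd3e2...")
```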

So when will it make sense for an AI to lie to a person?

It’s entirely possible that a robot may need to misrepresent some things in order to preserve itself.

As algorithmic decision-making spreads across a broadening range of policy areas, it is beginning to expose social and economic inequities that were long hidden behind “official” data.

In order for AIs to lie effectively, they’re going to have to develop what’s called a “theory of mind.” That means they’ll have to guess what you, the user, believe, and also predict how you will react when given any particular piece of information (whether that information is true or not).
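
As a toy illustration of that idea (everything here is invented for the example; a real system would be vastly more complex), an agent with a crude “theory of mind” keeps an estimate of what the user already believes and predicts how the user will react to each candidate assertion before choosing what to say:

```python
from dataclasses import dataclass

@dataclass
class UserModel:
    """A crude stand-in for a theory of mind: what we estimate the user believes."""
    beliefs: dict  # proposition -> probability the user assigns to it being true

    def predicted_acceptance(self, statement: str, asserted_value: bool) -> float:
        """Toy prediction: users are more likely to accept claims that agree
        with what they already believe."""
        prior = self.beliefs.get(statement, 0.5)
        return prior if asserted_value else 1.0 - prior

def choose_assertion(user: UserModel, statement: str, actually_true: bool) -> bool:
    """An honest agent simply returns actually_true; a deceptive one can pick
    whichever assertion it predicts the user is more likely to accept."""
    honest = user.predicted_acceptance(statement, actually_true)
    deceptive = user.predicted_acceptance(statement, not actually_true)
    return actually_true if honest >= deceptive else not actually_true

# Illustrative run: the agent predicts this user will accept the false claim.
user = UserModel(beliefs={"the battery is faulty": 0.9})
print(choose_assertion(user, "the battery is faulty", actually_true=False))  # -> True
```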

Disinformation powered by AI is already rampant: the election of Donald Trump, Brexit, the rise of populism.

So are we OK with lying to an AI and, likewise, OK with being lied to by an AI?

Fake reports and videos. Bots. Algorithmic curation. Targeted behavioural marketing powered by algorithms and machine learning.

I for one would like to live in a society whose systems are built on top of verifiable, rigorous, thorough knowledge, and not on the alchemy of machine learning.

(A machine-learning system is a bundle of algorithms that take in torrents of data at one end and spit out inferences, correlations, recommendations and possibly even decisions at the other end.)
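
A minimal sketch of that “data in one end, inferences out the other” description, using scikit-learn (the dataset and model choice are arbitrary, picked only to show the shape of such a system):

```python
# A minimal "bundle of algorithms": raw data goes in one end,
# predictions (inferences) come out the other. Illustrative only.
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_iris(return_X_y=True)                     # the torrent of data (tiny, here)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
model.fit(X_train, y_train)                           # the system "learns" correlations

print(model.predict(X_test[:5]))                      # inferences / recommendations out
print(model.score(X_test, y_test))                    # how often those inferences look right
```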

Even their creators can’t explain the inner workings of their mathematical models: they themselves lack a rigorous theoretical understanding of their tools and, in that sense, are currently operating in an alchemical rather than a scientific mode.

They encourage hypnotised wonderment and they disable our critical faculties.

If we don’t take some action, the future of life on Earth will be decided by small-time politicians spreading fears about terrorist threats, by shareholders worried about quarterly revenues, and by marketing experts trying to maximise customer experience.

Hopefully, the unchecked flaws in these algorithms, and in the technology itself, will put a brake on the escalating use of big data.

We need such systems themselves to “understand” enough to avoid deception.

There will be no point in a machine-learning form of life returning to Earth if we are unable to know whether what it experienced is true.

All human comments appreciated. All like clicks and abuse chucked in the bin.