
We have no idea how the world will look in twenty years, never mind fifty, when most of this generation will be in their seventies. But it is now becoming clear beyond any doubt that AI and its algorithms are drastically changing the world we live in, both for good and for bad.

One thing is for certain: artificial intelligence is having, and will continue to have, a more profound effect than electricity or fire.

It will not just hack our lives and our brains; it will hack our very existence.

It might well warn us about climate change or the coronavirus, but it will also, as it already does, manipulate our needs, wants, and beliefs. It will effectively be controlling people for both commercial and political purposes.

Given the force of this technology, we need meaningful regulation if we are to have any control left; if not, we might as well surrender to algorithms that are becoming so complex that no one will understand them.

The reality is that most of us are giving rivers of free information to Big Data, to the extent that we will soon be unable to think for ourselves.

If we don’t get a grip, then by the time this generation reaches its seventies, our futures and the futures of the next generations will be decided at random by unelected platforms.

If this is so, the decision-making process will, for us all, become a thing of the past.

The outlook for AI is both grim and exciting.

Already we see collected data affecting elections, with our ability to know what is fake and what is true at the mercy of social media run by algorithms.

We all know their faces: Google, Microsoft, Amazon, Facebook, Apple, Baidu in China, Twitter, Alibaba, to name a few that are already transforming the basic structure of life.

Taken together they form a global oligopoly.

These unregulated platforms are all competing for dominance, each with its own conflicts of interest. Hence their algorithms.

The chance of their introducing self-regulation that would limit or stop their own development is pie in the sky.

It’s time we stopped thinking about AI in purely scientific terms.

Why?

Because algorithms are making critical decisions about our lives, and the tech-driven approach to governance is growing. Because whatever scenario actually unfolds will be far from what we think is true today, and we are running out of time to do anything about it.

To call a spade a spade, there is little or no understanding of how to regulate these emerging technologies. Even the governments and world institutions that could do so are largely unequipped to create and enforce any meaningful regulation for the public benefit.

The problem is: how does one regulate an algorithm that learns?

You might be happy ceding all authority to algorithms and their owners. If so, you don’t have to do anything; they will take care of everything.

If not, consider that algorithms are watching you right now to ensure that you do not read this post, and if you do read it, they will use one of the most powerful tools in their big-data arsenal: split and divide, through fake news and its endless repetition, for example.

It won’t happen overnight, since development cycles often take years, but our collective past will become a less reliable guide and we will have to adapt to the unknown.

Unfortunately, teaching the unknown without mental balance is a disaster in waiting.

It might be easy to laugh at this now, but unless we make our voices heard, rather than heard as like-clicks, it will not be climate change that changes us but the racial bias that is already programmed into the tech world.

We’re starting to see many examples where these algorithms are prone to the kinds of biases and limitations that we see in human decision-making, and increasingly we are moving towards algorithms that learn more and more from data.

I say that learning from this data almost institutionalizes the biases.

Why?

Because they are trying to personalize the media they curate for us. They’re trying to find more and more of the kinds of content that we already consume.
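
To see how that narrowing happens, here is a minimal sketch of the personalization feedback loop. It is a toy illustration in Python with invented data, not any platform’s actual system: a recommender that scores content by similarity to what a user has already clicked keeps serving more of the same, so the user’s diet narrows with every round.

```python
import numpy as np

rng = np.random.default_rng(0)
items = rng.normal(size=(500, 2))   # toy catalogue: items in a 2-D "topic space"
history = [items[0]]                # the user starts with a single click

def recommend(history, items, k=5):
    """Score every item by similarity to the user's average past click."""
    profile = np.mean(history, axis=0)
    scores = items @ profile        # higher dot product = more similar
    return items[np.argsort(scores)[-k:]]

# Each round the user clicks the top recommendation, so the profile
# drifts further towards content they already consume.
for _ in range(10):
    history.append(recommend(history, items)[-1])

# The consumed items end up clustered in one corner of topic space.
print(np.std(np.array(history), axis=0))
```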

So what if anything can be done?

Even if we do eventually introduce regulations, they will have little effect unless we find a way of sharing the benefits of AI.

The problem is that our institutions and our education models are not able to keep up with developments in artificial intelligence. We are becoming more and more detached from decision-making, from contributing, and from the rewards.

So our governments are leaving it to the market, to the big tech companies themselves.

If you are expecting some kind of warning when AI finally gets smarter than us, then think again.

I say our algorithms are hanging out with the wrong data: profit for profit’s sake.

In reality, our electronic overlords are just getting started, with smartphones, iPads, Alexa and the like taking control. We have to think about other measures: is there a social contribution, and what is the impact of this algorithm on society?

This requires transparency.

But how do you create transparency in a world that is getting so complex?

Here is my solution.

Pharmaceuticals are among the most highly regulated industries worldwide, and every country has its own regulatory authority when it comes to the drug development process.

(The World Health Organization (WHO), Pan American Health Organization (PAHO), World Trade Organization (WTO), International Conference on Harmonization (ICH), and World Intellectual Property Organization (WIPO) are some of the international regulatory agencies and organizations that play an essential role in all aspects of pharmaceutical regulation: drug product registration, manufacturing, distribution, price control, marketing, research and development, and intellectual property protection.)

Why not put in place a new World Governing Body to test and control AI algorithms, one that acts as a guardian of our basic human values?

If this is not done, it will remain impossible to truly cooperate with an AI or a corporation until such entities have values in the same sense that we do.

So:

All companies already using algorithms should be legally required to submit the software running those algorithms for audit by an independent team, to ensure that our human values are complied with.

This audit could be carried out under a United Nations programme agreed worldwide.

Because algorithms are constantly evolving as they gather more data, the audit process would have to be recurring, say every ten years, much like an ongoing quality-control regime. We might also need an algorithm to monitor the auditing algorithm, to ensure it is not contaminated while it goes through the refresh cycle of the algorithm it is auditing.

Then the audited algorithms must be made transparent, with a certification of acceptable behaviour.
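
What might such an audit actually compute? Here is a minimal sketch, with invented data and a commonly cited threshold, of one widely used fairness check: the disparate-impact ratio, which compares the rate of favourable decisions across two groups.

```python
import numpy as np

def disparate_impact(decisions, group):
    """Ratio of favourable-outcome rates between two groups.

    decisions: 1 = favourable (e.g. loan approved), 0 = not
    group:     0/1 membership flag (e.g. a protected attribute)
    """
    rate_a = decisions[group == 0].mean()
    rate_b = decisions[group == 1].mean()
    return min(rate_a, rate_b) / max(rate_a, rate_b)

# Invented audit data: 1,000 decisions made by some deployed model,
# skewed on purpose so the check has something to find.
rng = np.random.default_rng(42)
group = rng.integers(0, 2, size=1000)
decisions = rng.binomial(1, np.where(group == 0, 0.6, 0.4))

ratio = disparate_impact(decisions, group)
print(f"disparate impact ratio: {ratio:.2f}")
if ratio < 0.8:  # the "four-fifths" rule of thumb
    print("FLAG: outcome rates differ enough to warrant investigation")
```

A real audit would of course go far beyond a single ratio, but even this simple check illustrates what a certifying body could require as a minimum.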

Transparency for end users is actually very basic.

It’s not like an end-user wants to know the inner details of every algorithm we use.

But we would actually benefit from knowing what’s going on at a high level.

For example, what kinds of data are being used by the algorithms to make decisions?

Let me recommend some transparency measures.

Keeping in mind that these algorithms are deployed by humans, and for humans, anyone impacted by a decision made by an algorithm should have the right to a description of the data used to train it, and details of how that data was collected.
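
One hypothetical form that right could take is a standard, machine-readable datasheet published alongside every deployed algorithm. The sketch below invents its fields and example values; it borrows the general idea of datasheets and model cards rather than describing any existing legal standard.

```python
from dataclasses import dataclass, field, asdict
import json

@dataclass
class Datasheet:
    """Hypothetical public record accompanying a deployed algorithm."""
    system_name: str
    operator: str
    decision_made: str
    data_sources: list = field(default_factory=list)
    collection_methods: list = field(default_factory=list)
    known_limitations: list = field(default_factory=list)
    last_audit: str = "never"

# Invented example of what an operator might be required to publish.
sheet = Datasheet(
    system_name="loan-scoring-v3",
    operator="ExampleBank",
    decision_made="approve or decline consumer loan applications",
    data_sources=["credit bureau records", "transaction history"],
    collection_methods=["purchased from bureau", "collected in-app"],
    known_limitations=["sparse data for applicants under 21"],
    last_audit="2023-06-01",
)
print(json.dumps(asdict(sheet), indent=2))
```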

The public have little understanding of, or access to, information about how governments are using data, much of it collected quietly, to feed the algorithms that make decisions about everyday life. And, in an uncomfortable twist, the government agencies themselves often do not fully understand how algorithms influence their decisions.

Having more and more data will not, on its own, solve problems such as gender bias and racial bias.

Perhaps the notion of control may only be an illusion.

It won’t be long before they are latching on to life forms. For example, there’s a type of machine-learning algorithm known as a neural network, modelled on the human brain. What happens in these algorithms is that they take in lots of data and learn to make decisions the way humans have made them.
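
A minimal sketch of that learning process, on toy data: a single neuron with a sigmoid output (the simplest building block of a neural network), trained on past human decisions, faithfully reproduces whatever pattern, bias included, those decisions contain.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy "past human decisions": the first feature predicts the outcome only
# because previous decision-makers consistently treated it that way.
X = rng.normal(size=(200, 2))
y = (X[:, 0] + 0.5 * rng.normal(size=200) > 0).astype(float)

# One neuron with a sigmoid output, trained by gradient descent on log-loss.
w = np.zeros(2)
for _ in range(500):
    p = 1 / (1 + np.exp(-X @ w))         # predicted probability of a "yes"
    w -= 0.1 * (X.T @ (p - y)) / len(y)  # gradient step

print("learned weights:", w)  # the model has absorbed the historical pattern
```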

I think this field hasn’t yet emerged.

Humans aren’t changing that much. But the algorithms, the way they’re created, the technological side of it, continue to change and to evolve. Trying to keep those things in sync seems to be the greatest challenge.

We live in a world where algorithms are deciding who gets what; the questions are how machine decisions are made and how humans and machines can work together.

We are going to use these systems so much that we have to understand them at a deeper level, and we can’t be passive about it anymore, because the consequences are very significant, whether we’re talking about democracy, about curating news stories for citizens, about use by doctors in medicine, or about use in the courtroom, and so on.

As we roll out algorithms in more and more important settings, it is going to be extremely important that we start understanding what drives trust in these machines, and what the socially important outcomes of interest are, so that we can measure these algorithms against those outcomes, fairness among them.

Given everything we know about the world, and indeed the universe as a whole, does anyone seriously believe that nationalism and populism will help us with this technological problem?

Let’s talk about what data are collected about us.

It is far too late to be talking about privacy; privacy is what gets abused.

Let’s fight against everything we can control that limits our freedom, whether it’s an algorithm, a hungry judge, or a greedy state backing the wrong econometric model…

We need to rethink how we do education; we need to rethink how we do regulation; and firms also need to stand up, do a better job of auditing, and take responsibility as well.

Of course, none of this will happen.

Humans are more likely to be divided between those who favour giving algorithms and AI significant authority to make decisions and those opposed to it, each side justifying its own position, while algorithmic logic drives greed and inequality to the point where we lose control of transparency completely.

To stay relevant, as Yuval Noah Harari says in his book 21 Lessons for the 21st Century, “we will need to be asking the questions: how am I, where am I.”

Their testing rarely goes beyond technical measures, which is causing society to become more polarized and making it ever more unlikely that we can appreciate other viewpoints.

Just knowing that an algorithm made the decision would be a good place to start.

Was an algorithm used?

If so, who does it belong to?

What kinds of data did the algorithm use?

Today, algorithms and artificial intelligence are creating a much more frightening world. High-frequency trading algorithms already rule the stock exchanges of the world.

Personally, I would neither overestimate nor underestimate the role and threat of algorithms. Behind every smart web service is some smarter web code.

So we need to make sure their design is not only informed by knowledge of the human users, but that knowledge of their design is also suitably conveyed to those users, so we don’t eliminate the human from the loop completely.

If not, they will become black boxes, even to their engineers.

All our lives are constrained by limited space and time, limits that give rise to a particular set of problems that are being exploited by profit-seeking algorithms. 

There’s so much data out there to be analyzed. And right now it’s just sitting there not doing anything. So maybe we can come up with a solution that will at least get us started on it.

It is a fascinating topic because there are so many variables involved.

All human comments appreciated. All like clicks and abuse chucked in the bin. 