
(Ten-minute read) 

Algorithms in Decision Making.

 

We should be warier of their power. People no longer need to understand something to do it; the algorithm does it for them.

They are increasingly determining our collective future.

We are already halfway towards a world where algorithms run everything.

With the current pandemic, the question is not who will survive, but how, and at what cost: not just to economic systems but to our hard-earned freedoms.

Times will be rough as society tries to strike an appropriate balance over who gets the jab first.

When algorithms involve machine learning (like track and trace), they ‘learn’ patterns from ‘training data’, which may be incomplete or unrepresentative of those subsequently affected by the resulting algorithm.
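A minimal sketch of how that goes wrong, with invented numbers: a classifier ‘trained’ only on data from one group can systematically mislabel another group whose data looks different.

```python
# Hypothetical illustration: all data and the 'training' rule are invented.
# A cutoff learned from one group's data mislabels a group it never saw.

def train_threshold(samples):
    """'Learn' a cutoff as the mean of the training samples."""
    return sum(samples) / len(samples)

# Training data drawn only from group A.
group_a = [4.0, 5.0, 6.0, 5.5, 4.5]
threshold = train_threshold(group_a)  # 5.0

def predict(value, threshold):
    return "positive" if value >= threshold else "negative"

# Group B was absent from training and genuinely clusters lower,
# so its true positives all fall below the learned cutoff.
group_b_true_positives = [3.0, 3.5, 4.0]
predictions = [predict(v, threshold) for v in group_b_true_positives]
print(predictions)  # every true positive is misclassified as "negative"
```

The toy ‘model’ is deliberately trivial; the point is that the bias comes from the training data, not from any intent in the code.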

Modern algorithm developers are focusing on creating algorithms that learn and develop with the data they encounter. The next step they are aiming for is machine learning in which the algorithms decide the inputs and outputs completely on their own.

One of the world’s most used algorithms right now is the search engine algorithm of Google. It determines what people find in their internet searches and is the basis of the entire SEO industry, where people try to ensure that they show up in the top spot.

However, algorithms are much more prevalent than that: the Apple FaceID algorithm decides whether you are who you say you are.

Social media algorithms, tuned to your desires and wants, ensure that everything on your feed will be of interest to you, without you knowing what data these algorithms use or what they aim for.
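A hypothetical sketch of what such tuning can look like: posts ordered by an opaque predicted-interest score. The field names and weights here are invented; the point is that the person reading the feed never sees them.

```python
# Invented example of engagement-driven feed ranking.
# The weights below are the kind of thing the user never gets to inspect.

posts = [
    {"id": 1, "topic_match": 0.2, "past_engagement": 0.9},
    {"id": 2, "topic_match": 0.8, "past_engagement": 0.1},
    {"id": 3, "topic_match": 0.6, "past_engagement": 0.7},
]

def interest_score(post):
    # Opaque weighting, chosen by the platform, hidden from the user.
    return 0.4 * post["topic_match"] + 0.6 * post["past_engagement"]

feed = sorted(posts, key=interest_score, reverse=True)
print([p["id"] for p in feed])  # prints [3, 1, 2]
```

Whatever maximises the score floats to the top, whether or not that serves the reader.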

(Google’s search algorithm is more closely guarded than classified secret documents)

It is very convenient to follow the advice of algorithms if you’re high-frequency trading on the stock exchange, but some algorithms limit people’s worldview, which can allow large population groups to be easily controlled.

This is why many of the issues raised in this post will require close monitoring, to ensure that the oversight of machine learning-driven algorithms continues to strike an appropriate and safe balance between recognizing the benefits (for healthcare and other public services, for example, and for innovation in the private sector) and the risks (for privacy and consent, data security and any unacceptable impacts on individuals).

Algorithms are being used in an ever-growing number of areas, in ever-increasing ways; however, like humans, they can produce bias in their results, even unintentionally.

They are bringing big changes in their wake: from better medical diagnoses to driverless cars, and within central governments, where there are opportunities to make public services more effective and achieve long-term cost savings.

To aid transparency as well as private-sector involvement, governments should produce, publish, and maintain a list of where algorithms with significant impacts are being used within central government, along with projects underway or planned for public-service algorithms.

Governments should not just simply accept what the developers of algorithms offer in return for data access.

This is now an urgent requirement because partnership deals are already being struck without the benefit of comprehensive national guidance for this evolving field.  

Given the international nature of digital innovation, governments should establish audits of algorithms, introduce certification of algorithms, and charge ethics boards with oversight of algorithmic decisions.

Governments should identify a ministerial champion to provide government-wide oversight of such algorithms, where they are used by the public sector, and to co-ordinate departments’ approaches to the development and deployment of algorithms and partnerships with the private sector.

Transparency must be a key underpinning for algorithm accountability.

Why?

Because it will make it easier for the decisions produced by algorithms to be explained. 

(The ‘right to explanation’ is a key part of achieving accountability and tackling the ethical implications around AI.)

Why?

Because they are also moving into areas where the benefits to those applying them may not be matched by the benefits to those subject to their ‘decisions’—in some aspects of the criminal justice system, for example.

Because algorithms built on social media datasets, like ‘big data’ analytics, need data to be shared across previously unconnected areas to find new patterns and new insights.

It’s not COVID-19 that will fuck us all; it’s profit-seeking algorithms.

All human comments appreciated. All like clicks and abuse chucked in the bin.
