(Twenty-minute read)

ADVANCING MORE RAPIDLY THAN ANY OTHER INNOVATION IN OUR HISTORY, THE PROBLEMS WITH TECHNOLOGY ARE BECOMING CLEARER BY THE DAY.

The sociological and psychological fallout of AI is not decades away: right here, right now, we are watching, in slow motion, a major meltdown of our shared sense of reality.

The discovery is that AI can treat language as probability and, from there, treat almost anything as a “language” of sorts:

DNA sequences? Yep. Robotics and motor learning? Yes, actually. Music? Definitely. Generation and recognition of images and sounds? Yes. Hacking and cryptography? Also yes. Persuasion? Yes, of course…
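To make “language as probability” concrete, here is a deliberately tiny sketch: a toy bigram model in Python. The corpus is invented for illustration, and real systems model vastly longer contexts, but the principle is the same, every next token is a probability.

```python
from collections import Counter, defaultdict

# Toy corpus standing in for "human knowledge coded into text" (invented example).
corpus = "we shape our tools and our tools shape us".split()

# Count bigram transitions: how often each word follows another.
transitions = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    transitions[prev][nxt] += 1

def next_token_probs(prev):
    """P(next | prev): language treated purely as probability."""
    counts = transitions[prev]
    total = sum(counts.values())
    return {word: n / total for word, n in counts.items()}

print(next_token_probs("our"))   # {'tools': 1.0} -- 'tools' always follows 'our' here
print(next_token_probs("tools")) # {'and': 0.5, 'shape': 0.5}
```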

It is the term or concept of “augmented intelligence”, which implies an enhancement rather than a replacement of human intelligence, that is becoming a less threatening alternative to the admittedly ominous-sounding “artificial intelligence”. It will be increasingly hard to know what is real and what is not. It will be hard to resist manipulation and persuasion. It will be hard to know where we begin and the agency of the machine ends.

It will be increasingly hard not to lose our minds as our shared sense of self and reality (our sociality, which we rely upon for our sanity) fractures. The scale and effect of this, in its sociological and psychological ramifications (not to mention economic and political ones), is in itself a rollercoaster ride of Nietzschean proportions.

Even if we remain agnostic about Nick Bostrom’s existential-risk scenario of superintelligent general AI, we can be fairly certain that the sociological moment of impact began more or less yesterday.

There’s just no way new capacities of this magnitude come about with this kind of speed, and then everything just goes back to normal. For all we know, normal might never happen again.

———

Symbiosis is a term commonly used in biology to describe the relationship between two different organisms that live together in close association, with both partners benefiting from the relationship.

TO A CERTAIN EXTENT THIS IS WHAT IS HAPPENING WITH THE APPLICATION OF TECHNOLOGY TO OUR LIVES, AS MANY PROBLEMS SIMPLY CANNOT BE FORMULATED OR RESOLVED WITHOUT THE HELP OF COMPUTERS.

IN OTHER WORDS:

The computer should act as a “serving agent,” providing the human with the information they need to make informed decisions.

It is recognized that the human mind has limitations, such as limited memory capacity and the inability to perform complex calculations quickly. Computers, on the other hand, have almost unlimited memory capacity and perform calculations at incredible speeds. By working together, humans and computers can overcome each other’s limitations and achieve a level of productivity that was not possible before.

Computers are now responsible for performing complex computations and storing vast amounts of information, while humans remain responsible for making judgements and decisions based on the information provided by the computer.

With the growing scale and complexity of information-processing tasks, one of the main objectives of human-computer symbiosis is to bring the computer effectively into the formative parts of technical problems.

‘The question is not “What is the answer?” The question is “What is the question?”’

THAT QUESTION BECOMES: CAN WE LIVE DIGITALIZED LIVES?

Nowadays, many intelligent systems work in a symbiotic relationship with humans.

For example,

Every time we rate a movie on Netflix, we are helping artificial intelligence understand our behaviour so that, in the future, it can recommend movies based on our preferences.
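As a rough sketch of how rating signals can drive recommendations, here is a toy user-based collaborative filter in Python. The users, movies, and ratings are invented, and Netflix’s actual system is far more sophisticated, but the core idea is the same: people with similar tastes predict each other’s preferences.

```python
import math

# Invented ratings (user -> {movie: stars}); stands in for the signals we give Netflix.
ratings = {
    "alice": {"Heat": 5, "Alien": 4, "Amélie": 1},
    "bob":   {"Heat": 4, "Alien": 5, "Solaris": 4},
    "carol": {"Amélie": 5, "Solaris": 2},
}

def similarity(a, b):
    """Cosine similarity over the movies two users have both rated."""
    common = set(ratings[a]) & set(ratings[b])
    if not common:
        return 0.0
    dot = sum(ratings[a][m] * ratings[b][m] for m in common)
    norm = math.sqrt(sum(ratings[a][m] ** 2 for m in common)) * \
           math.sqrt(sum(ratings[b][m] ** 2 for m in common))
    return dot / norm

def recommend(user):
    """Score unseen movies by the ratings of similar users."""
    scores = {}
    for other in ratings:
        if other == user:
            continue
        sim = similarity(user, other)
        for movie, stars in ratings[other].items():
            if movie not in ratings[user]:
                scores[movie] = scores.get(movie, 0) + sim * stars
    return sorted(scores, key=scores.get, reverse=True)

print(recommend("alice"))  # ['Solaris'] -- inferred mostly from Bob's similar taste
```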

In the financial industry, computers are widely used to process large amounts of data in real-time, identify trading patterns, and make accurate predictions. Financial analysts rely on computational analysis to buy and sell stocks or make risky financial moves.

The marketplace and its movements, which affect global economies, are run by algorithms for profit.
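For a flavour of what “identifying trading patterns” can mean at its very simplest, here is a toy moving-average crossover detector. The prices are invented; real trading systems consume live market feeds and use far richer models.

```python
# A toy "trading pattern" detector: a moving-average crossover signal.
# Prices are invented; real systems ingest live market data at scale.
prices = [100, 101, 103, 102, 105, 107, 106, 104, 101, 99, 98, 100]

def moving_average(series, window):
    return [sum(series[i - window + 1:i + 1]) / window
            for i in range(window - 1, len(series))]

short = moving_average(prices, 3)   # fast average reacts quickly
long_ = moving_average(prices, 5)   # slow average smooths out noise

# Align the two series on the same end dates, then flag crossovers.
offset = len(short) - len(long_)
for i in range(1, len(long_)):
    prev_diff = short[offset + i - 1] - long_[i - 1]
    diff = short[offset + i] - long_[i]
    if prev_diff <= 0 < diff:
        print(f"day {i + 4}: bullish crossover (buy signal)")
    elif prev_diff >= 0 > diff:
        print(f"day {i + 4}: bearish crossover (sell signal)")
```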

On the e-commerce sites we use, AI can make inferences, anticipate some tasks based on our shopping lists, and recommend products. This, too, is a symbiotic and collaborative relationship.

Grammarly is another example of how the symbiosis between humans and computers is being utilized in the writing field. Through its AI technology, the tool suggests grammatical and spelling corrections in real time, and using it can help users improve their vocabulary, as the tool offers synonyms and more suitable word choices.
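Grammarly’s internals are proprietary, but one classic spell-suggestion technique is easy to sketch: generate candidate words one edit away and rank those found in a dictionary by frequency. The dictionary and its counts below are invented for the example.

```python
# A minimal sketch of one classic spell-suggestion technique: generate
# candidates one edit away, keep those in a dictionary, rank by frequency.
# Grammarly's actual models are proprietary and far richer than this.
ALPHABET = "abcdefghijklmnopqrstuvwxyz"
DICTIONARY = {"technology": 900, "theology": 120, "ecology": 300}  # invented counts

def one_edit_away(word):
    splits = [(word[:i], word[i:]) for i in range(len(word) + 1)]
    deletes = {l + r[1:] for l, r in splits if r}
    inserts = {l + c + r for l, r in splits for c in ALPHABET}
    replaces = {l + c + r[1:] for l, r in splits if r for c in ALPHABET}
    swaps = {l + r[1] + r[0] + r[2:] for l, r in splits if len(r) > 1}
    return deletes | inserts | replaces | swaps

def suggest(word):
    candidates = one_edit_away(word) & set(DICTIONARY)
    return sorted(candidates, key=lambda w: -DICTIONARY[w])

print(suggest("techology"))  # ['technology'] -- one inserted 'n' away
```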

There are hundreds of other examples, but in order to build computational systems that adapt to human needs, we also need to understand how these intelligent systems work and execute tasks, and with machine learning this is becoming increasingly impossible.

In the current context, human-computer symbiosis becomes increasingly important as interactions between humans and machines become more frequent and complex, enabling users to engage with technology in a natural way.

As our interaction with intelligent systems increases every day, the principles of human-computer symbiosis become central to everyday life.


Defining the term “Augmented Intelligence” can be quite challenging since there are many definitions.

Different researchers and practitioners tend to define it in their own unique way.

I define it as the erosion of our ability to reason for ourselves.

AI-enabled frontier technologies are helping to save lives, diagnose diseases and extend life expectancy. In education, virtual learning environments and distance learning have opened up programmes to students who would otherwise be excluded. Public services are also becoming more accessible and accountable through blockchain-powered systems, and less bureaucratically burdensome as a result of AI assistance. Big data can also support more responsive and accurate policies and programmes.

The use of algorithms can replicate and even amplify human and systemic bias when they operate on data that is not adequately diverse. A lack of diversity in the technology sector can mean that this challenge is not adequately addressed.

Today, digital technologies such as data pooling and AI are used to track and diagnose issues in agriculture, health, and the environment, or to perform daily tasks such as navigating traffic or paying a bill.

They can be used to defend and exercise human rights – but they can also be used to violate them, for example, by monitoring our movements, purchases, conversations and behaviours. Governments and businesses increasingly have the tools to mine and exploit data for financial and other purposes.

Data-powered technology has the potential to empower individuals, improve human welfare, and promote universal rights, depending on the type of protections put in place.

Social media connects almost half of the entire global population.

It enables people to make their voices heard and to talk to people across the world in real time. However, it can also reinforce prejudices and sow discord, by giving hate speech and misinformation a platform, or by amplifying echo chambers.

How to manage these developments is the subject of much discussion – nationally and internationally – at a time when geopolitical tensions are on the rise. This information war is becoming so consequential that it can influence democracy and sway public opinion before an election, for instance.

We’re now on the verge, as a society, of appropriately recognizing the need to respect privacy in our Web 2.0 world.

In a world where everyone has an opinion and, more importantly, where everyone has the ability, if they choose, to share it with the rest of the world, one person’s hate speech can sometimes be another’s right to free speech.

Social media companies need to take “more responsibility” for what is on their platforms. There has to be a reckoning for what social media is making available [online].

IRELAND IS THE FIRST COUNTRY TO ADDRESS THE ABOVE.

As things stand in Ireland, hate speech is defined as any communication in public intended or likely to be threatening or abusive, and likely to stir up hatred against a person due to their race, colour, nationality, religion, ethnicity, Traveller origins, and/or sexual orientation. The proposed law will also make it an offence to deny or trivialise genocide. It will define a hate crime as any criminal offence which is perceived by the victim, or any other person, to have been motivated by prejudice.

The new legislation will criminalise any intentional or reckless communication or behaviour that is likely to incite violence or hatred against a person or persons because they are associated with a protected characteristic. The penalty for this offence will be up to five years’ imprisonment.

The provisions of the new legislation have been crafted to ensure that they will capture hate speech in an online context.

THIS IS THE FIRST SMALL STEP IN THE RIGHT DIRECTION TO COMBAT UNREGULATED TECHNOLOGY.

For most of the past decade, public concerns about digital technology have focused on the potential abuse of personal data.

This debate is NOW entering a new phase.

As companies increasingly embed artificial intelligence in their products, services, processes, and decision-making, attention is shifting to how data is used by the software—particularly by complex, evolving algorithms that might diagnose a cancer, drive a car, or approve a loan.

The problem crops up in many other guises:

For instance, ubiquitous online advertising algorithms may target viewers by race, religion, or gender.

Software used by leading hospitals to prioritize recipients of kidney transplants has exhibited significant racial bias, discriminating against Black patients.
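One concrete way such bias gets detected is an audit of selection rates across groups (demographic parity). Below is a minimal sketch; the records and numbers are invented and stand in for no real system.

```python
# A minimal sketch of how a biased outcome can be audited: compare a
# model's selection rates across groups. Records are invented.
decisions = [
    {"group": "A", "prioritized": True},  {"group": "A", "prioritized": True},
    {"group": "A", "prioritized": True},  {"group": "A", "prioritized": False},
    {"group": "B", "prioritized": True},  {"group": "B", "prioritized": False},
    {"group": "B", "prioritized": False}, {"group": "B", "prioritized": False},
]

def selection_rates(records):
    totals, selected = {}, {}
    for r in records:
        g = r["group"]
        totals[g] = totals.get(g, 0) + 1
        selected[g] = selected.get(g, 0) + r["prioritized"]
    return {g: selected[g] / totals[g] for g in totals}

print(selection_rates(decisions))
# {'A': 0.75, 'B': 0.25} -- a threefold disparity worth investigating
```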

In dealing with biased outcomes, regulators have mostly fallen back on standard antidiscrimination legislation.

That’s workable as long as there are people who can be held responsible for problematic decisions. But with AI increasingly in the mix, individual accountability is undermined.

Some algorithms make or affect decisions with direct and important consequences on people’s lives.

They diagnose medical conditions, for instance, screen candidates for jobs, approve home loans, or recommend jail sentences. In such circumstances it may be wise to avoid using AI, or at least to subordinate it to human judgment. Keeping humans in the loop preserves decision-makers’ accountability, but it does not remove the risk that people will defer to the algorithms more often than they should.

The degree of trust in AI varies with the kind of decisions it’s used for. When a task is perceived as relatively mechanical and bounded—think optimizing a timetable or analysing images—software is regarded as at least as trustworthy as humans.

But when decisions are thought to be subjective or the variables change (as in legal sentencing, where offenders’ extenuating circumstances may differ), human judgment is trusted more, in part because of people’s capacity for empathy. This suggests that companies need to communicate very carefully about the specific nature and scope of decisions they’re applying AI to and why it’s preferable to human judgment in those situations. For example, in machine diagnoses of medical scans, people can easily accept the advantage that software trained on billions of well-defined data points has over humans, who can process only a few thousand.

On the other hand, applying AI to make a diagnosis regarding mental health, where factors may be behavioural, hard to define, and case-specific, would probably be inappropriate. It’s difficult for people to accept that machines can process highly contextual situations. And even when the critical variables have been accurately identified, the way they differ across populations often isn’t fully understood—which brings us to the next factor.

An algorithm may not be fair across all geographies and markets.

Just like human judgment, AI isn’t infallible. Algorithms will inevitably make some unfair—or even unsafe—decisions.

“The right…to obtain an explanation of the decision reached” by algorithms MUST BE ENSHRINED IN LAW.

But what does it mean to get an explanation for automated decisions, for which our knowledge of cause and effect is often incomplete?

Should we require—and can we even expect—AI to explain its decisions?

However, most people lack the advanced skills in mathematics or computer science needed to understand an algorithm, let alone to determine whether the relationships it specifies are appropriate. And in the case of machine learning, where AI software creates algorithms to describe apparent relationships between variables in the training data, flaws or biases in that data, not the algorithm, may be the ultimate cause of any problem.
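For a simple linear model, an “explanation” can literally be read off: each feature’s weighted contribution to the score. The sketch below uses invented loan features, weights, and threshold; the point above is precisely that modern machine-learned models admit no such direct decomposition.

```python
# One minimal notion of an "explanation": for a linear scoring model,
# report each feature's contribution to the decision. Features, weights,
# and the threshold are invented for illustration.
WEIGHTS = {"income": 0.6, "debt": -0.8, "years_employed": 0.3}
THRESHOLD = 1.0

def decide_and_explain(applicant):
    contributions = {f: WEIGHTS[f] * applicant[f] for f in WEIGHTS}
    score = sum(contributions.values())
    decision = "approve" if score >= THRESHOLD else "decline"
    # The explanation: which factors pushed the score up or down, and by how much.
    drivers = sorted(contributions.items(), key=lambda kv: -abs(kv[1]))
    return decision, drivers

decision, drivers = decide_and_explain({"income": 3.0, "debt": 1.5, "years_employed": 2.0})
print(decision)  # approve (score = 1.8 - 1.2 + 0.6 = 1.2 >= 1.0)
print(drivers)   # [('income', 1.8), ('debt', -1.2), ('years_employed', 0.6)]
```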

If AI starts to listen to us and adapt to our every move, we can only “win” by mirroring it, and being equally attentive to it, even to the point of treating this wild piece of silicon clockwork as though it were alive.

Because, in the end, when we don’t know where we begin and AI ends, then AI is essentially as alive as anything.

AI can only exist because it feeds on human civilization and knowledge coded into text and other digestible data — but humans are in turn subjected to the power of AI, and thus deeply reshaped by it, because AI coordinates more data than we can and knows, well, us. The AI soon knows us better than we can know it, or even know ourselves.

If we seek relational proportionality and resonance across the AI-humanity axis, we must of course also feed this intra-action with socially proportional perspectives, i.e. with social justice.

Our very civilizational sanity and survival depend upon balancing the informational diet of the AI, so that it can itself produce emergent patterns that resonate through and across societies… But the Internet is roughly as skewed and distorted as the power relations of global humanity at large.

It acts on the whole with great efficiency and speed, but it cannot speak for the whole.

What you can expect is increasing dissonance, a spiralling insanity, as the “human-AI-AI-human intra-action” system disconnects from the rest of reality, from the larger scheme that contains the actual multiplicity of the world’s perspectives.

If we don’t want to spiral into virtual madness with real social consequences, we need to balance out the reality projected into the digital realm: the encoded information. It means that AI itself must be used to more proportionally and correctly represent the lives, experiences, and embodied or intellectual knowledge of the world.

In short, if we apply the AI to balancing out human-perspectives-as-projected-onto-the-web-as-data, not only can we get a more just and sane society; we can also help to retain an AI that remains on the sane and just side in the first place.

Or, yet more succinctly: A sane AI is also a social justice AI, but one that dodges the traps of present-day social justice and intersectionality discourses.

Let me underscore: If we fail to do this, we instead unleash AI powers that widen social gaps and fracture knowledge systems into different continents where people become entirely unable to comprehend one another, leading to social and psychological decay.
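What might “balancing the informational diet” look like mechanically? One minimal, hypothetical sketch: weight training samples inversely to their group’s share of a corpus, so over-represented perspectives stop drowning out the rest. The corpus labels below are invented for illustration.

```python
# A toy sketch of balancing a skewed corpus: weight each sample inversely
# to its group's observed share, so every group contributes equally.
from collections import Counter

corpus = ["en", "en", "en", "en", "en", "en", "hi", "sw"]  # source-language tags

counts = Counter(corpus)
n_groups = len(counts)

# Target: each group contributes len(corpus)/n_groups of the total weight.
weights = {g: (len(corpus) / n_groups) / c for g, c in counts.items()}

for group in counts:
    print(group, round(weights[group], 2))
# en 0.44 (down-weighted); hi 2.67 and sw 2.67 (up-weighted)
```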

If we want a world that is not driven by digitalization, THE TIME IS NOW TO DO SOMETHING ABOUT IT.

All human comments appreciated. All like clicks and abuse chucked in the bin.

Contact: bobdillon33@gmail.com
