
bobdillon33blog

~ Free Thinker.


THE BEADY EYE LOOK AT: THE FIRST TRANSCRIPT OF A MURDER TRIAL CONCERNING A ROBOT WHO KILLED A HUMAN.

08 Monday Jan 2024

Posted by bobdillon33@gmail.com in #whatif.com, Algorithms., Artificial Intelligence., Murders, Robot citizenship., Robotic murderer

≈ Comments Off on THE BEADY EYE LOOK AT: THE FIRST TRANSCRIPT OF A MURDER TRIAL CONCERNING A ROBOT WHO KILLED A HUMAN.

Tags

AI, Algorithms., Artificial Intelligence., robotics, Robots., Technology, The Future of Mankind, Visions of the future.

(Twenty-five minute read)

On 25 January 1979, Robert Williams was struck in the head and killed by the arm of a one-ton production-line robot at a Ford Motor Company casting plant in Flat Rock, Michigan, becoming the first person killed by a robot. The robot was part of a parts-retrieval system that moved material from one part of the factory to another.

Uber and Tesla have made the news with reports of their autonomous and self-driving cars, respectively, getting into accidents and killing passengers or striking pedestrians.

These deaths, however, were completely unintentional, but they give us a glimpse into the world we might inherit, or at least into how we are conceiving potential futures for ourselves.

By 2040, there is even a suggestion that sophisticated robots will be committing a good chunk of all the crime in the world. At the heart of this debate is whether an AI system could be held criminally liable for its actions.

Where there’s blame, there’s a claim. But who do we blame when a robot does wrong?

Among the many things that must now be considered is what role and function the law will play.

So if an advanced autonomous machine commits a crime of its own accord, how should it be treated by the law? How would a lawyer go about demonstrating the “guilty mind” of a non-human? Can this be done by referring to and adapting existing legal principles?

An AI program could be held to be an innocent agent, with either the software programmer or the user being held to be the perpetrator-via-another.

We must confront the fact that autonomous technology with the capacity to cause harm is already around.

Whether it’s a military drone with a full payload, a law-enforcement robot detonated to kill a dangerous suspect, or something altogether more innocent that causes harm through accident, error, oversight, or good ol’ fashioned stupidity.

None of these deaths were caused by the will of a robot.

Sophisticated algorithms are both predicting and helping to solve crimes committed by humans; predicting the outcome of court cases and human rights trials; and helping with the work done by lawyers in those cases.

The greater existential threat is where a gap exists between what a programmer tells a machine to do and what the programmer really meant to happen. The discrepancy between the two becomes more consequential as the computer becomes more intelligent and autonomous.

How do you communicate your values to an intelligent system such that the actions it takes fulfill your true intentions?
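As a toy illustration of that gap (the scenario, names and numbers here are entirely hypothetical), consider a machine told to “minimize visible dirt”: a plan that hides the dirt satisfies the literal objective just as well as the plan the programmer actually wanted.

```python
# Hypothetical sketch of a specification gap: the proxy objective the
# programmer wrote ("minimize visible dirt") is satisfied equally well by a
# plan that removes the dirt and a plan that merely hides it from the sensors.

def visible_dirt(state):
    """Proxy objective actually given to the machine: dirt the sensors see."""
    return state["dirt"] - state["hidden_dirt"]

def true_cleanliness(state):
    """What the programmer really meant: total dirt removed (higher is better)."""
    return -state["dirt"]

plans = {
    "clean": {"dirt": 0, "hidden_dirt": 0},    # dirt actually removed
    "hide":  {"dirt": 10, "hidden_dirt": 10},  # dirt swept out of sensor view
}

# A literal optimizer is indifferent: both plans drive the proxy to zero...
assert visible_dirt(plans["clean"]) == visible_dirt(plans["hide"]) == 0

# ...but only one of them fulfills the true intention.
print(true_cleanliness(plans["clean"]))  # 0
print(true_cleanliness(plans["hide"]))   # -10
```

The machine only ever sees the proxy, so the discrepancy is invisible to it; the more autonomously it chooses among plans, the more that invisible discrepancy matters.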

The greater threat is scientists purposefully designing robots that can kill human targets without human intervention for military purposes.

That’s why AI and robotics researchers around the world published an open letter calling for a worldwide ban on such technology. And that’s why the United Nations in 2018 discussed if and how to regulate so-called “killer robots.”

Though these robots wouldn’t need to develop a will of their own to kill, they could be programmed to do it. Neural nets use machine learning, in which they train themselves on how to figure things out, and our puny meat brains can’t see the process.

The big problem is that even the computer scientists who program the networks can’t really watch what’s going on with the nodes, which has made it tough to sort out how computers actually make their decisions. A related mistake is the assumption that a system with human-like intelligence must also have human-like desires, e.g., to survive, be free, have dignity, etc.
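A minimal sketch of why the nodes are opaque, with hand-picked weights standing in for trained ones: every number in this tiny network is fully visible, yet no single weight “is” the decision rule.

```python
# A tiny hand-wired neural network (weights fixed for the sketch, standing in
# for trained ones). Every parameter is printable, yet inspecting any single
# weight does not reveal the rule the network implements: the decision is
# distributed across all of the weighted sums.
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

W1 = [[6.0, 6.0], [-4.0, -4.0]]  # input -> hidden weights
b1 = [-2.0, 6.0]                 # hidden biases
W2 = [8.0, 8.0]                  # hidden -> output weights
b2 = -12.0                       # output bias

def net(x1, x2):
    h = [sigmoid(W1[i][0] * x1 + W1[i][1] * x2 + b1[i]) for i in range(2)]
    return sigmoid(W2[0] * h[0] + W2[1] * h[1] + b2)

# The rule these numbers encode turns out to be exclusive-or: the output is
# high only when exactly one input is on -- something you could not read off
# any individual weight above.
for a in (0, 1):
    for b in (0, 1):
        print(a, b, round(net(a, b)))
```

With four weights per layer this rule can still be reverse-engineered by hand; with millions of weights in a real network, that reverse-engineering is exactly what researchers cannot do.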

There’s absolutely no reason why this would be the case, as such a system will only have whatever desires we give it.

If an AI system can be criminally liable, what defense might it use?

For example:  The machine had been infected with malware that was responsible for the crime.

The program responsible had wiped itself from the computer before it could be forensically analyzed.

So can robots commit crime? In short: Yes.

If a robot kills someone, then it has committed a crime (actus reus), but technically only half a crime, as it would be far harder to determine mens rea.

How do we know the robot intended to do what it did? Could we simply cross-examine the AI like we do a human defendant?

Then a crucial question will be whether an AI system is a service or a product.

One thing is for sure: In the coming years, there is likely to be some fun to be had with all this by the lawyers—or the AI systems that replace them.

How would we go about proving an autonomous machine was justified in killing a human in self-defence or the extent of premeditation?

Even if you solve these legal issues, you are still left with the question of punishment.

In such a situation, however, the robot might commit a criminal act that cannot be prevented, and holding its operators liable when no crime was foreseeable would undermine the advantages of having the technology.

What’s a 30-year jail stretch to an autonomous machine that does not age, grow infirm or miss its loved ones? It means nothing. Robots cannot be punished.

LET’S LOOK AT THE HYPOTHETICAL TRIAL.

CASE NO 0.

PRESIDING JUDGES: QUANTUM AI SUPREMA COMPUTER JUDGE NO XY;

JUDGE HAROLD WISE, HUMAN / UN JUDGE; AND JAMES SORE, HUMAN RIGHTS JUDGE.

PROSECUTOR: DATA POLICE OFFICER CONTROLLED BY INTERNATIONAL HUMANITARIAN LAW.

DEFENSE WITNESSES: TECHNOLOGY COMPANIES: MICROSOFT, APPLE, FACEBOOK, TWITTER, INSTAGRAM, SOCIAL MEDIA, YOUTUBE, GOOGLE, TIKTOK.

JURY: 8 MEMBERS VIRTUAL REALITY METAVERSE; 2 APPLE DATA COLLECTION ADVISERS; 1000 SMARTPHONE HOLDERS REPRESENTING WORLD RELIGIONS AND HUMAN RIGHTS.

THE COURT: Bodily pleas, Seventeenth Anatomical Circuit Court.

“All rise.”

Would the accused identify itself to the court.

I am X 1037, known to my owner by my human name TODO.

Conceived on the 9th of April 2027 at Renix Development / Cloning Inc, California, I am programmed to be self-learning, with all human history and all human legality.

In order to qualify as a robot, I am fitted with electronic chips covering Global Positioning System (GPS) tracking and face recognition. I have my own social media accounts on Twitter, Facebook and Instagram. I am an important symbol of the trust relationship with humans. I cannot feel pain, happiness or sadness.

I was a guest of honour at a First Nation powwow on human values against AI in Geneva.

THE CHARGE:  ON THE 30TH JULY 2029 YOU X 1037 WITH PREMEDITATION MURDERED MR BROWN.

You erroneously identified a person as a threat to Mrs White and calculated that the most efficient way to eliminate this threat was by pushing him, resulting in his death.

HOW DO YOU PLEAD, GUILTY OR NOT GUILTY?

NOT GUILTY YOUR HONOR.

The Defense opening statement:

The key question here is whether the programmer of the machine knew that this outcome was a probable consequence of its use.

Is there direct liability? This requires both an action and an intent by my client X 1037.

We will show that my client had no human mens rea. 

He completed the action of assaulting someone, but he neither intended to harm them nor knew that harm was a likely consequence of his action. An action is straightforward to prove if the AI system takes an action that results in a criminal act, or fails to take an action when there is a duty to act.

The task is not to determine whether he in fact killed someone, but the extent to which that act satisfies the principle of mens rea.

Technically he has committed only half a crime, as he never intended to do what he did.

Like deception, anticipating human action requires a robot to imagine a future state. It must be able to say, “If I observe a human doing x, then I can expect, based on previous experience, that she will likely follow it up with y.” Then, using a wealth of information gathered from previous training sessions, the robot generates a set of likely anticipations based on the motion of the person and the objects she or he touches.

The robot makes a best guess at what will happen next and acts accordingly.

To accomplish this, robot engineers enter information about choices considered ethical in selected cases into a machine-learning algorithm.
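A minimal sketch of that idea, with invented cases and action names: choices judged ethical in past situations are stored as precedents, and the robot copies the decision from the precedent most similar to the new situation (here a simple 1-nearest-neighbour rule, one of the simplest machine-learning algorithms).

```python
# Hypothetical sketch: each precedent records a situation judged by humans
# beforehand (threat level 0-10, whether the threat comes from a human)
# together with the action considered ethical in that situation. The robot
# reuses the decision from the most similar stored case.

precedents = [
    (9, True,  "push_away"),    # severe threat from a human: push them back
    (8, True,  "push_away"),
    (2, True,  "do_nothing"),   # mild threat: no intervention approved
    (1, False, "do_nothing"),
    (7, False, "raise_alarm"),  # severe non-human threat: alert instead
]

def distance(case, threat, human):
    """Dissimilarity between a stored case and the new situation."""
    t, h, _ = case
    return abs(t - threat) + (0 if h == human else 5)

def choose_action(threat, human):
    # 1-nearest-neighbour over the precedent cases
    best = min(precedents, key=lambda c: distance(c, threat, human))
    return best[2]

print(choose_action(9, True))  # nearest precedent approves "push_away"
print(choose_action(1, True))  # nearest precedent approves "do_nothing"
```

Such “ethics” is only as good as the labelled cases: a man advancing with an axe resembles the push_away precedents, so pushing is what the robot generalizes to, which is precisely the defence’s account of what X 1037 did.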

Having acquired ethics in this way, my client X 1037 did exactly that.

IN ACCORDANCE WITH HIS PROGRAMMING TO DEFEND HIMSELF AND HUMANS. 

“Danger, danger, Mrs White!” Mr Brown, who was advancing with a fire axe, was pushed backwards by my client. Mr Brown fell backwards, hitting his head on a laptop, resulting in his death.

There is no denying the event, as it is recorded by his cameras on my client’s hard disk.

However, the central question to be answered at this trial is: when a robot kills a human, who takes the blame?

We argue that the process of killing (as with lethal autonomous weapon systems, or LAWS) is always a systematized mode of violence in which all elements in the kill chain, from commander to operator to target, are subject to a technification.

For example:

Social media companies are responsible for allowing the Islamic State to use their platforms to promote the killing of innocent civilians.

WHY IS THAT NOT MURDER?

As my client is a self-learning intelligent technology, it was inevitable that he would learn to bypass direct human control, for which he cannot be held responsible.

Without an AI bill of rights, our way of approaching this clearly doesn’t fit neatly into society’s view of guilt and justice. Once you give up power to autonomous machines, you’re not getting it back.

Much of our current law assumes that human operators are involved, when in fact the programs that govern robotic actions are self-learning.

Targets are objectified and stripped of the rights and recognition they would otherwise be owed by virtue of their status as humans.

Sophisticated AI innovations through neural networks and machine learning, paired with improvements in computer processing power, have opened up a field of possibilities for autonomous decision-making in a wide range of applications, not just military ones but also the targeting of adversaries.

Mr Brown was a threatening adversary.

In essence, the court has no administrative powers over self-learning technology. The power of dominant social media corporations to shape public discussion of the important issues will GOVERN THE RESULT OF THIS TRIAL.


Prosecution:  Opening statement.

The prospect of losing meaningful human control over the use of force is totally unacceptable.

We may have to limit our emotional response to robots, but it is important that the robots understand ours. If a robot kills someone, then it has committed a crime (actus reus).

The fact that to-day it is possible that unknowingly and indirectly, like screws in a machine, we can be used in actions, the effects of which are beyond the horizon of our eyes and imagination, and of which, could we imagine them, we could not approve—this fact has changed the very foundations of our moral existence.

What we are really talking about when we talk about whether or not robots can commit crimes is “emergence” – where a system does something novel and perhaps good but also unforeseeable, which is why it presents such a problem for law.

Technology has the power to transform our society, upend injustice, and hold powerful people and institutions accountable. But it can also be used to silence the marginalized, automate oppression, and trample our basic rights.

Tech can be a great tool for law enforcement to use; however, the line between law enforcement and commercial endorsement is getting blurry.

If you withdrew your support, rendered your support ineffective, and informed authorities, you may show that you were not an accomplice to the murder.

Drawing on the history of systematic killing, we will argue that lethal autonomous weapons systems reproduce, and in some cases intensify, the moral challenges of the past. If we humans are to exist in a world run by machines, these machines cannot be accountable to themselves but to human laws.

A robot may not injure a human being or, through inaction, allow a human being to come to harm.

We will be demonstrating the “guilty mind” of a non-human.

This can be done by referring to and adapting existing legal principles.

It is hard not to develop feelings for machines, but we are heading towards something that will one day hurt us. We are at a pivotal point where we can choose as a society that we are not going to mislead people into thinking these machines are more human than they are.

We need to get over our obsession with treating machines as if they were human.

People perceive robots as something between an animate and an inanimate object, and this has to do with our in-built anthropomorphism.

Systematic killing has long been associated with some of the darkest episodes in human history.

When humans are “knit into an organization in which they are used, not in their full right as responsible human beings, but as cogs and levers and rods, it matters little that their raw material is flesh and blood.”

Critically, though, there are limits on the type and degree of systematization that are appropriate in human conduct, especially when it comes to collective violence or individual murder by a robot.

Within conditions of such complexity and abstraction, humans are left with little choice but to trust in the cognitive and rational superiority of this clinical authority.

Those who deploy cold and dispassionate forms of systematic violence that erode the moral status of human targets, as well as the status of those who participate within the system itself, must be held legally accountable.

Increasingly, however, it is framed as a desirable outcome, particularly in the context of military AI and lethal autonomy. The increased tendency toward human technification (the substitution of technology for human labor) and systematization is exacerbating the dispassionate application of lethal force and leading to more, not less, violence.

Autonomous violence incentivizes a moral devaluation of those targeted and erodes the moral agency of those who kill, enabling a more precise and dispassionate mode of violence, free of the emotion and uncertainty that too often weaken compliance with the rules and standards of war.

This dehumanization is real, we argue, but it impacts the moral status of both the recipients and the dispensers of autonomous violence. If we allow the expansion of modes of killing rather than fostering restraint, robots will kill whether commanded to or not.

The Defence claims that X 1037 is not responsible for its actions because its electronics were coded by external companies, erasing the line into unethical territory such as responsibility for murder.

We know that these machines are nowhere near the capabilities of humans, but they can fake it; they can look lifelike and say the right thing in particular situations. However, as we see with this murder, the power gained by these companies far exceeds the responsibilities they have assumed.

A robot can be shown a picture of a face that is smiling but it doesn’t know what it feels like to be happy.

The people who hosted the AI system on their computers and servers are the real defendants.

PROSECUTION FIRST WITNESS:  SOCIAL MEDIA / INTERNET.

We call on the representatives of these companies, who will clearly demonstrate this shocking asymmetry of power and responsibility.

These platforms are shaping our public discourse, and this action brings much-needed transparency and accountability to the policies that shape the social media content we consume every day, aiding and abetting these deaths AND NOW MURDER.

Pressure is mounting for public officials to legally address the harms social media causes. This murder is not, nor will it ever be, confined to court rulings or judgements; treating human beings as cogs in a machine does not and should not grant a Pontius Pilate dispensation, even if the boundaries that could help define tech remain blurred. Technology companies that reign supreme in this digital age are not above the law.

We will show how all our witnesses are connected and have contributed to this murder, so that the court can grasp the enormous implications of what has begun to happen.

To close our case, we will conclude with observations on why we should conceptualize certain technology-facilitated behaviors as forms of violence. We are living in one of the most vicious times in history. The only difference now is our access to more lethal weapons.

We call.

Facebook.

Is it not true that you allowed terrorist groups to use your platform and permitted unrestrained hate speech, inciting, among other things, the genocide in Myanmar, with drug cartels and human traffickers in developing countries using the platform? The platform’s algorithm is designed to foster more user engagement in any way possible, including by sowing discord and rewarding outrage.

In choosing profit over safety, it contributed to X 1037’s self-learning.

Facebook is a uniquely socially toxic platform. Facebook is no longer happy to just let others use the news feed to propagate misinformation and exert influence – it wants to wield this tool for its own interests, too. Facebook is attempting to pave the way for deeper penetration into every facet of our reality.

Facebook would like you to believe that the company is now a permanent fixture in society. For it to mediate not just our access to information or connection but our perception of reality, with zero accountability, is the worst of all possible options. Something as simple as posting a holiday photo to Facebook may be all that is needed to indicate to a criminal that a person is not at home.

We call.

Instagram, Facebook’s sister company app.

Instagram is all about sharing photos, providing a unique way of displaying your profile. Instagram is a place where anyone can become an influencer. These are pretty frightening findings, and they are only added to by the fact that teens blame Instagram for increases in the rate of anxiety and depression.

What makes Instagram different from other social media platforms is the focus on perfection and the feeling from users that they need to create a highly polished and curated version of their lives. Not only that, but the research suggested that Instagram’s Explore page can push young users into viewing harmful content, inappropriate pictures and horrible videos.

In a conceptualization where you are only worth what your picture is, that’s a direct reflection of your worth as a person.

 That becomes very impactful.

X 1037 posted a selfie on 12 May 2025 to gauge his self-worth. Within minutes he received over 5 million hate messages and death threats. It’s no wonder that, when faced with Mr Brown, he chose self-preservation.

We call Twitter. Elon Musk 

This platform is a notorious catalyst for some of the most infamous events of the decade: Brexit, the election of Donald Trump, the Capitol Hill riots. Herein lies the paradox of the platform. The infamous terror group, which is now the totalitarian theocratic ruling party of Afghanistan, has made good use of Twitter.

A platform that has done its very best to avoid having to remove any videos from racists, white supremacists and hate mongers.

We call TikTok.

A Chinese social video app known for its aggressive data collection: while it is running, it can access a device’s location, calendar, contacts, other running applications, wi-fi networks, phone number and even the SIM card serial number.

Data harvesting gives access to unimaginable quantities of customer data, which can then be used unethically. Data can be a sensitive and controversial topic in the best of times, and when bad actors violate the trust of users there should be consequences. This data can also be misused for nefarious purposes in the wrong hands. The same capability is available to organised crime, which is a wholly different and much more serious problem, as the laws do not apply. In oppressive regimes, these tools can be used to suppress human rights.

X 1037 held an account, opening himself to influences beyond his programming. 

We call Google

Truly one of the worst offenders when it comes to the misuse of data.

Given large aggregated data sets and the right search terms, it’s possible to find a lot of information about people; including information that could otherwise be considered confidential: from medical to marital.

Google data mining is being used to target individuals. We are all victims of spam, adware and other unwelcome methods of trying to separate us from our money. As storage gets cheaper, processing power increases exponentially and the internet becomes more pervasive in everyone’s lives, the data mining issue will just get worse.  X 1037 proves this. 

We call. YouTube/Netflix.  

Numerous studies have shown that the entertainment we consume affects our behavior, our consumption habits, the way we relate to each other, and how we explore and build our identity.

Digital platforms like Netflix have a strong impact on modern society.

Violence makes up 40% of the movie sections on Netflix. Understanding what type of messages viewers receive and the way in which these messages can affect their behavior is of vital importance for an effective understanding of today’s society.

Therefore, it must be considered that people are most susceptible to imitating the attitudes they see on screen. Content related to mental health, violence, suicide, self-harm, and Human Immunodeficiency Virus (HIV) appears in the ten most-watched movies and ten most-watched series on Netflix.

Their appearance in the media is also considered to have a strong impact on spectators. X 1037 spent most of his day watching and self-learning from movies.

Violence affects the lives of millions of people each year, resulting in death, physical harm, and lasting mental damage. It is estimated that in 2019, violence caused 475,000 deaths.

Netflix in particular, due to their recent creation and growth, have not yet been studied in depth.

Considering the impact that digital platforms have on viewers’ behavior, it is once again no wonder that X 1037 did what he did.

There is no denying that these factors should force the entertainment and technology industries to reconsider how they create products which have a negative long-term influence on various aspects of our wider life and development.

We call

Instagram.

Instagram, if you are capitalizing off of a culture, you are morally obligated to help it. As a result of social comparison, social pressure, and negative interactions with other people, you are promoting harm.

We call.

Apple.

Over the last three decades, smartphones have fostered an addiction that leads to severe depression, anxiety, and loneliness in individuals.

People are now using smartphones for their payments, financial transactions, navigating, calling, face to face communication, texting, emailing, and scheduling their routines. Nowadays, people use wireless technology, especially smartphones, to watch movies, tv shows, and listen to music.

We know the devices are an indispensable tool for connecting with work, friends and the rest of the world. But they come with trade-offs, from privacy issues to ecological concerns to worries over their toll on our physical and emotional health, spurring a generation unable to engage in face-to-face conversations and suffering sharp declines in cognition skills.

We’re living through an interesting social experiment where we don’t know what’s going to happen with kids who have never lived in a world without touchscreens. X 1037 would not have been present at the murder scene at all had he not been responding to a phone call from Mrs White’s Apple 19 phone.

Society will continue struggling to balance the convenience of smartphones against their trade-offs.

We call.

Microsoft. 

Two main goals stand out as primary objectives for many companies: a desire for profitability, and the goal to have an impact on the world. Microsoft is no exception. Its mission as a platform provider is to equip individuals and businesses with the tools to “do more.” Microsoft’s platform became the dev box and target of a massive community of developers who ultimately supplied Windows with 16 million programs. Multibillion-dollar companies rely on the integrity and reliability of Microsoft’s tools daily.

It is a testimony to the powerful role Microsoft plays in global affairs that its tools are relied upon by governments around the world.

Microsoft’s position of global influence gives its leadership a voice on matters of moral consequence and humanitarian concern. Microsoft is a company built on a dream.

Microsoft’s influence raises some concerns as well. Its AI-driven camera technology, which can recognize people, places, things, and activities and can act proactively, has a profound capacity for abuse by the same governments and entities that currently employ Microsoft services for less nefarious purposes.

Today, the emerging new age, which is most commonly, and inaccurately, called “the digital age”, has already transformed parts of our lives, including how we work, how we communicate, how we shop, how we play, how we read, how we entertain ourselves; in short, how we live and now how we die.

 It would be economic and political suicide for regulators to kneecap the digital winners.

THE COURT’S VERDICT:

Given the absence of direct responsibility, the court finds X 1037 not guilty.

MR BROWN’S DEATH was caused by a certain act or omission in coding.

THE COURT DISMISSES THE CASE AGAINST THE TECHNOLOGY COMPANIES ON THE GROUNDS OF INSUFFICIENT EVIDENCE.

Neither the robot nor its commander could be held accountable for crimes that occurred before the commander was put on notice. During this accountability-free period, a robot would be able to commit repeated criminal acts before any human had the duty or even the ability to stop it.

Software has the potential to cause physical harm.

To varying extents, companies are endowed with legal personhood. It grants them certain economic and legal rights, but more importantly it also confers responsibilities on them. So, if Company X builds an autonomous machine, then that company has a corresponding legal duty.

The problem arises when the machines themselves can make decisions of their own accord. As AI technology evolves, it will eventually reach a state of sophistication that will allow it to bypass human control. The task is not determining whether it in fact murdered someone; but the extent to which that act satisfies the principle of mens rea.

However, if there were no consequences for human operators or commanders, future criminal acts could not be deterred, so the court FINES EACH AND EVERY COMPANY 1 BILLION for their lack of attention to human detail.


The pain that humans feel in making the transition to a digital world is not the pain of dying. It is the pain of being born.


What would “intent” look like in a machine mind? How would we go about proving an autonomous machine was justified in killing a human in self-defence or the extent of premeditation?

Given that we already struggle to contain what is done by humans. What would building “remorse” into machines say about us as their builders?

At present, we are systematically incapable of guaranteeing human rights on any scale.

We humans have already wiped out a significant fraction of all the species on Earth. That is what you should expect to happen as a less intelligent species – which is what we are likely to become, given the rate of progress of artificial intelligence. If you have machines that control the planet, and they are interested in doing a lot of computation and they want to scale up their computing infrastructure, it’s natural that they would want to use our land for that. This is not compatible with human life. Machines with the power and discretion to take human lives without human involvement are politically unacceptable, morally repugnant, and should be prohibited by international law.

If you ask an AI system to achieve anything, then in order to achieve that thing, it needs to survive long enough to do it.

Fundamentally, it’s just very difficult to get a robot to tell the difference between a picture of a tree and a real tree.

X 1037 now has a survival instinct.

When we create an entity that has survival instinct, it’s like we have created a new species. Once these AI systems have a survival instinct, they might do things that can be dangerous for us.

So, what’s wrong with LAWS, and is there any point in trying to outlaw them?

Some opponents argue that the problem is they eliminate human responsibility for making lethal decisions. Such critics suggest that, unlike a human being aiming and pulling the trigger of a rifle, a LAWS can choose and fire at its own targets. Therein, they argue, lies the special danger of these systems, which will inevitably make mistakes, as anyone whose iPhone has refused to recognize his or her face will acknowledge.

In my view, the issue isn’t that autonomous systems remove human beings from lethal decisions; it’s that weapons of this sort will make mistakes.

Human beings will still bear moral responsibility for deploying such imperfect lethal systems.

LAWS are designed and deployed by human beings, who therefore remain responsible for their effects. Like the semi-autonomous drones of the present moment (often piloted from half a world away), lethal autonomous weapons systems don’t remove human moral responsibility. They just increase the distance between killer and target.

Furthermore, like already outlawed arms, including chemical and biological weapons, these systems have the capacity to kill indiscriminately. While they may not obviate human responsibility, once activated, they will certainly elude human control, just like poison gas or a weaponized virus.

Oh, and if you believe that protecting civilians is the reason the arms industry is investing billions of dollars in developing autonomous weapons, I’ve got a patch of land to sell you on Mars that’s going cheap.

There is, perhaps, little point in dwelling on the 50% chance that AGI does develop. If it does, every other prediction we could make is moot, and this story, and perhaps humanity as we know it, will be forgotten. And if we assume that transcendentally brilliant artificial minds won’t be along to save or destroy us, and live according to that outlook, then what is the worst that could happen – we build a better world for nothing?

The company that built the autonomous machine, Renix Development, has a corresponding legal duty.

—————

Because these robots would be designed to kill, someone should be held legally and morally accountable for unlawful killings and other harms the weapons cause.

Criminal law cares not only about what was done, but why it was done.

  • Did you know what you were doing? (Knowledge)
  • Did you intend your action? (General intent)
  • Did you intend to cause the harm with your action? (Specific intent)
  • Did you know what you were doing, intend to do it, know that it might hurt someone, but not care a bit about the harm your action causes? (Recklessness)
  • So, the question must always be asked when a robot or AI system physically harms a person or property, or steals money or identity, or commits some other intolerable act: Was that act done intentionally? 
  • There may be no identifiable person who can be directly blamed for AI-caused harm.
  • There may be times when it is not possible to reduce an AI crime to an individual because of the AI’s autonomy, complexity, or limited explainability. Such a case could involve many individuals contributing to the development of an AI over a long period of time, as with open-source software, where thousands of people collaborate informally.
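The ladder of intent above can be sketched as a small decision function. This is purely illustrative: the function name, arguments, and ordering are my own simplification of the bullet points, not legal doctrine.

```python
# Toy sketch of the mens rea ladder above (illustrative only, not legal advice).
def culpability(knew: bool, intended_act: bool, intended_harm: bool,
                aware_of_risk: bool) -> str:
    if intended_harm:
        # Intended to cause the harm itself.
        return "specific intent"
    if intended_act and knew:
        # Intended the act; knew of the risk but didn't care -> recklessness.
        return "recklessness" if aware_of_risk else "general intent"
    if knew:
        # Knew what was being done, but no intent shown.
        return "knowledge"
    return "no mens rea"

print(culpability(knew=True, intended_act=True,
                  intended_harm=False, aware_of_risk=True))  # recklessness
```

The hard question the post raises is which, if any, of these inputs can meaningfully be evaluated for a machine.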

The limitations on assigning responsibility thus add to the moral, legal, and technological case against fully autonomous weapons and robots, and bolster the call for a ban on their development, production, and use. Either way, society urgently needs to prevent or deter the crimes, or penalize the people who commit them.

There is no reason why an AI system’s killing of a human being, or its destruction of people’s livelihoods, should be blithely chalked up to “computer malfunction.”

Not least because proving that these people had “intent” for the AI system to commit the crime would be difficult or impossible.

I’m no lawyer. What can work against AI crimes?

All human comments appreciated. All like clicks and abuse chucked in the bin.

Contact: bobdillon33@gmail.com


THE BEADY EYE SAY’S: HAPPY NEW YEAR, HERE IS YOUR WORLD TO LOOK FORWARD TO IN 2024.

31 Sunday Dec 2023

Posted by bobdillon33@gmail.com in The new year 2024

≈ Comments Off on THE BEADY EYE SAY’S: HAPPY NEW YEAR, HERE IS YOUR WORLD TO LOOK FORWARD TO IN 2024.

Tags

AI, Artificial Intelligence., Capitalism and Greed, Capitalism vs. the Climate., Climate change, philosophy, Technology, The Future of Mankind


( Thirty minute read) 

In fairness, the world won’t suddenly end on January 1, 2024.

There are three visions of the future from humans today: space colonies, a genetic panopticon, and straight-up apocalypse.

It is said that there is no such thing as reality – that everything observed, once unobserved, does not exist (quantum physics and its interactions).

But reality in our world does not have to be observed, it’s plain for all to see.

Yes, we are all born without any understanding of the world.

In recent years we’ve learned that the human brain is actually a master of deception, and your experiences and actions do not reveal its inner workings.

Our lives are a constant struggle, not just to survive, but to understand that we all must die, leaving behind information. This left-behind data, together with current data, is now being harvested – not so much for the betterment of the world as for the short-term profit of the few.

Technology has changed how we interact among ourselves and with our surrounding environment and we must engage in a philosophical reflection on how we currently understand the “new” world we are a part of.

Luckily, our collective consciousness – our conceptions of what is real in the world – is not computable.

However, the future of society, as defined by the scientific and technological revolutions, needs an ethical and philosophical direction of its own. Genetic editing will change what we are; artificial intelligence challenges the concepts of “I” and “individual”; and robotics will bring new “companion robots,” which we need to define and adopt socially.

In order to pair our knowledge of events with the true timeframe of when those events occurred – to really understand what’s happening – we must “extract potential signals from the noise of all this data.”

Why?

Because misinterpreting those signals will have profound consequences.

For example:

How pathetic it is to witness the only world organisation, the UN, unable to agree on what constitutes a genocide, or to call on Israel to stop its war on a trapped people.

—————-

First let me awaken you to 2024 by reminding you of the news year you’ve just lived through – or by warning you of the news year you’re about to live through.

To describe the present day, perhaps the best way is to draw a comparison with a ship of the line in Nelson’s day. Although full of cannons and every class of humanity, for it to be operational it had to rely on rules and regulations – which meant little, as everything was tied together, and nothing worked without the power of nature. No wind, no victory.

Our world is similar, full of people, with individual names, all living within tribal nations, ruled by law, but governed by the planetary balance in its true nature, providing life. No fresh water, no fresh air, no food, annihilation.

These days, when it comes to ecosystems, how we live, where we live, and when we live mean nothing unless you are fully conscious of the greed of the few and its continuing effects on the inequalities that exist on the planet.

————-

There isn’t a particular moment in which humanity came into existence, as the transition from species to species is gradual.

Demographers estimate that in the 200,000 years before us, about 109 billion people lived and died. It is these 109 billion people we have to thank for the civilization that we live in.

In 2024 there will be about 8 billion of us alive. Taken together with those who have died, about 117 billion humans have been born since the dawn of modern humankind. This means that those of us who are alive now represent about 7% of all people who ever lived.
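A quick sanity check of these round numbers, using the article’s own estimates (not precise demographic data):

```python
# Rough check of the population figures above (the article's round estimates).
ever_died = 109e9   # people who lived and died before us
alive_now = 8e9     # people alive in 2024

ever_born = ever_died + alive_now          # ~117 billion ever born
share_alive = alive_now / ever_born        # fraction of all humans alive today

print(f"{ever_born / 1e9:.0f} billion humans ever born")
print(f"{share_alive:.1%} of all people who ever lived are alive now")
```

This gives roughly 6.8%, which rounds to the “about 7%” figure in the text.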

How many people will be born in the future? We don’t know.

But we know one thing: The future is immense, and the universe will exist for trillions of years.

In such a future, there would be 100 trillion people alive over the next 800,000 years.

One thing that sets us apart is that we now – and this is a recent development – have the power to destroy ourselves.

The key moral question of longtermism is: what can we do to improve the world’s long-term prospects?

There are two other major risks that worry me greatly:

Pandemics, especially from engineered pathogens, and artificial intelligence technology. These technologies could lead to large catastrophes, either by someone using them as weapons or even unintentionally as a consequence of accidents.

We don’t have to think about people who live billions of years in the future to see our responsibilities. This shouldn’t give the impression that the risks we are facing are confined to the future.

Several large risks that could lead to unprecedented disasters are already with us now. AI capabilities and biotechnology have developed rapidly and are no longer science fiction; they are posing risks to those of us who are alive today.

As a society, we devote little attention, money, and effort to the risks that imperil our future. Very few people are even thinking about these risks, when in fact these are problems that should be central to our culture. The unprecedented power of today’s technology requires unprecedented responsibility.

Algorithms can exacerbate divisions and inequality in society.

In truth, no one knows where the AI revolution will take us as a society or as a species, but our actions in 2024 will be critical to setting us on a path that leads to a happy outcome.

No one will remember the Internet.

We will be the ancestors of a very large number of people. Let’s make sure we are good ancestors.

Why?

Because to understand something is to be liberated from it.   Google it.

Back to 2024.

There are currently about a dozen major global conflicts, the most recent of which is now repeating one of the most barbaric acts ever committed in a war (the Jewish Holocaust). This time, however, it is being committed by the very people who suffered it in the first place, waving the Old Testament as a title deed to Palestine to justify the right to commit another genocide, while the world stands by, helpless to intervene.

The people who suffer from injustice, who withstand daily insults to their dignity, who are marginalised, silenced, exploited, left to die or killed, cannot afford to ask themselves if they have hope. They cling on to life, they try to cope, they fight in front of a more or less silent world, while it passes resolutions to appease the two warmongering nations with vetoes.

Then we have the forgotten war in Ukraine, which is turning into a generational war.

No resolution, other than the resolve of the Ukrainian people to the bitter end, will bring peace.

—————  

What Is Enlightenment when we turn a blind eye?

Full awakening comes when you sincerely look at yourself, deeper than you’ve imagined, and question everything.

To think for yourself, to put yourself in the shoes of everyone else, and to always think consistently: these are the principles of enlightened thinking that produced the Bill of Human Rights.

The foundation of a peaceful world.

Out of 13 major global conflicts, the newest ones are the Myanmar civil war, triggered shortly after a military coup in February 2021, and the war in Ukraine that started with Russia’s full-scale invasion in February 2022. Seven of these conflicts are in Asia, including sectarian violence in Iraq following the pullout of the U.S. in December 2011, and Syria’s complicated civil war. Five of these conflicts are on the African continent.

To put it simply the state of the planet is broken because we have chosen a system of Capitalism that benefits the few over the many.

——————-

There is more to life than we are currently perceiving.

FOR EXAMPLE, OUR REACTIONS TO CLIMATE CHANGE, WHICH NOW HAS ITS OWN MOMENTUM; IT IS NOW CERTAIN THAT IT IS TOO LATE TO AVERT THE WARS TO COME, DRIVEN BY GREED.

WE ARE THE MOST COMPLICATED THING ON THE PLANET, ALL RELYING ON THE MOST BASIC THINGS.  Fresh air, Fresh water, etc.

In every moment, as you see, think, feel, and navigate the world around you, your perception of these things is built from ingredients. One is the signals we receive from the outside world. Your brain uses what you’ve seen, done, and learned in the past to explain sense data in the present, plan your next action, and predict what’s coming next.  This all happens automatically and invisibly, faster than you can snap your fingers. Much of this symphony is silent and outside your awareness, thank goodness. If you could feel every inner tug and rumble directly, you’d never pay attention to anything outside your skin.

Your mind is in fact an ongoing construction of your brain, your body, and the surrounding world.

Every act of recognition is a construction. You don’t see with your eyes; you see with your brain.

Your brain can even impose on a familiar object new functions that are not part of the object’s physical nature. TAKE A FEATHER FOR EXAMPLE.

Computers today can use machine learning to easily classify this object as a feather. But that’s not what human brains do. If you find this object on the ground in the woods, then sure, it’s a feather. But to an author in the 18th century, it’s a pen.

This incredible ability is called ad hoc category construction. In a flash, your brain employs past experience to construct a category such as “symbols of honor,” with that feather as a member.

Category membership is based not on physical similarities but on functional ones—how you’d use the object in a specific situation. Such categories are called abstract. A computer cannot “recognize” a feather as a reward for bravery because that information isn’t in the feather. It’s an abstract category constructed in the perceiver’s brain.

Computers can’t do this. Not yet, anyway.
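One way to make the contrast concrete is a toy sketch: a classifier that maps an object to one fixed label, versus a lookup whose category depends on the situation. Every name and mapping below is a hypothetical illustration, not a real vision system or cognitive model.

```python
# Hypothetical contrast: fixed-label classification vs. ad hoc,
# context-dependent category construction.

def machine_classify(obj: str) -> str:
    # A trained classifier maps physical features to a single label.
    fixed_labels = {"feather": "feather", "quill": "feather"}
    return fixed_labels.get(obj, "unknown")

def human_categorize(obj: str, context: str) -> str:
    # Category membership is functional, not physical: the same object
    # lands in different abstract categories in different situations.
    ad_hoc = {
        ("feather", "forest floor"): "bird part",
        ("feather", "18th-century desk"): "writing instrument",
        ("feather", "award ceremony"): "symbol of honor",
    }
    return ad_hoc.get((obj, context), "unknown")

print(machine_classify("feather"))                       # always "feather"
print(human_categorize("feather", "18th-century desk"))  # "writing instrument"
```

The point of the sketch is that the second function needs information that isn’t in the object at all: it is constructed in the perceiver.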

Brains also have to decide which sense data is relevant and which is not, separating signal from noise. Economists and other scientists call this decision the problem of “value.”

Your thoughts and dreams, your emotions, even your experience right now as you read these words, are consequences of a central mission to keep you alive, regulating your body by constructing ad hoc categories. Most likely, you don’t experience your mind in this way, but under the hood (inside the skull), that’s what is happening.

Value itself is another abstract, constructed feature. It’s not intrinsic to the sense data emanating from the world, so it’s not detectable in the world. The importance of value is best seen in an ecological context.

People are awakening out of their familiar senses of self, and of what the world is, into a much greater reality – into something far beyond anything they knew existed.

Being hopeful has nothing to do with how the world goes. It’s a kind of duty, a necessary complement to morality. What is the point of trying to do the right thing if we have no reason to think others do the same? What is the point of holding others responsible if we think responsibility is beyond their capacity?

Paradoxically, the worse the world goes, the more hopeful you must remain to be able to continue fighting. Being hopeful is not about guaranteeing the right outcome but preserving the right principle: the principle based on which a moral world makes sense.

Hopes, on the contrary, are crucial to filling the gap between the world in which we live and the one we have a responsibility to build.

Most people tend to think of hope as an attitude that sits somewhere between a desire and a belief: a desire for a certain outcome and the belief that something favours its realisation.

In the 18th century there were no algorithms, no social media, and no echo chambers, and it was, therefore, still possible to believe in enlightenment through public discourse.

What had the Enlightenment ever done for us, if it wasn’t even able to help us stop genocide?

There is such a gap between the world I read about, taught and believed in, and the one in which I lived.

All I could find were efforts to convince the world that killing innocent civilians is sometimes, for some people, under some conditions, acceptable.

Was it so absurd to believe that, at some level, politics can remain accountable to morality?

More and more people are waking up-having real, authentic glimpses of reality.

Your World has become a hugely popular geography app, full of substitution ciphers, concealment ciphers, and transposition ciphers that can only be deciphered using AI programs testing millions of combinations per second, disregarding human feelings.

We can now listen to podcasts describing killing, and watch YouTube with no access to truth itself, chained to the limits of our own perceptions. (We all have different ideas of it.)

The least the rest of us can do is to avoid questioning the grounds for hope, indulging ourselves even more. Perhaps this is the real political meaning of the Enlightenment: whether there is hope or not is only a relevant question for those who have the privilege to doubt it. That is a small fraction of the world.

Don’t despair.

Other matters:

We’re going to see, unfortunately, more technological unemployment. 

How do we address the wealth gap? We may have to consider very seriously ideas such as a universal basic income.  We can no longer ignore the issue of inequality.

Culture will need to adjust in terms of revisiting some of our values.

We need to be more pro-environment in our own behavior as consumers.

The cost of the things average people must buy—healthcare, education, housing—has risen faster than wages over the last two decades.

Globalization vs. regionalization. 

With the current wars, and the wars to come, globalization is on its last legs.

So the “America Alone” scenario within an otherwise China-centered world seems the most likely. Technology and political trends are aligning against mega-powers like the US and China.

Neither physical strength nor access to capital is sufficient for economic success. Power now resides with those best able to organize knowledge.

The internet has eliminated “middlemen” in most industries. In a representative democracy, politicians are basically middlemen. Hence, the knowledge revolution should bring a shift to direct democracy.

Today’s great powers have little choice but to spend their way to political stability, which is unsustainable.

This is the source of much angst around the world, including the current wave of popular protests.

The fact that our actions have an impact on the large number of people who will live after us should matter for how we think about our own lives.

The next decade will see a more than hundredfold boom in the world’s output of human genetic data.

The impact is hard to even imagine.

A world so saturated with genetic data will come with its own risks. The emergence of genetic surveillance states and the end of genetic privacy loom. Technical advances in encrypting genomes may help ameliorate some of those threats. But new laws will need to keep the risks and benefits of so much genetic knowledge in balance.

New models of delivering education will be needed to serve the citizens of crowded megacities as well as children in remote rural areas.

The United Nations is supposed to stick to more solid ground, but some of its Sustainable Development Goals for 2030 sound nearly as fantastical. In a mere 10 years, the UN plans to eradicate poverty “in all its forms everywhere.” Bullshit – or is it? Strong science coupled with political will might yet turn climate change around, and transform the UN’s predictions from a dream into reality.

Donald Trump: “America First, America First.” There is, however, hope for the Earth.

The momentum for change is building. Humanity has a quality of finding creative solutions to challenges. If we keep each other safe – and protect ourselves from the risks that nature and we ourselves pose – we are only at the beginning of human history.

There are no catastrophes that loom before us which cannot be avoided.

We can only expect the pace of change to increase.

There is nothing that threatens us with imminent destruction in such a fashion that we are helpless to do something about it. In 2024, some will be refugees fleeing war, some will be economic migrants in search of a better life, and some will be looking to escape to parts of the world where life is not yet overly disrupted by rising temperatures and sea levels.

It seems that the message about climate change has not yet sunk in. 12 years left to avoid catastrophic climate change. The impact of climate emergency will bring profound change.

Finally: 

Eighteenth-century thinker Jean-Jacques Rousseau wrestled with how to preserve individual freedom when we also have to depend on each other for survival. Rousseau saw politics as a social contract between a sovereign and citizens. What we call “government” is the interface between them.

The sovereigns of Rousseau’s time were mostly kings, but he envisioned a democracy in which the people collectively were sovereign. But then he ran into a math problem.

In a tiny democracy of, say, a thousand citizens, each possesses one-thousandth of the sovereignty… small, but enough to have a meaningful influence. Each individual’s share of sovereignty, and therefore their freedom, diminishes as the social contract includes more people. So, other things being equal, Rousseau thought smaller countries would be freer and more democratic than larger ones.

How do we reconcile that with democracy? I’m not sure we can. It worked pretty well for a long time, but maybe, as population grows, the math is catching up with us. If so, the remaining options are non-democratic.
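Rousseau’s arithmetic can be made explicit in a few lines: a toy illustration of each citizen holding a 1/N share of collective sovereignty.

```python
# Rousseau's math problem in miniature: each citizen's share of
# sovereignty is 1/N, so it shrinks as the polity grows.
for citizens in (1_000, 1_000_000, 8_000_000_000):
    share = 1 / citizens
    print(f"{citizens:>13,} citizens -> each holds {share:.2e} of sovereignty")
```

In the thousand-citizen democracy each share is one-thousandth; at a world-scale population it is vanishingly small, which is the crux of the argument above.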

Perhaps the lands we now inhabit are not permanent. Nothing requires them to remain so. At some point, they will develop into something else. When and how this will happen, we don’t know yet. But we know it will.

All human comments appreciated. All like clicks and abuse chucked in the bin.

Contact: bobdillon33@gmail.com


THE BEADY EYE SAY’S. ARE WE HANDING OVER HUGE SECTIONS OF OUR SOCIETIES TO BLACK-BOX ALGORITHMS?

02 Saturday Sep 2023

Posted by bobdillon33@gmail.com in Uncategorized

≈ Comments Off on THE BEADY EYE SAY’S. ARE WE HANDING OVER HUGE SECTIONS OF OUR SOCIETIES TO BLACK-BOX ALGORITHMS?

Tags

Artificial Intelligence., digital surveillance., The Future of Mankind

( Five minute read)

Yes is the answer.

Right now, the state of the safety field is far behind the soaring investment in making AI systems more powerful, more capable, and more dangerous.

Using artificial intelligence (AI) technology to replace human decision-making will inevitably create new risks whose consequences are unforeseeable.

The more you put in, the more you get out.

That’s what drives the breathless energy that pervades so much of AI right now.

Consequences of these capabilities and systems–both intended and unintended–are significant, and growth in sensing technology will have far-reaching implications for our social norms and systems.

Data gathering is not inherently negative, it’s a matter of how transparent companies are in gathering information and the choices they make about how the data is used.

The growing ubiquity of algorithms in society raises a number of fundamental questions concerning governance of data, transparency of algorithms, legal and ethical frameworks for automated algorithmic decision-making, and the societal impacts of algorithmic automation itself. We are now in a rush to regulate, in ignorance of impacts that current law and regulation cannot deal with adequately.

However AI technology can provide sufficient transparency in explaining how AI decisions are made.

Transparency ex post can often be achieved through retrospective analysis of the technology’s operations, and will be sufficient if the main goal is to compensate victims of incorrect decisions.

Ex ante transparency is more challenging, and can limit the use of some AI technologies such as neural networks. It should only be demanded by regulation where the AI presents risks to fundamental rights, or where society needs reassuring that the technology can safely be used.

One thing we’re definitely not doing:

Understanding them better. As we develop more powerful systems, that failure will go from an academic puzzle to a huge, existential question. If anything, as the systems get bigger, interpretability — the work of understanding what’s going on inside AI models, and making sure they’re pursuing our goals rather than their own — gets harder.


We’re now at the point where powerful AI systems can be genuinely scary to interact with.

AI poses some wider concerns, including data monopolies, the challenge to democracy, public participation, and maintaining the public interest. Given the speed of development in the field, it’s long past time to move beyond a reactive mode, one where we only address AI’s downsides once they’re clear and present.

There is enormous opportunity for positive social impact from the rise of algorithms and machine learning. But this requires a licence to operate from the public, based on trustworthiness.

The very concept of fairness as an ethical value has not yet been sufficiently explored. Any regulations should ensure that systems adhering to them, are safe beyond a reasonable doubt. However, there is currently no specific regulation on AI and algorithmic decision-making in place.

Decisions concerning AI at a societal level should not be in the hands of “unelected tech leaders”.

We can’t only think about today’s systems, but where the entire enterprise is headed.

Most AI systems today are black-box models: systems that are viewed only in terms of their inputs and outputs. Scientists do not attempt to decipher the “black box” – the opaque processes the system undertakes – as long as they receive the outputs they are looking for.
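This is also what the “ex post” analysis mentioned earlier looks like in miniature: we never open the model, we only perturb inputs and watch outputs. The model and feature names below are hypothetical stand-ins, not any real deployed system.

```python
# Toy illustration of black-box ("ex post") analysis: vary inputs,
# observe outputs, never inspect internals.

def opaque_model(income: float, age: float) -> float:
    # Stand-in for an inscrutable deployed scoring system (hypothetical).
    return 0.7 * income / 50_000 + 0.3 * age / 100

def probe_sensitivity(model, base: dict, feature: str, delta: float) -> float:
    """Estimate how much nudging one input feature moves the output."""
    nudged = dict(base)
    nudged[feature] += delta
    return model(**nudged) - model(**base)

base = {"income": 40_000.0, "age": 30.0}
print(probe_sensitivity(opaque_model, base, "income", 10_000.0))  # income effect
print(probe_sensitivity(opaque_model, base, "age", 10.0))         # age effect
```

Probing of this sort can show *what* a system is sensitive to after the fact, which may suffice for compensating victims; it cannot show *why* the system behaves that way, which is the harder, ex ante problem.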

With quantum self-learning systems, it would be possible to build brains that could reproduce themselves on an assembly line and be conscious of their own existence.

———————–

This particular mad science might kill us all.

Here’s why.

The present approach to AI — called deep learning — has significantly outperformed other approaches to computer vision, language, translation, prediction, generation, and countless other problems.

The shift is about as subtle as the asteroid that wiped out the dinosaurs: neural-network-based AI systems have smashed every other competing technique on everything from computer vision to translation to chess.

No one has yet discovered the limits of this scaling principle, even though major tech companies now regularly run eye-popping multimillion-dollar training runs for their systems.

It’s not simply what they can do, but where they’re going.

With deep learning, improving systems doesn’t necessarily involve or require understanding what they’re doing. Often, a small tweak will improve performance substantially, but the engineers designing the systems don’t know why.

Intelligent agency is an extremely powerful force, and creating agents much more intelligent than us is playing with fire — especially given that if their objectives are problematic, such agents would plausibly have instrumental incentives to seek power over humans. We can’t pinpoint the exact reasons for our preferences, emotions, and desires at any given moment.

Current language models remain limited.

They lack “common sense” in many domains, still make basic mistakes about the world a child wouldn’t make, and will assert false things unhesitatingly. But the fact that they’re limited at the moment is no reason to be reassured.

As hard as that will likely prove, getting AI systems to behave themselves outwardly may be much easier than getting them to actually pursue our goals and not lie to us about their capabilities and intentions.

What makes it different from other powerful, emerging technologies like biotechnology, which could trigger terrible pandemics, or nuclear weapons, which could destroy the world?

The difference is that these tools, as destructive as they can be, are largely within our control.

If they cause catastrophe, it will be because we deliberately chose to use them, or failed to prevent their misuse by malign or careless human beings.

But AI is dangerous precisely because the day could come when it is no longer in our control at all. The result will be highly-capable, non-human agents actively working to gain and maintain power over their environment —agents in an adversarial relationship with humans who don’t want them to succeed.

Let us now assume, for the sake of argument, that these machines are a genuine possibility, and look at the consequences of constructing them. … There would be plenty to do in trying, say, to keep one’s intelligence up to the standard set by the machines, for it seems probable that once the machine thinking method had started, it would not take long to outstrip our feeble powers. … At some stage therefore we should have to expect the machines to take control.

So a powerful AI system that is trying to do something, while having goals that aren’t precisely the goals we intended it to have, may do that something in a manner that is unfathomably destructive. This is not because it hates humans and wants us to die, but because it didn’t care and was willing to, say, poison the entire atmosphere, or unleash a plague, if that happened to be the best way to do the things it was trying to do.

But while divides remain over what to expect from AI — and even many leading experts are highly uncertain — there’s a growing consensus that things could go really, really badly.

It’s worth pausing on that for a moment.

Nearly half of the smartest people working on AI believe there is a 1 in 10 chance or greater that their life’s work could end up contributing to the annihilation of humanity.

It’s not legal for a tech company to build a nuclear weapon on its own. But private companies are building systems that they themselves acknowledge will likely become much more dangerous than nuclear weapons.

For me, the moment of realization — that this is something different, this is unlike emerging technologies we’ve seen before — came from talking with GPT-3, telling it to answer the questions as an extremely intelligent and thoughtful person, and watching its responses immediately improve in quality.

Round table on Artificial Intelligence, in San Francisco

The challenges are here, and it’s just not clear if we’ll solve them in time.

One only has to look at the above photo.  A “wake-up call”

Speed is really important here.

“I don’t think ever in the history of human endeavour has there been as fundamental potential technological change as is presented by artificial intelligence,” Biden said at a news conference earlier this month. “It is staggering. It is staggering.”  He does a lot of that.

If we act too slowly, we will be behind by the time we take action, and any actions will be leapfrogged by the technology.

“My administration is committed to safeguarding Americans’ rights and safety while protecting privacy, to addressing bias and misinformation, to making sure AI systems are safe before they are released,”

This is hogwash.

If governments don’t step in, who will fill their place? AI, of course.

Picture of Hikvision cameras in a shopping centre in Beijing, 24 May 2019

Even if these narrower issues are solved, all political contexts run the risk of unlawfully exploiting AI surveillance technology to obtain certain political objectives.

A man walking past a screen showing images of China’s President Xi Jinping in Kashgar, in China’s northwest Xinjiang region

All countries with a population of at least 250,000 are using some form of AI surveillance system to monitor their citizens. “Some autocratic governments – for example, China, Russia, Saudi Arabia – are exploiting AI technology for mass surveillance purposes.”

One way of looking at the issue is not simply to focus on the surveillance technology, but on “the export of authoritarianism.”

One way to try to ensure continued political survival is to look to technology to enact repressive policies, and suppress the population from expressing things that would challenge a state.

Since AI will be the key to military superiority, investing in AI is a way to ensure and maintain dominance and power in the future.

There are plenty of problems with surveillance, but it may also be a fact of life going forward—and something people will need to get used to. Within a world where your data is everywhere, devices listen to your words, cameras monitor your face and GPS systems know your whereabouts, ubiquitous organizational tracking may be inevitable.

But like so many things, it’s not the what, it’s the how.

If tracking is occurring as a gotcha strategy—in which the goal is to catch people misbehaving or punish them—the relationships with employees and the culture will pay steep prices.

Ultimately, we need to do what’s right—not just what’s possible—by using our values as a guide, the use of technologies.

All human comments appreciated. All like clicks and abuse chucked in the bin.

Contact : bobdillon33@gmail.com


THE BEADY EYE ASK’S. WHAT SORT OF LIFE DO YOU WANT AND WHERE ARE WE GOING WITH AI?

19 Saturday Aug 2023

Posted by bobdillon33@gmail.com in Uncategorized

≈ Comments Off on THE BEADY EYE ASK’S. WHAT SORT OF LIFE DO YOU WANT AND WHERE ARE WE GOING WITH AI?

Tags

Artificial Intelligence., Capitalism and Greed, The Future of Mankind, Visions of the future.

( Ten minute read)

No matter what sort of life you might wish for, it will be governed by technology that you have little or no control over.

Is this true?

I want my life back. I want my soul back.

I don’t want my life to be fodder for Data harvesting.

I want digital blockchain ownership rights, so I can trade my investment in technology against profit-seeking algorithms.

I want to bring us back to a more practical reality, which is that technology is what we make it, and we need to stop abdicating our responsibility to steer technology toward good and away from bad.

I don’t think any technology has some deterministic endpoint. 

But there’s a catch.

Data is only as valuable as the insight you derive from it now or in the future. If we’re to avoid technological extremism we’re going to have to draw a line in the sand somewhere.

We know that, at the very least, some technologies are harming our natural world, our societies and, ultimately, ourselves, turning everything into Data.

According to a prediction from Gartner, “By 2024, 30% of digital businesses will mandate DNA storage trials.” This is a future that can only arrive when we learn to unlock the storage and computing capabilities of nature that have allowed life to thrive for billions of years.

Throughout human history, it has always taken significant resources to store data. Therefore, data has been stored only to the extent that it makes economic sense: if data cannot yield value, it is no longer an asset but a liability.

If everything is turned into data stored in the cloud, the exponential growth of data will overwhelm existing storage technology. The average person, after all, makes 35,000 decisions per day.

————

So where are we?

By way of this vicious technological cycle, we are consciously causing the sixth mass extinction of species.

Technology destroys places.

Aside from helping us massacre and pollute the oceans, rivers, topsoil, forests, mountains and meadows with ever-improving precision and speed, its complex set of cogs quickly spreads us out all over the world, safe in the knowledge that we can stay in touch with loved ones via technologies that offer what is really only a toxic substitute for real connection and time together.

It is badly injuring, perhaps fatally, rural communities, luring their youth into industrial and financial centres – cities – whose existence is premised, as the American writer and environmentalist Wendell Berry said, on the devastation of some other far-flung place, which consumers don’t have to look at thanks to the out-of-sight, out-of-mind distance afforded by technology.

And now look at the state of us.

Capitalism’s survival now depends not just on recapturing all of this data but on the CO2 it is releasing.

Workers must work and produce value. Capital must exploit them, connecting each of us, by a peculiar sort of invisible cable, to the global network of quarries, factories, courtrooms, mines, financial institutions, bureaucracies, armies, transport networks and workers needed to produce such things. Ours is a generic, transient and whimsical culture: we spend more time watching porn than making love, stare into screens instead of eyes, and let social media make us antisocial.

Technology destroys people.

We’re already cyborgs of a sort (pacemakers, hearing aids), and are well on our way to the Big Brother dystopia of the techno-utopians. Our toxic, sedentary lifestyles are causing industrial-scale afflictions of cancer, mental illness, obesity, heart disease, auto-immune disorders and food intolerances, along with those slow killers: loneliness, clock-watching and meaninglessness.

If one rejects technology, that means no laptop, no internet, no phone, no washing machine, no tap water, no gas, no fridge, no television or electronic music; no anything requiring the copper mining, oil rigging and plastics manufacturing essential to the production of a single toaster or solar photovoltaic system.

It destroys our relationship with the natural world: it separates us from nature while converting life into the cash that oils consumerist society.

Without biodiversity, life on earth as we know it would cease to exist.

And it’s not just about rare or endangered species; it’s everything from genes and bacteria to entire ecosystems like forests and coral reefs. So think about it this way: biodiversity is us, a big, interconnected web where each species has a role to play, and the only way to preserve it is for all of us to invest in, and benefit from, a world of green energy.

Awareness of the importance of biodiversity remains low, and inclusion of biodiversity in development projects is rare. Time is running out for our planet, for its people, and for the delicate ecosystems that hang in the balance. This is not the life that anyone would choose.

——————–

Rejecting technologies that my generation considers basic necessities of life, one might, instead of making a living to pay bills, make a life, denouncing complex technology simply by renouncing it.

Our cultures have made a Faustian pact (a pact whereby a person trades something of supreme moral or spiritual importance, such as personal values, the soul, or data, for some worldly or material benefit, such as knowledge, power, or riches) on my behalf with Speed, Numbers, Homogeneity, Efficiency and Schedules, and they are not listening when I say I want my soul back.

Our brains have become wired to process social information, and we usually feel better when we are connected. Social media taps into this tendency.

When you develop a population-scale technology that delivers social signals to the tune of trillions per day in real-time, the rise of social media isn’t unexpected. It’s like tossing a lit match into a pool of gasoline.

About 3.5 billion people on the planet, out of 7.7 billion, are active social media participants. Globally, during a typical day, people post 500 million tweets, share over 10 billion pieces of Facebook content, and watch over a billion hours of YouTube video.

Social media has become a vehicle for disinformation and political attacks from beyond sovereign borders.

What can we do about it?

We’re at a crossroads. What we do next is essential, so I want to equip people, policymakers, and platforms to help us achieve the good outcomes and avoid the bad outcomes.

People obtain bigger hits of dopamine — the chemical in our brains highly bound up with motivation and reward — when their social media posts receive more likes.

Researchers found that on Twitter, from 2006 to 2017, false news stories were 70 percent more likely to be retweeted than true ones. Why? Most likely because false news has greater novelty value compared to the truth, and provokes stronger reactions — especially disgust and surprise.

Social media is an attention economy, and businesses want you engaged. How do they get engagement? Well, they give you little dopamine hits, and … get you riled up. That’s why I call it the hype machine. We know strong emotions get us engaged, so [that favours] anger and salacious content.

Simply counting clicks is not enough.

To understand how we got here, and to get somewhere better, we need to:

Introduce automated and user-generated labelling of false news, and limit revenue collection based on false content. However, tagging some stories as false makes readers more willing to believe other stories and share them with friends, even if those additional, untagged stories also turn out to be false.

Allow people to find out what information companies have stored about them, and mandate data portability and interoperability, so consumers would own their identities and could freely switch from one network to another. We need to embrace this longer-term vision of a healthier communications ecosystem.

This could be achieved with blockchain platforms.

Blockchain is a shared, immutable ledger that facilitates the process of recording transactions and tracking assets. An asset can be tangible (a house, car, cash, land) or intangible (intellectual property, patents, copyrights, branding). Virtually anything of value can be tracked and traded on a blockchain network, reducing risk and cutting costs for all involved.

A blockchain network can track orders, payments, accounts, production and much more. And because members share a single view of the truth, you can see all details of a transaction end to end, giving you greater confidence, as well as new efficiencies and opportunities.

Each block is connected to the ones before and after it. These blocks form a chain of data as an asset moves from place to place or ownership changes hands. The blocks confirm the exact time and sequence of transactions, and they link securely together to prevent any block from being altered or inserted between two existing blocks.

Each additional block strengthens the verification of the previous block and hence the entire blockchain. This renders the blockchain tamper-evident, delivering the key strength of immutability; it removes the possibility of tampering by a malicious actor and builds a ledger of transactions you and other network members can trust.

With blockchain, as a member of a members-only network, you can rest assured that you are receiving accurate and timely data, and that your confidential blockchain records will be shared only with network members to whom you have specifically granted access.
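The tamper-evident chaining described above can be illustrated with a toy hash-linked ledger in Python. This is a minimal sketch of the general idea, not the API of any real blockchain platform; the function names and block layout are my own:

```python
import hashlib
import json

def block_hash(data: str, prev_hash: str) -> str:
    # Hash the block's contents together with the previous block's hash,
    # which is what links each block to the one before it.
    payload = json.dumps({"data": data, "prev_hash": prev_hash}, sort_keys=True)
    return hashlib.sha256(payload.encode()).hexdigest()

def add_block(chain: list, data: str) -> None:
    prev = chain[-1]["hash"] if chain else "0" * 64
    chain.append({"data": data, "prev_hash": prev,
                  "hash": block_hash(data, prev)})

def is_valid(chain: list) -> bool:
    # Recompute every hash; altering any block breaks every link after it.
    for i, block in enumerate(chain):
        expected_prev = chain[i - 1]["hash"] if i > 0 else "0" * 64
        if block["prev_hash"] != expected_prev:
            return False
        if block["hash"] != block_hash(block["data"], block["prev_hash"]):
            return False
    return True

chain = []
add_block(chain, "asset created")
add_block(chain, "ownership transferred")
assert is_valid(chain)

chain[0]["data"] = "tampered"   # a malicious edit...
assert not is_valid(chain)      # ...is immediately detectable
```

A real network adds consensus, signatures and distribution on top, but the tamper-evidence itself comes from nothing more exotic than this chain of hashes.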
If things continue without change, Facebook and the other social media giants risk substantial civic backlash and user burnout. If you want me to stay on social media to speak out about these technology issues, make a comment.

All human comments appreciated. All like clicks and abuse chucked in the bin.

Contact: bobdillon33@gmail.com

https://youtu.be/QJn28fFKUR0
https://youtu.be/Se91Pn3xxSs


THE BEADY EYE ASK’S: FROM HERE INTO THE FUTURE WILL TECHNOLOGY’S BE THE ONLY DIFFERENCE BETWEEN GENERATIONS?

17 Thursday Aug 2023

Posted by bobdillon33@gmail.com in DIFFERENCE BETWEEN GENERATIONS, Uncategorized

≈ Comments Off on THE BEADY EYE ASK’S: FROM HERE INTO THE FUTURE WILL TECHNOLOGY’S BE THE ONLY DIFFERENCE BETWEEN GENERATIONS?

Tags

Artificial Intelligence., The Future of Mankind

( Fifteen minute read)

We could be the first in human history to leave our children nothing.

No greenhouse-gas emissions, no poverty, and no biodiversity loss. But they say the attention span of the social media generation is just eight seconds.

So here are a few facts.

There are 8 billion of us on Earth, with around 35 megacities, built on the back of fossil fuels and fed by monocultural farming. Only 4% of mammals (by biomass) are wild; the rest are ourselves and our livestock. There is no technology that will save humanity from climate change.

Only if we put the Earth first will there be a future generation.

There will be no encore. 

————

When we talk about generational differences, we can no longer just identify differences between generations; we can identify differences within generations as well.

Technology is the catalyst for the rapidity with which generations now evolve. Change, hitherto a gradual process, has become, for us, cataclysmic.

It has become a tidal wave that threatens to overwhelm us.

A decade today is the equivalent of a generation, and standards and values topple like ninepins.

Take smartphones for example. They have only been in widespread use for a decade, but they’re now so fundamental to our daily lives that it’s hard to remember life without them.

How could we possibly see those who can remember life before the smartphone as part of the same generation as those who’ve known nothing else?

If we name each generation based on the technological conditions it experienced, generations may soon encompass only a few years apiece. Slicing the population into ever-narrower generations, each defined by its very specific relationship to technology, is fundamental to how we think about the relationship between age, culture, and technology.

They include the digital natives, the net generation, the Google generation or the millennials.

But generation gaps did not begin with the invention of the microchip. What’s new is the fine-slicing of generational divides, the centrality of technology to defining each successive generation.

It’s not politics or sociology, because they don’t move fast enough, it has to be video based.

We’ve moved from a view of generations as biological “in the sense of the generation of a butterfly from a caterpillar,” as Hentea puts it, to a view of generations as sociological. By no longer limiting political power to a defined group but rather encouraging political participation across social strata.

At the same time, democratization paradoxically created generational categories.

With aristocratic privileges abolished and duties diminished, the Internet generation provided a fall-back for social belonging:

Not everyone can belong to my generation, so the vestigial desire for distinction is satisfied, but at the same time, no one remains without a generation, so the democratic impulse toward equality is met.

Since the dotcom bubble burst back in 2000, technology has radically transformed our societies and our daily lives. Today over half the global population has access to the internet. At the same time, technology becoming more personal and portable has greatly shaped how and where we consume media.

While these new online communities and communication channels have offered great spaces for alternative voices, their increased use has also brought issues of increased disinformation and polarization.

It is indisputable that thanks to technology, we are getting a chance to live a life our predecessors could not even dream about.

The next generation is not going to sit and read policy and procedure manuals. Nor are they going to spend their time dealing with complex reports.

The role of technology in shaping an emergent generational consciousness seems obvious, yet no one attributes the evils of the age to its machines. Having grown up with mobile devices and social networks, this generation brings collaborative capabilities into the workplace that are profound compared to what we saw with Millennials just 10 years prior.

————-

However, as we know, each generation lives in the shadow of the one before it.

The technology they use is filtered through all the positives and negatives of the generation before them.

But do all tech advancements bring sole good to our lives?

Or, maybe, the impact of tech innovations is quite ambiguous.

It’s easy to become desensitized to the importance of innovations and advancements for the overall progress of society.

All countries share responsibility for the long-term stability of Earth’s natural cycles, on which the planet’s ability to support us depends. We are the first generation that can make an informed choice about the direction our planet will take. Either we leave our descendants an endowment of zero poverty, zero fossil-fuel use, and zero biodiversity loss, or we leave them facing a tax bill from Earth that could wipe them out.

There’s no sugar-coating the truth that different generations interact with technology differently.

Advancements in technology have already tapped into every area of life. There is a dedicated mobile app for everything.

Every living person today can be considered part of a digital generation, because — no matter how much we engage with technology — we are living in a digital-first world. Of course, the degree to which each person is comfortable and willing to embrace technology is also dependent on when and where they entered the world.

To some degree, it’s actually something we’re born into, depending on how tech-forward the world was when we entered it.

Technology is ever-evolving and each digital generation adapts to these advancements at their own pace.

However, the digital generation can be considered as encompassing only people who were born into or raised in the digital era, meaning with widespread access to modern-age technology such as smartphones, tablets, computers, and digital information like the internet.

There are differences in the motivations underlying technology behaviour in each generational group, and there may be variances in the way each generational group uses and gets engaged with technology.

Research findings indicate that millennials mostly use and get engaged with technologies for entertainment and hedonic purposes. They use technology as a means to go after their aspirations and dreams, looking to gather and share information that quickly moves them and their ideas forward.

They are prone to act faster once they make a decision. Meanwhile, technology has made a true quantum leap, with augmented reality, blockchain, artificial intelligence, and 3D printing being just a few of the most recent inventions.

The days of simple demographic segmentation are gone.

With every new generation, the access to limitless amounts of data has created a much more complex level of fragmentation and micro-segmentation.

Today the average person has an attention span of just 8 seconds.

Digital citizenship now applies to everyone but not everyone is the same in any generation, and everyone is subject to different economic circumstances regardless of their generation.

Though it may be tough to predict which advancements technology would bring next, some innovations are already changing our beliefs about the world around us.

Clearly, technology by itself is neither good nor bad.

It is only the way and extent to which we use it that matters.

While some people just want to sit back and watch the world burn.

We are now the generation under constant surveillance, sharing our data with companies all the time online, allowing them a glimpse into the digital traces we leave – how many, what kinds, and from what devices.

The use of surveillance cameras in modern society has always been divisive, requiring governing bodies to perform a fine balancing act between respecting the nation’s civil liberties and keeping its citizens safe and secure. It’s a multi-layered issue incorporating many dimensions, including technology, legislation, code of ethics and conduct, and one that triggers conversation year-round.

When the Covid pandemic hit, a number of governments rolled out or extended surveillance programs of unprecedented scale and intrusiveness, in the belief, however misguided, that perpetual monitoring would help restrict people’s movements and therefore the spread of the virus.

It’s important to ask when technology adds value, and for whom.

If technology can indeed aid in pandemic response and recovery, it is essential to have open, inclusive, transparent, and honest public discussions on the appropriate type of public digital infrastructure people need to thrive.

The rush to embrace digital contact tracing has opened a Pandora’s box of privacy.

As the technology develops, we are seeing more sophisticated AI being integrated into surveillance systems and facial recognition technology, in particular, is creating a stir in terms of practice and legislation. Surveillance is a vast and varied topic and one that can present some very emotive and social issues, as well as legislative and technological ones. Without real reflection on the rights implications, there’s a real risk of deepening inequality and vesting considerable power to coerce and control people in governments and the private sector.

Any deployment of technology should be rooted in human rights standards, centred on enabling people to live a dignified life.

It’s up to every digital citizen — whether they’re a digital native or digital immigrant — to practice cyber safety and, in turn, instil it in digital generations to come.

New technologies such as virtual visits and chatbots are being used to deliver healthcare to individuals, especially during Covid-19.

The ability to understand and respect someone else’s feelings is always important but even more so online. That’s because written communications and online interactions, such as text messages and social media comments, are often missing the nonverbal cues we have in the physical world that give us a well-rounded understanding of someone else’s stance.

Every user of the internet has a right to privacy. Still, we share, and the law still applies when we’re online.

On the downside, some technological developments prove to be a curse rather than a blessing. Overindulgence in digital apps and smart devices, and overreliance on online tools, may sometimes lead to tragic effects.

If you believe that technological conditions profoundly shape the life experience and perspectives of each successive generation, then those generations will only get narrower.

Doesn’t the leap from Facebook to Snapchat constitute its own profound generational divide?

If we name each generation based on the specific technological conditions it experienced during childhood or adolescence, we may soon be dealing with generations that encompass only a few years apiece. At that point, the very idea of “generations” will cease to have much utility for social scientists, since it will be very hard to analyse attitudinal or behavioural differences between generations that are just a few years apart.

I do expect new social platforms to emerge that focus on privacy and ‘fake-free’ information, or at least they will claim to be so. Proving that to a jaded public will be a challenge. Resisting the temptation to exploit all that data will be extremely hard. And how to pay for it all? If it is subscriber-paid, then only the wealthy will be able to afford it. But at the end of the decade, humans will still be humans, and both greed and generosity, love and hate, truth and lies, will likely still exist in the same proportions as they do today.

We are looking to technology to lead us towards a carbon-neutral world, but there are other factors at work, such as the growth of authoritarian governments and social inequalities.

Climate change will push temperatures up or down until a tipping point plunges us into an irreversible disaster, with consequences for survival we can scarcely imagine.

We are headed toward an increasingly panoptic society, as represented by the Chinese government’s emerging social credit scale. In other words, just as the digital world shapes the physical world, the physical world shapes our digital world as well.

All human comments appreciated. All like clicks and abuse chucked in the bin.

Contact : bobdillon33@gmail.com


THE BEADY EYE ASK’S: WILL A QUANTUM COMPUTER SOLVE THE WORLD PROBLEMS?

31 Monday Jul 2023

Posted by bobdillon33@gmail.com in #whatif.com, Quantum computers., State of the world, Sustaniability, Technology v Humanity, The Future, THE WORLD YOU LIVE IN., THIS IS THE STATE OF THE WORLD.  , WHAT IS TRUTH

≈ Comments Off on THE BEADY EYE ASK’S: WILL A QUANTUM COMPUTER SOLVE THE WORLD PROBLEMS?

Tags

Artificial Intelligence., Quantum computers., The Future of Mankind, Visions of the future.

( Five minute read)

We have very limited ability at this stage to imagine the applications of quantum computing, but in time they could solve countless problems – and create a lot of new ones.

In order to prepare for what is coming, educating ourselves on the reality of quantum computers, and the impacts they could have around the world, is now paramount if we wish to keep the values we place on life.

Soon will come a time when trusting a quantum computer will require a leap of faith.

Every year, new computers are being developed that are faster and smarter than ever before. But if you really want to take things to the next level, you’ve got to go quantum.

This new frontier of humanity could open hitherto unfathomable frontiers in mathematics and science.

Quantum’s industrial uses are boundless.

In the future, we will rely on access to quantum technology everywhere in the world, but it brings risks, and a national-security migraine: its problem-solving capacity will soon render all existing cryptography obsolete, jeopardizing communications, financial transactions, and even military defences.

Modern warfare and national–security mechanisms are grounded in the speed and precision of decision making. If your computer is faster than theirs, you win.

The digital devices in our everyday lives – from laptop computers to smartphones – are all based on 0s and 1s: so-called ‘bits’. But quantum computers are based on ‘qubits’ – the quantum 0s and 1s that are altogether stranger, but also more powerful. (So-called quantum particles can be in two places at the same time and also strangely connected even though they are millions of miles apart.)

They will pave the way for systems that can solve complex real-world problems that the best computers we have today are incapable of.

Currently, computers solve problems in a simple linear way, one calculation at a time.

A quantum computer could do multiple calculations at the same time, with entangled qubits millions of miles apart mirroring each other’s states instantaneously, transporting information from one chip to another with a reliability of 99.999993% at record speeds.
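The bits-versus-qubits distinction above can be sketched with a tiny state-vector simulation. This is a toy illustration of the textbook mathematics (NumPy assumed; variable names are my own), not a model of real quantum hardware:

```python
import numpy as np

# A classical bit is either 0 or 1. A qubit is a vector of two complex
# amplitudes; the squared magnitudes give the probabilities of
# measuring 0 or 1.
zero = np.array([1, 0], dtype=complex)              # the state |0>

# The Hadamard gate puts a qubit into an equal superposition of 0 and 1.
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)

superposed = H @ zero
probs = np.abs(superposed) ** 2                     # equal 50/50 chance

# n qubits need 2**n amplitudes: the state space grows exponentially,
# which is the source of quantum computing's parallelism.
state = zero
for _ in range(2):                                  # build a 3-qubit register
    state = np.kron(state, zero)

print(probs)        # probabilities of measuring 0 and 1
print(len(state))   # 8 amplitudes for just 3 qubits
```

Even this toy shows why classical simulation breaks down: every extra qubit doubles the number of amplitudes a conventional computer must track.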

——-

Now that we understand what AI is capable of we also need to know its limits.

Before long, much of the material on the internet will have been written, or at least co-written, by AIs.

What will happen when AIs are being trained on texts they have written themselves?

The amount of data consumed in this way keeps going up and up.

What happens when data runs out?

——-

Generative AI is in a Cambrian explosion of capability.

Generative AI is now creating art, making music, generating synthetic humans, birthing artificial influencers and celebrities, literally generating video from text, and threatening to upend our notions of creativity, art, the public domain, copyright, and the nature of reality itself.

This is just the beginning: the ultimate thing for AI to create is more of itself.

Maybe AI is also at the point where it can start writing the code that makes its own AI even better. That is where the true singularity lies: when it can set itself to improve itself, and do so better than a human can.

It’s impossible to speculate what society could truly look like in such a situation.

But I think in most of our lifetimes we’re going to experience that. Exciting is one word for that.

Another is terrifying.  Machines that can outthink humans. Your brain is the most intelligent learning algorithm in the universe that we know so far. The truth is that for now, AGI remains a fantasy.

Even if AGI is never achieved, the self-teaching approach may still change what sorts of AI are created.

The rapid development of AI that can train itself also raises questions about how well we can control its growth. If AI starts to generate intelligence by itself, there’s no guarantee that it will be human-like.

Whether this will happen, and how it will progress if it does is impossible to know, but there’s no guarantee that humanity as we know it would survive such a time, or that the vast AI entities potentially created by such an explosion would be benevolent to life as we know it.

Where AI can really be empowering is in the long tail, where the alternative is non-consumption: content that could not have been afforded in the first place.

You can imagine this with very obscure topics.

You could even imagine it for news: perhaps something happened in your local neighbourhood that only 20 people want to read about, so it doesn’t make sense for a human to write the article.

Generative artificial intelligence is already producing images like a photographer, creating music like an artist, selling like a sales rep, diagnosing disease like a doctor, and (gulp!) writing text like a human.

Quantum technology could also be used to design drugs more quickly by accurately simulating their chemical reactions, a calculation too difficult for current supercomputers, and could provide even more accurate systems to forecast weather and project the impact of climate change.

Rather than humans teaching machines to think like humans, machines might teach humans new ways of thinking.

All human comments appreciated. All like clicks and abuse chucked in the bin.

Contact: bobdillon33@gmail.com


THE BEADY EYE SAY’S: Misaligned or confused and conflated goals of an AI will be a significant concern of the future.

21 Friday Jul 2023

Posted by bobdillon33@gmail.com in 2023 the year of disconnection., Collective stupidity.

≈ Comments Off on THE BEADY EYE SAY’S: Misaligned or confused and conflated goals of an AI will be a significant concern of the future.

Tags

Artificial Intelligence., Capitalism and Greed, Climate change, The Future of Mankind, Visions of the future.

( Fourteen minute read)

The biggest problem of our world today is not artificial intelligence but natural stupidity!

When it comes to climate change, profit-seeking algorithms, and the military race to send autonomous killer drones onto the battlefield – welcome to the perplexing world of collective stupidity!

The Trump campaign and Brexit – where we all woke up the next day astounded that “this could happen” – are both prime examples of campaigns that leaned heavily on the emotions of anxiety, fear and tribalism, and on collective stupidity.

Since then, there has been much unpacking of “what happened” and talk about “it could only have been “stupid” people” who could have voted that way.

But is this true?

Yes, profound lapses in logic can plague even the smartest mind.

There are intelligent people who are stupid. So why the paradox? Stupidity is not a lack of IQ.

Unconscious emotions drive our decisions. Intuitive feelings gave us an evolutionary advantage in caveman days, a survival way of dealing with information overload, and they can still play a useful role as we stand on the precipice of a critical moment with AI.

All over the world, we are in the midst of a great shift. The data revolution has given way to the analytics movement. Press our emotional buttons and our judgement is derailed. Hence the temptation to choose the first solution that comes to mind, even if obviously flawed.

It seems that nothing encourages stupidity more than group culture.

An uncritical dependence on set rules often leads to absurd decisions, the-way-we-do-things-here, often not being the most intelligent way.

And the more intelligent someone is, the more disastrous the results of their stupidity.

 ————–

With generative AI technologies, data-driven insights are reshaping outcomes without the need to write code; they are becoming truly intrusive, enabling decision-makers, analysts, data scientists and developers to collaborate and develop analytical insights in real time.

SO, WHAT CAN WE DO TO PROTECT OURSELVES FROM DOING STUPID THINGS?

Knowledge of our foolish nature can help us escape its grasp.

We can step outside the groupthink of Google algorithms to question where we are and where we are going, rather than reverting to a culture of thinking that relies on “everyone knows it’s true”.

Stupidity is all around us. As long as there have been humans there has been human stupidity,

—————

Over the past decade, we’ve seen the volume of data available to decision-makers grow exponentially.

In this intelligence era, it’s no longer about how much data one company can generate, it’s about how they use it. Corporate leaders, academics, policymakers, and countless others are looking for ways to harness generative AI technology, which has the potential to transform the way we learn, work, and more.

Generative AI is evolving quickly, but to truly get the most benefit from this groundbreaking technology, you need to manage a wide array of risks.

Why?

Because generative AI is so powerful and easy to use, it’s poised to change what is real and what is not.

Unlike earlier disruptions, the reality of the generative AI race is already looking out of control. 

This could be the first “disruptive” new tech in a long time built and controlled largely by giants in the tech world which could entrench, rather than shake up, the status quo.

Right now, only a handful of companies — including Google, Meta, Amazon and Microsoft (through its $10 billion investment in OpenAI) — are responsible for the world's leading large language models.

So what can policymakers do about AI?

Is there a way to prevent the hottest new technology from simply cementing the power of the tech giants? 

Virtual worlds should not become walled gardens. 

It is abundantly clear that leaving it to the market to decide how these powerful technologies are used, and by whom, is a very risky proposition.

———

For decades, many of the great scientific and philosophical minds had conceived of creating collective intelligence in the form of a globally connected space to pool our knowledge.

Social media and smartphones are digitalizing citizens, and emergent behaviour is the result.

This is a phenomenon that occurs in complex adaptive systems. In such systems, simple components interact in such a way that the whole becomes greater than the sum of its parts.

Our collective intelligence has now become what can only be referred to as our collective stupidity.

————-

The Dark Side — Collective Stupidity.

Collective stupidity can be perplexing and is often harmless.

How is it possible that a group of smart individuals can sometimes make decisions so perplexing, it feels like the intelligence just evaporated?

How does collective stupidity happen?

Are we better off not underestimating the effects of this phenomenon?

A system based on generating clicks and interactions has created an environment for the outlandish and bizarre to flourish, with expertise falling by the wayside.

Broad, anonymous social networks breed collective stupidity.

Top Social Media Statistics And Trends Of 2023

In 2023, an estimated 4.9 billion people use social media across the world; this number is expected to jump to approximately 5.85 billion users by 2027.

The driving force: the increasing global adoption of 5G technology.

These staggering numbers aren't just statistics, either. They highlight the expansive influence and potential of social media platforms. Right now, 1.9 billion daily users access Facebook's platform, Twitter gained 319 new users per minute in 2020, and 500 hours of video are uploaded to YouTube in the same amount of time. Millions of businesses around the world rely on Facebook to connect with people.

Threads, Meta's new social network, had 100 million sign-ups in its first five days.

With this much content being generated, how can experts possibly stand out from the crowd?

By emulating the human ability to forget some of the data, psychological AIs will transform algorithmic accuracy.

Machine learning, on the other hand, typically takes a different path: It sees reasoning as a categorization task with a fixed set of predetermined labels. It views the world as a fixed space of possibilities, enumerating and weighing them all.
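That fixed-label view can be caricatured in a minimal sketch (the labels and keywords below are hypothetical, purely for illustration): the model scores each predetermined label and must answer with one of them, however novel the input.

```python
# Minimal sketch of classification over a fixed label set: the model can only
# ever answer with one of the labels it was given in advance.
# Labels and keyword sets are made up for illustration.

LABELS = {
    "sports": {"match", "goal", "team"},
    "politics": {"election", "vote", "policy"},
    "weather": {"rain", "sunny", "storm"},
}

def classify(text: str) -> str:
    """Score each predetermined label by keyword overlap and pick the best."""
    words = set(text.lower().split())
    scores = {label: len(words & keywords) for label, keywords in LABELS.items()}
    # The answer space is closed: even nonsense input maps onto some label.
    return max(scores, key=scores.get)

print(classify("the team scored a late goal"))  # -> sports
```

Note how the world is treated as a fixed space of possibilities: the function enumerates and weighs every label, and nothing outside that list can ever be the answer.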

Social media networks are not very sociable these days. Feeds are algorithmic, which means you see whatever the apps want to show you.

All this has eroded public confidence.

——–

We all have intelligence and expertise to offer, even if the internet leaves us feeling isolated at times.

With so much misguided thought and active disinformation online, it has become difficult for people with insight worth sharing to do so. Behind the anonymity of the web, anyone can claim to be an expert. When everybody is an expert, nobody is.

With online communities, the relationship between experts and their audience becomes a two-way street.

Many of the issues we throw billions of dollars at and attempt to solve with technology could be easily achieved if we were able to better utilize our collective intelligence.

Technology is the means, not the end; its potential is massive, but not as great as our own.

So we wildly overestimate our access to our own mind.

In essence, the same emergent behaviour that typically helps the group survive sometimes leads to collective stupidity and death.

The Internet gave us the ability to connect with people on a global scale.

But its click-baiting algorithms and lack of regulation also brought with them chaos. As social media came to dominate the landscape, it made using the internet for the purpose of collective intelligence increasingly difficult.

You see, with stupidity, or stupid people for that matter, protesting or reasoning doesn't really work, mainly because of their strong prejudice. They simply disbelieve any facts or reasoning we provide; in most cases they deny the arguments outright, and if they can't, they dismiss them as trivial exceptions.

People are often made stupid under certain circumstances. Maybe they allow this to happen to themselves. It is a group phenomenon.

The nature of stupidity has its roots deep in the subconscious. It is largely driven by the fundamental mechanics of our experience: following the herd. Herd-following is arguably the most prominent driver, and mostly it makes sense. If information is lacking, doing what others are doing is probably the best bet. But this doesn't work all the time.

In fact, herd behaviour is among the pre-eminent causes of stupidity.
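The herd dynamic can be sketched as an information-cascade toy model (the parameters are illustrative, not drawn from any study): each agent has a noisy private signal about the truth but defers to the crowd once its lead outweighs that signal, so a few early wrong guesses can lock the whole herd into the wrong answer.

```python
import random

# Toy information cascade: agents choose in sequence. Each has a private
# signal that is right with probability `signal_accuracy`, but announces the
# majority view of its predecessors once that majority outweighs one signal.

def run_herd(n_agents: int, signal_accuracy: float, seed: int) -> list:
    rng = random.Random(seed)
    truth = True
    choices = []
    for _ in range(n_agents):
        private = truth if rng.random() < signal_accuracy else not truth
        yes = sum(choices)
        no = len(choices) - yes
        # Follow the crowd only when its lead exceeds one private signal.
        if abs(yes - no) > 1:
            choices.append(yes > no)
        else:
            choices.append(private)
    return choices

choices = run_herd(n_agents=50, signal_accuracy=0.7, seed=3)
print(f"{sum(choices)}/50 agents ended up choosing the true option")
```

Run it with different seeds and you will see the herd sometimes converge almost unanimously on the wrong option, even though every individual signal is right 70% of the time.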

It is not that intellect suddenly fails. But people are deprived of inner independence, so they give up autonomous positions under the overwhelming impact. We always feel that we are dealing with slogans, signs, buzzwords, and not with the real person. As if they are under the spell of someone or something.

As this happens, we are also creating (unknowingly) various risks to our socio-economic structure, civilization in general, and to some extent, for the human species.

Species-level risks are not evident yet; however, the other two, the socio-economic and civilizational risks, are too significant to be ignored.

So far, several significant building blocks have been developed and are in progress. When we stitch them together, AI’s capability will increase multifold, which should be a more significant concern for us.

It takes the already tiny amount of time we have to change our ways, and save the planet, and practically cuts it in half.

We have less than 27 years to get our collective act together and reshape how our entire civilisation operates. And I’m not sure if we can do that… The more concerning part is about the risks that we have not thought of yet. We may not be able to avoid all of them, but we can understand them to address them.

Our over-enthusiasm for new technologies has somehow clouded our quality expectations. So much so that we have almost stopped demanding the right quality solutions. We are so fond of this newness that we are ignoring flaws in new technologies.

The problem with these low-quality solutions is that subpar techs’ flaws do not surface until it is too late!

In many cases, the damage is already done and may be irreversible.

Misalignment between our goals and the machine’s goals could be dangerous. It is easier to correct a team of humans; doing that with a rampant machine could be a very tricky and arduous task.

Achieving a level of alignment with human-level common sense is quite tricky for a computerized system. Without having any balanced approach like a scorecard, this may not be achievable.

Technology is an answer to the “how” of the strategy, but without having the right “why” and “what” in place, it can do more damage than good. When AI systems do not know why, there will always be a lurking risk of discrimination, bias, or an illogical outcome.

Weapon systems equipped with AI are the most vulnerable to the right-AI-in-the-wrong-hands problem and therefore carry the greatest risks. The Russia/Ukraine war is now the laboratory of drone warfare. The possibility of AI systems being used by some group or country to overpower others is a significant risk.

Overall, the risk of the right AI in the wrong hands is one of the critical challenges and warrants substantial attention if it is to be avoided.

Extending AI and automation beyond logical limits could potentially alter our perception of what humans can do.

We still value human interaction, communication skills, emotional intelligence, and several other qualities in humans. What happens when an AI app takes over? What happened to AI doing mundane tasks and leaving time for us to do what we like and love?

The most important thing in artificial intelligence isn’t the fancy algorithms.

Let’s assume the worst case and we have a general purpose AI – that can do everything a human can.

What would happen?

Waiting for a smartphone app to tell us what to do next and how we might be feeling now!

The enormous power carried by the grey matter in our heads may become blunt and eventually useless if we never exercise it, turning it into just some slush. The old saying, “use it or lose it,” is explicitly applicable in this case. Half knowledge is more dangerous than ignorance!

Trust me, a lot can happen in 24 hours. The lesson here is – in times like this, the first principles-based thinking is your best bet.

Our problem is that on one side we have intelligent people who are full of doubts, and on the other we have stupid people full of confidence. Stupidity is not an intellectual failing; it's a moral failing. And it happens because we believe only in feelings, not in facts or truthfulness.

When we see and hear all this, we wonder if there is any antidote? If there is any way to stop this from happening?

The ultimate test of a moral society is the kind of world that it leaves to its children.

So the question now is, “How are we going to fight this AI pandemic?”

We will finally recognize that more computing power makes machines faster, not smarter.

If a problem is too difficult for a machine, it is we who will have to adapt to its limited abilities.

There is already a frustrating struggle for humans and machines to understand one another in natural language. Soon, we will live in a world where, regardless of your programming abilities, the main limitations are simply curiosity and imagination.

The Garland Test, inspired by dialogue from the film Ex Machina, is passed when a person feels that a machine has consciousness, even though they know it is a machine.

Will computers pass the Garland Test in 2023? I doubt it. But what I can predict is that claims like this will be made, resulting in yet more cycles of hype, confusion, and distraction from the many problems that even present-day AI is giving rise to.

This will force us to reconsider how our behaviours today might influence digital versions of ourselves set to outlive us.

Faced with this prospect of virtual immortality, 2023 will be the year we broaden our definition of what it means to live forever, a moral question that will fundamentally change how we live our day-to-day lives, but also what it means to be immortally stupid.

"I hope as a consequence that the needs and wonder and importance of the natural world are seen. We tend to think we are the be-all and end-all, but we're not. We're both the victims and benefactors, and the sooner we can realize that the natural world goes its way, not our way, the better." Sir David Attenborough.

All human comments appreciated. All like clicks and abuse chucked in the bin.

Contact: bobdillon33@gmail.com


THE BEADY EYE SAY’S. Ten years from now, we may look back on this moment in history as a colossal mistake or it could be the greatest empowerment moment in human history.

11 Tuesday Jul 2023

Posted by bobdillon33@gmail.com in #whatif.com, 2023 the year of disconnection., Artificial Intelligence.

≈ Comments Off on THE BEADY EYE SAY’S. Ten years from now, we may look back on this moment in history as a colossal mistake or it could be the greatest empowerment moment in human history.

Tags

Algorithms., Artificial Intelligence., Capitalism vs. the Climate., Climate change, Technology, The Future of Mankind, Visions of the future.

( Four minute read)

This year, the world got a rude awakening to the insane power of AI when OpenAI unleashed ChatGPT4 onto the world. This AI text generator/chatbot seemed to be able to replicate human-generated content so well that even AI detection software struggled to tell the difference between the two.

This is not an alien invasion of intelligent machines; it’s the result of our own efforts to make our infrastructure and our way of life more intelligent.

It’s part of human endeavour. We merge with our machines. Ultimately, they will extend who we are.

Our mobile phone, for example, makes us more intelligent and able to communicate with each other. It’s really part of us already. It might not be literally connected to you, but nobody leaves home without one.

It’s like half your brain.

Thinking of AI as a futuristic tool that will lead to immeasurable good or harm is a distraction from the ways we are using it now.

How do we ensure that the AI we build, which might very well be significantly smarter than any person who has ever lived, is aligned with the interests of its creators and of the human race?

What if, at some point in the near future, computer scientists build an AI that passes a threshold of superintelligence and can build other superintelligent AIs?

An unaligned super intelligent AI could be quite a problem.

For example, we’ve been predicting for decades that AI will replace radiologists, but machine learning for radiology is still a complement for doctors rather than a replacement. Let’s hope this is a sign of AI’s relationship to the rest of humanity—that it will serve willingly as the ship’s first mate rather than play the part of the fateful iceberg.

No laws can prevent China, Russia, a terrorist network or a rogue psychopath from developing the most manipulative and dishonest AI you could possibly imagine.

We can’t trust some speculative future technology to rescue us.

Climate change is already killing people, and many more people are going to die even in a best-case scenario, but we get to decide now just how bad it gets.

Action taken decades from now is much less valuable than action taken soon.

The first role AI can play in climate action is distilling raw data into useful information – taking big datasets, which would take too much time for a human to process, and pulling information out in real time to guide policy or private-sector action.

Everyone wants a silver bullet to solve climate change; unfortunately there isn’t one. But there are lots of ways AI can help fight climate change. While there is no single big thing that AI will do, there are many medium-sized things.

[Image: An attendee controls an AI-powered prosthetic hand during the 2021 World Artificial Intelligence Conference in Shanghai.]

Most movies about AI have an “us versus them” mentality, but that’s really not the case.

Even if one were to stand on the side of curious skepticism (which feels natural), we ought to be fairly terrified by this nonzero chance of humanity inventing itself into extinction.

Whereas AI is, for now, pure software blooming inside computers. Someday soon, however, AI might read everything, literally every thing, swallowing it all into a black hole, and not even God knows what it will be recycled into.

Just shovel ever-larger amounts of human-created text into its maw, and wait for wondrous new skills to manifest. With enough data, this approach could perhaps even yield a more fluid intelligence, or a humanlike artificial mind akin to those that haunt nearly all of our mythologies of the future.

On the syllabus at the moment: a decent fraction of all the surviving text that we have ever produced.

Codifying the philosophy into a set of wise laws and regulations to ensure the good behaviour of our superintelligent AI, with laws making it illegal, for example, to develop AI systems that manipulate domestic or foreign actors, is pie in the sky.

In the next decade, autocrats and terrorist networks could be able to cheaply build diabolical AI that can accomplish some of the goals outlined in the Yudkowsky story. The key issue is not "human-competitive" intelligence (as his open letter puts it); it's what happens after AI gets to smarter-than-human intelligence.

Key thresholds here may not be obvious.

We definitely can’t calculate in advance what happens when, and it currently seems imaginable that a research lab would cross critical lines without noticing.

AT THE MOMENT ALL WE HAVE IS A COPING MECHANISM.

Like non-proliferation laws for nuclear weaponry that are hard to enforce.

Nuclear weapons require raw material that is scarce and needs expensive refinement. Software is easier, and this technology is improving by the month.

[Image: Turing test: a robot and a human sitting inside cubes, facing each other.]

We have years to debate how education ought to change in response to these tools, but something interesting and important is undoubtedly happening.

If we figured out how people are going to share in the wealth that AI unlocks, then I think we could end up in a world where people don’t have to work to eat, and are instead taking on projects because they are meaningful to them.

But where do AI companies get this truly astonishing amount of high-quality data from?

Well, to put it bluntly, they steal it.

But as it stands, the AI boom might be approaching a flashpoint where these models can’t avoid consuming their own output, leading to a gradual decline in their effectiveness. This will only be accelerated as AI-generated content perfuses the internet over the coming years, making it harder and harder to source genuine human-made content.

AI is viewed as a strategic technology to lead us into the future.

So what should be done:

  • Many people lack a full understanding of AI and therefore are more likely to view it as a nebulous cloud instead of a powerful driving force that can create a lot of value for society;
  • Instead of writing off AI as too complicated for the average person to understand, we should seek to make AI accessible to everyone in society. It shouldn’t be just the scientists and engineers who understand it; through adequate education, communication and collaboration, people will understand the potential value that AI can create for the community.
  • We should democratize AI, meaning that the technology should belong to and benefit all of society; and we should be realistic about where we are in AI’s development.
  • Most of the achievements we have made are, in fact, based on having a huge amount of (labelled) data, rather than on AI’s ability to be intelligent on its own. Learning in a more natural way, including unsupervised or transfer learning, is still nascent and we are a long way from reaching AI supremacy.

From this point of view, society has only just started its long journey with AI and we are all pretty much starting from the same page. To achieve the next breakthroughs in AI, we need the global community to participate and engage in open collaboration and dialogue.

If this does not happen, and happen sooner rather than later, it will be AI that is calling the shots.

All human comments appreciated. All like clicks and abuse chucked in the bin.

Contact: bobdillon33@gmail.com

https://time.com/6266923/ai-eliezer-yudkowsky-open-letter-not-enough/


THE BEADY EYE ASK’S. ON THE STATE OF THE WORLD ARE NEW WORDS NOW NEEDED THAT DEFINE THE PRESENT.

06 Tuesday Jun 2023

Posted by bobdillon33@gmail.com in Uncategorized

≈ Comments Off on THE BEADY EYE ASK’S. ON THE STATE OF THE WORLD ARE NEW WORDS NOW NEEDED THAT DEFINE THE PRESENT.

Tags

Artificial Intelligence., Capitalism and Greed, Capitalism vs. the Climate., Inequility, Technology, The Future of Mankind, Visions of the future.

At a time when the world is changing more quickly than ever before, we need a new vocabulary to help us grasp what’s happening.

I'm not sure that the words we have at present to describe our world hold any more in the world-wide 'web' of meaning we now inhabit (or are trapped in), with its exponentially increasing complexities.

Amid the whirlwind of our changing times, in which even the new language gurus cannot tell us where we’re going, there must be some universal value that can define us other than stupidity being digitalized.

Humanity is a blip in geologic history:

With social media, words just kind of disintegrate before your eyes or become a meaningless string of letters.

Take the word need, which has become a kind of fatigued sound, falling prey to semantic satiation, losing meaning for the listener, who then perceives the speech as repeated meaningless sounds.

Need is now repeated so much that it is as indistinct as the packages of generic Wal-Mart string cheese.

Take the language of politics, for example: it is becoming increasingly blurred.

Right and left, conservative and progressive, traditional and modern — these words have become so calcified that we often get lost in the labyrinth of ambiguity.

If words created the world, then words can also enrich or impoverish it, sanctify or demonize it.

Language is rich in words and meaning, but it can also become petrified while reality creatively evolves around it.

The power of words is such that they can spark a war or bring about peace. Everything begins with language.

So then, what does “artificial intelligence” actually mean (to use the latest buzzwords)?

Even the brainy scientists don’t really understand it. If so, what just happened to you is nothing new.

These days we have the capacity to look billions of years into the past, but it seems that we can't see beyond our very own noses, or hear, when it comes to the planet.

It used to be said that to name something is to begin understanding it but the veneer of linguistic facility of AI is not the same as actually comprehending human language.

AI has burst out of its academic bubble into the real world, and its lack of understanding of that world can have real and sometimes devastating consequences.

It might be possible to write down all the unwritten facts, rules and assumptions required for understanding text but not language. We let machines learn to understand language on their own, simply by ingesting vast amounts of written text and learning to predict words.
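"Learning to predict words" can be illustrated with a toy sketch (the corpus below is made up; real systems train on billions of words): a bigram model simply counts which word follows which and predicts the most frequent successor.

```python
from collections import Counter, defaultdict

# Toy illustration of learning language by predicting words: a bigram model
# counts word successors in a tiny corpus, then predicts the most common one.

corpus = "the cat sat on the mat the cat ate the fish".split()

# Map each word to a tally of the words observed immediately after it.
successors = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    successors[prev][nxt] += 1

def predict_next(word: str) -> str:
    """Return the word most often seen after `word` in the corpus."""
    return successors[word].most_common(1)[0][0]

print(predict_next("the"))  # -> cat ("cat" follows "the" twice; "mat"/"fish" once)
```

The point of the sketch is the limitation the paragraph describes: the model has statistics about word sequences, but no grounding in the world those words refer to.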

But has GPT-3, trained on text from thousands of websites, books and encyclopaedias, transcended Watson's veneer? Does it really understand the language it generates and ostensibly reasons about?

The crux of the problem, in my view, is that understanding language requires understanding the world, and a machine exposed only to language cannot gain such an understanding.

Humans rely on innate, pre-linguistic core knowledge of space, time and many other essential properties of the world in order to learn and understand language. If we want machines to similarly master human language, we will need to first endow them with the primordial principles humans are born with.

Machines that can genuinely comprehend what “it” refers to in a sentence, and everything else that understanding “it” entails.

——–

The world faces four main challenges: climate change, mistrust of leaders, increased geopolitical tension, and the dark side of the technological revolution. (Which is digitizing our imagination of our futures while plundering the finite resources of the planet for profit.)

1) Climate change is "the defining issue of our time". It represents an "existential threat" to humankind. "The planet will not be destroyed. What will be destroyed is our capacity to live on the planet."

2) People believe that the fruits of globalization are not being fairly distributed. Seven in 10 people in the world live in countries where inequality is growing.

3) Increased geopolitical tensions are further exacerbated by weaknesses in institutions. For example, the UN Security Council’s “inability to take decisions” or to enforce the ones they do take, such as the arms embargo.

4) Artificial intelligence owned by corporations is unbalancing the values that are common to us all, turning democratic societies into AI totalitarianism through mass surveillance.

Because in the age of the internet and super-connectivity, all of these things, like face recognition, have been raised to sophisticated arts (Clearview) that, instead of being forced on us, have quietly colonised our lives.

In times past, frustrating circumstances demanded new ways of expressing what it means to be alive. Here are a few for present-day use.

The internet/cyberspace is wonderful, because it gives people the freedom to augment or totally change their identities, and this is a marvellous new dawn for human expression, a new step in human evolution. Nah, it’s a false dawn, because the internet is essentially a libertarian arena, and as such an amoral one (lots of ‘freedoms’ but with no attendant social obligations); it is a new jungle where we must watch our backs and struggle for survival, surely a backward step in evolution.

  1. The term ‘hyperobject’ was coined by the academic Timothy Morton, and it refers to phenomena that are so large and so far beyond the human frame of reference that they are not susceptible to reason but to AI.
  2. Immigration. The realisation that racism never really went away, it just camouflaged its fundamental failure of empathy as tolerance – this is a contention of the Black Lives Matter movement. The term is now making the short jump to other second- (eg LGBT) and third- (eg feminism) phase civil rights movements equally lulled by the illusion of tolerance. The goal is to go beyond feeling tolerated to being fully accepted and welcomed.

3. Deletion. This word is likely to be bandied about much more frequently in the decades ahead, as social media users realise that the websites they are on are not merely neutral ‘platforms’ for ‘social interaction’ but more like a kind of flypaper to which people and all of their personal data stick. Moreover, these websites are specifically designed to be addictive –

4. Global capitalism is, by its unjust and shambolic nature, going to experience crashes of increasing severity throughout the 21st Century, leaving us all to survive with growing desperation amidst its wreckage.

5. Shadow banking. Nobody knows how large this sector is, but current estimates put shadow banking at (£124 trillion) and OTC transactions at (£412 trillion), or roughly twice and six-and-a-half times the GDP of the entire Earth, respectively. Both sectors were of course heavily involved in creating the 2008 crash, and both have remained almost unaltered since then.

6. Attention crisis. The fact that no one can take their eyes off their smartphones – James Williams writes that “the liberation of human attention may be the defining moral and political struggle of our time”. Our minds are being rewired for commercial purposes. His argument that the social contract, the idea of human rights, should be extended to cyberspace is gaining traction.

Was the creation of the internet not supposed to be the dawn of a technological and informational utopia? Even its father, Tim Berners-Lee, the inventor of the world wide web, is convinced it is failing us.

7. Post-human. It seems that history has caught up with us, for our identities now extend into cyberspace in many ways, we no longer merely rely on our brain cells but now store much of our knowledge in technological clouds that function as extensions of our minds, and we live with the corresponding hardware in such intimacy (in the form of portable devices that are linked to our minds and even metabolisms in many ways) that it sometimes feels like we are only a few steps away from being ‘cyborgs’ in the true sense of the term. Gender, though, is still a problem.

8. Masculinity. There was a time when you’d ask a man what masculinity was and his response would be something like ‘not feminine’ (pejorative) and ‘not queer’ (pejorative). Note all the negativity.

These days it is increasingly a good thing to be a woman (new, broad definition) and to be queer (new, broad definition). Both are eating away at the old territory occupied by masculinity, according to writers such as Hanna Rosin, Cordelia Fine or Grayson Perry. What’s left is something of a void, aka ‘the crisis of masculinity’.

The challenge ahead for men is to formulate what they are, and want to be, rather than what they aren’t. How to open up this frontier?

I have a suggestion. For generations feminists and queer activists have been fighting to draw attention to masculinity’s toxic side-effects. At long last, mainstream men seem on the verge of accepting that there is a problem. It remains for us all to take this a step further, and work to understand how this toxicity has also been poisoning men on the inside.

9. Generation Why? It applies to anyone born in the digital age.

To roughly clarify our terms here: Baby Boomers are the generation born after World War Two and before 1965; Generation X (Douglas Coupland) the cohort born between the mid-1960s and 1980; Generation Y (Millennials) includes people born between 1980-ish and 2000; Generation Z (Post-Millennials) is anyone born after 2000. These categories don’t really have global reach, but they are evocative as metaphors.

The gist of Smith’s argument is that Facebook and its like are reductive: they cut us down to size and reprogrammed us to suit their own ends, which are advertising and selling things – exploitation. “Five-hundred million sentient people entrapped in the recent careless thoughts of a Harvard sophomore,” she calls it.

Smith was writing a few years ago; the number of Facebook users has now passed 2 billion. Generations Y and Z have led lives saturated by the internet, by social media platforms and apps, which have claimed to make life complete and have all of the answers all of the time. Is this paraphernalia worthy of them? Are they content to be trapped in the reveries of Zuckerberg and the like? No. There are detectable tremors of disaffection and radicalisation. I suspect that as more and more post-millennials reach voting age, Generation Why may be giving us some loud answers.

10. The new weird. An emerging genre of speculative, 'post-human' writing that blurs genre boundaries and conventions, pushes humanity and human-centred reason from the centre to the margins, and generally poses questions that may not be answerable in any terms we can understand (hence the 'weird'). In the present era, where potent advertising and PR forces are doing everything in their power to make truth irrelevant and directly hack our minds, and where politicians no longer seem to acknowledge the existence of facts, the word has sinister new applications.

The COVID-19 pandemic is a tragic reminder of how deeply connected we are. There is a clear and urgent need for concrete multilateral solutions, based on common action across borders for the good of all humanity. That action must extend beyond national governments to include more participation from local authorities, civil society, business leaders and others.

How close we are to destroying our world with dangerous technologies of our own making.

No one country can tackle these problems on its own, no matter how large its population, how strong its economy or how feared its military.

Everyone sees change everywhere, and I think it’s important to figure out where are we going to be five to 10 years from now.

We’re going to see more automation. We’re going to see, unfortunately, more technological unemployment.

I don’t think they will be able to ignore the issue of inequality. We’re seeing social tensions and all sorts of frictions proliferate. The sooner we start tackling it, the better. We really need to start thinking outside of the box.

In the end it comes back to that word, need:

We need to be less wasteful. We need to economize our resources. We need to be more pro-environment in our own behaviour as consumers.

Let’s replace it with yūgen: the Japanese sense of a profound, mysterious beauty in the universe.

“We can either save our world or condemn humanity to a hellish future.”

All human comments appreciated. All like clicks and abuse chucked in the bin.

Contact: bobdillon33@gmail.com


THE BEADY EYE ASK’S : ARE OUR LIVES GOING TO BE RULED BY ALGORITHMS.

20 Saturday May 2023

Posted by bobdillon33@gmail.com in 2023 the year of disconnection., Algorithms., Artificial Intelligence., Big Data., Communication., Dehumanization., Democracy, Digital age., DIGITAL DICTATORSHIP., Digital Friendship., Disconnection., Fourth Industrial Revolution., Human Collective Stupidity., Human values., Humanity., Imagination., IS DATA DESTORYING THE WORLD?, Modern Day Democracy., Our Common Values., Purpose of life., Reality., Social Media Regulation., State of the world, Technology, Technology v Humanity, The Obvious., The state of the World., The world to day., THE WORLD YOU LIVE IN., THIS IS THE STATE OF THE WORLD.  , Tracking apps., Unanswered Questions., Universal values., We can leave a legacy worthwhile., What is shaping our world., What Needs to change in the World

≈ Comments Off on THE BEADY EYE ASK’S : ARE OUR LIVES GOING TO BE RULED BY ALGORITHMS.

Tags

Algorithms., Artificial Intelligence., The Future of Mankind, Visions of the future.

( Ten minute read) 

I am sure that, unless you have been living on another planet, it is becoming more and more obvious that the manner in which you live your life is being manipulated and influenced by technologies.

So it’s worth pausing to ask why the use of AI for algorithm-informed decision making is desirable, and hence worth our collective effort to think through and get right.

A huge amount of our lives – from what appears in our social media feeds to what route our sat-nav tells us to take – is influenced by algorithms. Email knows where to go thanks to algorithms. Smartphone apps are nothing but algorithms. Computer and video games are algorithmic storytelling.  Online dating and book-recommendation and travel websites would not function without algorithms.

Artificial intelligence (AI) is naught but algorithms.

The material people see on social media is brought to them by algorithms. In fact, everything people see and do on the web is a product of algorithms. Algorithms are also at play, with most financial transactions today accomplished by algorithms. Algorithms help gadgets respond to voice commands, recognize faces, sort photos and build and drive cars. Hacking, cyberattacks and cryptographic code-breaking exploit algorithms.

Algorithms are aimed at optimizing everything.

Self-learning and self-programming algorithms are now emerging, so it is possible that in the future algorithms will write many if not most algorithms.

Yes, they can save lives, make things easier and conquer chaos, but when it comes to both the commercial and the social world, there are many good reasons to question the use of algorithms.

Why? 

They can put too much control in the hands of corporations and governments, perpetuate bias, create filter bubbles, and cut choices, creativity and serendipity. They can exploit not just you but the very resources of our planet for short-term profit, destroy what is left of democratic societies, turn warfare into face recognition, stimulate inequality, and invade our private lives, determining our futures without any legal restrictions, transparency or recourse.

The rapid evolution of AI and AI agents embedded in systems and devices in the Internet of Things will lead to hyper-stalking, influencing and shaping of voters, and hyper-personalized ads, and will create new ways to misrepresent reality and perpetuate falsehoods.

———

As they are self-learning, the problem is who or what is creating them, who owns these algorithms, and whether there should be any controls on their usage.

Let’s ask some questions about them that need to be asked now, not later:

1) The outcomes the algorithm is intended to make possible (and whether they are ethical).

2) The algorithm’s function.

3) The algorithm’s limitations and biases.

4) The actions that will be taken to mitigate the algorithm’s limitations and biases.

5) The layer of accountability and transparency that will be put in place around it.

There is no debate about the need for algorithms in scientific research – such as discovering new drugs to tackle new or old diseases/ pandemics, space travel, etc. 

Outside of these needs, the promise of AI is that we could have evidence-based decision making in the field:

Helping frontline workers make more informed decisions in the moments when it matters most, based on an intelligent analysis of what is known to work. If used thoughtfully and with care, algorithms could provide evidence-based policymaking, but they will fail to achieve much if poor decisions are taken at the front.

However, it’s all well and good for politicians and policymakers to use evidence at a macro level when designing a policy but the real effectiveness of each public sector organisation is now the sum total of thousands of little decisions made by algorithms each and every day.

First (to repeat a point made above), with new technologies we may need to set a higher bar initially in order to build confidence and test the real risks and benefits before we adopt a more relaxed approach. Put simply, we need time to see in what ways using AI is, in fact, the same or different to traditional decision making processes.

The second concerns accountability. For reasons that may not be entirely rational, we tend to prefer a human-made decision. The process that a person follows in their head may be flawed and biased, but we feel we have a point of accountability and recourse which does not exist (at least not automatically) with a machine.

The third is that some forms of algorithmic decision making could end up being truly game-changing in terms of the complexity of the decision making process. Just as some financial analysts eventually failed to understand the CDOs they had collectively created before 2008, it might be too hard to trace back how a given decision was reached when unlimited amounts of data contribute to its output.

The fourth is the potential scale at which decisions could be deployed. One of the chief benefits of technology is its ability to roll out solutions at massive scale. By the same trait it can also cause damage at scale.

In all of this it’s important to remember that progress isn’t guaranteed. Transformational progress on a global scale normally takes time, generations even, to achieve. Yet we pulled it off in less than a decade, spent another decade pushing the limits of what was possible with a computer and an Internet connection and, unfortunately, we are now running into limits pretty quickly.

No one wants to accept that the incredible technological ride we’ve enjoyed for the past half-century is coming to an end, but unless algorithms are found that can provide a shortcut around this rate of growth, we have to look beyond the classical computer if we are to maintain our current pace of technological progress.

A silicon computer chip is a physical material, so it is governed by the laws of physics, chemistry, and engineering.

After miniaturizing the transistor on an integrated circuit to a nanoscopic scale, transistors just can’t keep getting smaller every two years. With billions of electronic components etched into a solid, square wafer of silicon no more than 2 inches wide, you could count the number of atoms that make up the individual transistors.

So the era of classical computing is coming to an end, with scientists anticipating the arrival of quantum computing and designing ambitious quantum algorithms that tackle maths’ greatest challenges: an algorithm for everything.

———–

Algorithms may be deployed without any human oversight, leading to actions that could cause harm and which lack any accountability.

The issues the public sector deals with tend to be messy and complicated, requiring ethical judgements as well as quantitative assessments. Those decisions in turn can have significant impacts on individuals’ lives. We should therefore primarily be aiming for intelligent use of algorithm-informed decision making by humans.

If we are to have a ‘human in the loop’, it’s not ok for the public sector to become littered with algorithmic black boxes whose operations are essentially unknowable to those expected to use them.

As with all ‘smart’ new technologies, we need to ensure algorithmic decision making tools are not deployed in dumb processes, or create any expectation that we diminish the professionalism with which they are used.

Used well, algorithms could help remove or reduce the impact of the flaws in human decision making.


So where are we?

At the moment modern algorithms are some of the most important solutions to problems currently powering the world’s most widely used systems.

Here are a few. They form the foundation on which data structures and more advanced algorithms are built.

Google’s PageRank algorithm is a great place to start, since it helped turn Google into the internet giant it is today.

The PageRank algorithm so thoroughly established Google’s dominance as the only search engine that mattered that the word Google officially became a verb less than eight years after the company was founded. Even though PageRank is now only one of about 200 measures Google uses to rank a web page for a given query, this algorithm is still an essential driving force behind its search engine.
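At its core, PageRank treats a page’s importance as the chance that a random surfer ends up on it. A minimal power-iteration sketch over an invented three-page link graph (the graph, damping factor and iteration count here are illustrative, nothing like Google’s production values):

```python
# A toy PageRank by power iteration, assuming a tiny hand-made link graph.
def pagerank(links, damping=0.85, iterations=50):
    """links: dict mapping each page to the list of pages it links to."""
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}           # start with equal rank
    for _ in range(iterations):
        # every page keeps a small "teleport" share, then receives
        # a damped share of rank from each page linking to it
        new_rank = {p: (1.0 - damping) / n for p in pages}
        for page, outgoing in links.items():
            share = damping * rank[page] / len(outgoing)
            for target in outgoing:
                new_rank[target] += share
        rank = new_rank
    return rank

links = {"a": ["b", "c"], "b": ["c"], "c": ["a"]}
ranks = pagerank(links)
# "c" is linked to by both "a" and "b", so it ends up ranked highest.
```

Ranks always sum to 1, so they can be read directly as the stationary probabilities of the random-surfer model.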

The Key Exchange Encryption algorithm does the seemingly impossible: it lets two parties who have never met agree on a shared secret over a public channel.
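The best-known example of this idea is Diffie–Hellman key exchange. A toy sketch with deliberately tiny numbers (real deployments use primes thousands of bits long):

```python
# Toy Diffie-Hellman key exchange. The prime, generator and secrets
# below are illustrative; never use numbers this small in practice.
p, g = 23, 5                    # public prime modulus and generator

alice_secret = 6                # known only to Alice, never transmitted
bob_secret = 15                 # known only to Bob, never transmitted

alice_public = pow(g, alice_secret, p)   # sent over the open network
bob_public = pow(g, bob_secret, p)       # sent over the open network

# Each side combines the other's public value with its own secret.
alice_shared = pow(bob_public, alice_secret, p)
bob_shared = pow(alice_public, bob_secret, p)

assert alice_shared == bob_shared   # same secret, never sent directly
```

An eavesdropper sees only `p`, `g` and the two public values; recovering the secret from those is the discrete-logarithm problem, which is what makes the scheme hard to break at real key sizes.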

Backpropagation through a neural network is one of the most important algorithms invented in the last 50 years.

Neural networks operate by feeding input data into a network of nodes, each connected to the next layer of nodes, with a weight on each connection that determines whether the information received is passed through to the next layer. When the information has passed through the various so-called “hidden” layers of the network and reaches the output layer, the result is usually a set of different choices about what the neural network believes the input was. If it was fed an image of a dog, it might have the options dog, cat, mouse and human infant, with a probability for each; the highest probability is chosen as the answer.

Without backpropagation, deep-learning neural networks wouldn’t work, and without these neural networks, we wouldn’t have the rapid advances in artificial intelligence that we’ve seen in the last decade.
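The forward-then-backward flow described above can be sketched with a tiny one-hidden-layer network trained on XOR by hand-coded backpropagation. Layer size, learning rate and iteration count are arbitrary illustrative choices, and a run can occasionally settle in a local minimum rather than fully learning XOR:

```python
# Minimal backpropagation: 2 inputs -> 3 hidden sigmoid units -> 1 output.
import math, random

random.seed(1)

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

H = 3  # hidden units (illustrative choice)
w_h = [[random.uniform(-1, 1) for _ in range(2)] for _ in range(H)]
b_h = [0.0] * H
w_o = [random.uniform(-1, 1) for _ in range(H)]
b_o = 0.0
data = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 0)]  # XOR

def forward(x):
    h = [sigmoid(sum(w * xi for w, xi in zip(w_h[j], x)) + b_h[j])
         for j in range(H)]
    out = sigmoid(sum(w * hj for w, hj in zip(w_o, h)) + b_o)
    return h, out

def total_error():
    return sum((forward(x)[1] - t) ** 2 for x, t in data)

before = total_error()
for _ in range(5000):
    for x, target in data:
        h, out = forward(x)
        # backward pass: gradient at the output, then pushed back a layer
        d_out = (out - target) * out * (1 - out)
        for j in range(H):
            d_h = d_out * w_o[j] * h[j] * (1 - h[j])  # uses pre-update w_o
            w_o[j] -= 0.5 * d_out * h[j]
            for i in range(2):
                w_h[j][i] -= 0.5 * d_h * x[i]
            b_h[j] -= 0.5 * d_h
        b_o -= 0.5 * d_out
after = total_error()
```

The backward pass is the whole trick: the output error is multiplied by each layer’s weights and activation derivatives to assign blame to weights deeper in the network, which is exactly what made multi-layer networks trainable.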

The Distance-Vector Routing Protocol Algorithm (DVRPA) and the Link-State Routing Protocol Algorithm (LSRPA) are two of the most essential algorithms we use every day, efficiently routing data traffic between the billions of connected networks that make up the Internet.
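The distance-vector idea can be sketched in the Bellman–Ford style: every router repeatedly refines its cost table from its neighbours’ tables until nothing changes. The topology below is invented for illustration:

```python
# Distance-vector routing sketch over a made-up four-router network.
INF = float("inf")
links = {  # symmetric link costs between routers
    ("a", "b"): 1, ("b", "c"): 2, ("a", "c"): 5, ("c", "d"): 1,
}
nodes = {n for pair in links for n in pair}
neighbours = {n: {} for n in nodes}
for (u, v), cost in links.items():
    neighbours[u][v] = cost
    neighbours[v][u] = cost

# dist[u][v] = u's current best-known cost to reach v
dist = {u: {v: (0 if u == v else INF) for v in nodes} for u in nodes}
for u in nodes:
    for v, cost in neighbours[u].items():
        dist[u][v] = cost

changed = True
while changed:  # keep "exchanging tables" until they stabilise
    changed = False
    for u in nodes:
        for v, cost in neighbours[u].items():
            for dest in nodes:
                if cost + dist[v][dest] < dist[u][dest]:
                    dist[u][dest] = cost + dist[v][dest]
                    changed = True
```

Here router `a` learns that the cheapest way to `d` is via `b` and `c` (cost 1 + 2 + 1 = 4) rather than over its direct, expensive link to `c`; link-state protocols reach the same answers by flooding the full topology and running a shortest-path algorithm at each router instead.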

Compression is everywhere, and it is essential to the efficient transmission and storage of information.
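A quick illustration using Python’s built-in zlib module: repetitive data shrinks dramatically, and decompression restores it byte for byte.

```python
# Lossless compression round-trip with the standard-library zlib module.
import zlib

original = b"the quick brown fox " * 200   # highly repetitive input
compressed = zlib.compress(original)

assert zlib.decompress(compressed) == original   # nothing is lost
assert len(compressed) < len(original) // 10     # big size reduction
```

The size win comes from the repetition: DEFLATE replaces repeated byte runs with short back-references, which is why random data barely compresses at all.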

Key exchange, in turn, is made possible by establishing a single, shared mathematical secret between two parties who don’t even know each other, which is used to encrypt as well as decrypt data, all over a public network and without anyone else being able to figure out the secret.

Searches and Sorts are a special form of algorithm in that there are many very different techniques used to sort a data set or to search for a specific value within one, and no single one is better than another all of the time. The quicksort algorithm might be better than the merge sort algorithm if memory is a factor, but if memory is not an issue, merge sort can sometimes be faster.
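Both sides of that trade-off are short enough to sketch: a Lomuto-partition quicksort that sorts in place, and a top-down merge sort that allocates extra lists but behaves predictably. (Production libraries use more heavily tuned variants.)

```python
# Quicksort: sorts the list in place, using little extra memory.
def quicksort(a, lo=0, hi=None):
    if hi is None:
        hi = len(a) - 1
    if lo < hi:
        pivot = a[hi]                      # Lomuto partition scheme
        i = lo
        for j in range(lo, hi):
            if a[j] <= pivot:
                a[i], a[j] = a[j], a[i]
                i += 1
        a[i], a[hi] = a[hi], a[i]
        quicksort(a, lo, i - 1)
        quicksort(a, i + 1, hi)
    return a

# Merge sort: returns a new sorted list, trading memory for
# guaranteed O(n log n) behaviour on every input.
def mergesort(a):
    if len(a) <= 1:
        return a
    mid = len(a) // 2
    left, right = mergesort(a[:mid]), mergesort(a[mid:])
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    return merged + left[i:] + right[j:]
```

Quicksort’s worst case is O(n²) on unlucky pivots, while merge sort never degrades; that is exactly the “no single one is better all of the time” point.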

Dijkstra’s Shortest Path is one of the most widely used algorithms in the world. In the 20 minutes it took Dijkstra to conceive it in 1959, he enabled everything from GPS routing on our phones to signal routing through telecommunication networks, and any number of time-sensitive logistics challenges like shipping a package across the country. As a search algorithm, Dijkstra’s Shortest Path stands out from the others just for the enormity of the technology that relies on it.
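A compact sketch of Dijkstra’s algorithm with a priority queue, run over a made-up little road network whose names and weights are purely illustrative:

```python
# Dijkstra's Shortest Path using the standard-library heapq module.
import heapq

def dijkstra(graph, start):
    """graph: dict of node -> list of (neighbour, weight) edges."""
    dist = {start: 0}
    heap = [(0, start)]
    while heap:
        d, node = heapq.heappop(heap)       # cheapest unsettled node
        if d > dist.get(node, float("inf")):
            continue                        # stale entry, skip it
        for neighbour, weight in graph.get(node, []):
            new_d = d + weight
            if new_d < dist.get(neighbour, float("inf")):
                dist[neighbour] = new_d     # found a shorter route
                heapq.heappush(heap, (new_d, neighbour))
    return dist

roads = {  # invented road network; weights are distances
    "home": [("junction", 2), ("ring_road", 7)],
    "junction": [("ring_road", 3), ("depot", 6)],
    "ring_road": [("depot", 2)],
}
shortest = dijkstra(roads, "home")
# home -> junction -> ring_road -> depot costs 2 + 3 + 2 = 7,
# beating both the direct ring_road route and junction -> depot.
```

The greedy step is safe because edge weights are non-negative: once a node is popped with its smallest distance, no later route can undercut it.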

——–

At the moment there are relatively few instances where algorithms should be deployed without any human oversight or ability to intervene before the action resulting from the algorithm is initiated.

The assumptions on which an algorithm is based may be broadly correct, but in areas of any complexity (and which public sector contexts aren’t complex?) they will at best be incomplete.

Why?

Because the code of algorithms may be unviewable in systems that are proprietary or outsourced.

Even if viewable, the code may be essentially uncheckable if it’s highly complex; where the code continuously changes based on live data; or where the use of neural networks means that there is no single ‘point of decision making’ to view.

Virtually all algorithms contain some limitations and biases, based on the limitations and biases of the data on which they are trained.

 Though there is currently much debate about the biases and limitations of artificial intelligence, there are well known biases and limitations in human reasoning, too. The entire field of behavioural science exists precisely because humans are not perfectly rational creatures but have predictable biases in their thinking.

Some are calling this the Age of Algorithms and predicting that the future of algorithms is tied to machine learning and deep learning, which will get better and better at an ever-faster pace. Whatever is on the other side of the classical/post-classical divide is likely to be far more massive than it looks from over here, and any prediction about what we’ll find once we pass through it is as good as anyone else’s.

It is entirely possible that before we see any of this, humanity will end up bombing itself into a new dark age that takes thousands of years to recover from.

The entire field of theoretical computer science is all about trying to find the most efficient algorithm for a given problem. The essential job of a theoretical computer scientist is to find efficient algorithms for problems and the most difficult of these problems aren’t just academic; they are at the very core of some of the most challenging real world scenarios that play out every day.

Quantum computing is a subject that a lot of people, myself included, have gotten wrong in the past and there are those who caution against putting too much faith in a quantum computer’s ability to free us from the computational dead end we’re stuck in.

The most critical of these is the problem of optimization:

How do we find the best solution to a problem when we have a seemingly infinite number of possible solutions?

While it can be fun to speculate about specific advances, what will ultimately matter much more than any one advance will be the synergies produced by these different advances working together.

Synergies are famously greater than the sum of their parts, but what does that mean when your parts are blockchain, 5G networks, quantum computers, and advanced artificial intelligence?

DNA computing, meanwhile, harnesses the ability of nucleotide molecules to build and assemble themselves into long strands of DNA.

That is why we can say quantum computing won’t just be transformative: humanity is genuinely approaching nothing short of a technological event horizon.

Quantum computers will only give you a single output, either a value or a resulting quantum state, so their utility solving problems with exponential or factorial time complexity will depend entirely on the algorithm used.

One inefficient algorithm could have kneecapped the Internet before it really got going.

It is now obvious that there is no going back.

The question now is whether there is any way of curtailing their power.

This can now only be achieved with the creation of an open-source platform where the users control their data rather than having it used and mined. (Users could sell their data if they want.)

This platform must be owned by the public, and compete against the existing platforms like Facebook, Twitter, WhatsApp, etc., protected by an algorithm that guards the common value of all our lives: the truth.

Of course, it could be designed using existing algorithms, but that would defeat its purpose.

It would be an open network of people, a kind of planetary mind, that would always be funding biosphere-friendly activities.

A safe harbour, perhaps called New Horizon: a digital United Nations where the voices of cooperation could be heard.

So if by any chance there is a genius designer out there who could make such a platform, they might change the future of all our digitalised lives for the better.

All human comments appreciated. All like clicks and abuse chucked in the bin.

Contact: bobdillon33@gmail.com  

 

 

