
THE BEADY EYE SAYS: AI IS NOT LIKE ANY PREVIOUS TECHNOLOGY. IT'S GOING TO NEED GLOBAL ACTION TO HARNESS THE EFFECTS IT IS ALREADY HAVING, NEVER MIND THOSE TO COME.

05 Sunday May 2024

Posted by bobdillon33@gmail.com in Uncategorized


Tags

AI, Artificial Intelligence., Machine learning., robotics, Technology

(Twenty-two minute read)

Let’s be honest:

Humanity is not approaching this issue with remotely the level of seriousness it requires.

Given the speed of development in the field, it’s long past time to move beyond a reactive mode, one where we only address AI’s downsides once they’re clear and present.

We can't think only about today's systems; we must think about where the entire enterprise is headed.

While we continue to talk in vague terms about the potential economic or scientific benefits of AI, we perpetuate historical patterns of technological advancement at the expense of not just vulnerable people but all of us.

When we dismiss these harms as an inevitable by-product of technological progress, we turn a blind eye to the ethical conditions under which powerful AI systems are developed and deployed.

The rapid pace of progress is feeding on itself. Creating something smarter than us, something that may have the ability to deceive and mislead us, and then just hoping it doesn't want to hurt us, is a terrible plan.

————-

We humans have already wiped out a significant fraction of all the species on Earth.

SHOULD WE BE WORRIED THAT, WITH AI, WE ARE NOW ON THE PATHWAY TO EXTERMINATION?

That is what you should expect to happen as a less intelligent species – which is what we are likely to become, given the rate of progress of artificial intelligence.

AI is probably the most important thing humanity has ever worked on.

It's not simply what AI can do but where it is going that will be key to managing the fear of AI that permeates society. Gargantuan amounts of data are at this very moment being harvested so machine learning can accomplish tasks that had previously been accomplished only by humans.

With deep learning, improving systems doesn’t necessarily involve or require understanding what they’re doing. If anything, as the systems get bigger, interpretability — the work of understanding what’s going on inside AI models, and making sure they’re pursuing our goals rather than their own — gets harder.
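
To make that concrete, here is a minimal sketch in Python of what "looking inside" a model actually yields (assuming the PyTorch library is available; the tiny network and the hook are hypothetical illustrations, not any production system):

```python
import torch
import torch.nn as nn

# A deliberately tiny network; frontier models have billions of weights.
model = nn.Sequential(
    nn.Linear(16, 64),   # input layer
    nn.ReLU(),           # hidden non-linearity
    nn.Linear(64, 2),    # two output classes
)

activations = {}

def capture(name):
    # A forward hook records what a layer outputs during inference.
    def hook(module, inputs, output):
        activations[name] = output.detach()
    return hook

model[1].register_forward_hook(capture("hidden"))

x = torch.randn(1, 16)        # one arbitrary input
prediction = model(x)

print(activations["hidden"])  # 64 raw numbers: visible, but not meaningful
print(prediction)             # the decision those numbers somehow produced
```

The point of the sketch is the gap it exposes: the internals are fully observable as numbers, yet nothing in them says why the model preferred one output, and that gap only widens as models grow.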

We should be clear about what these conversations do and don’t demonstrate.

In a world increasingly dominated by AI-powered tools that can mimic human natural language abilities, what does it mean to be truthful and authentic?

Take ChatGPT: used by millions around the globe, it churns out human-sounding answers to requests ranging from the practical to the surreal. Many of those millions have no training or education in when it is ethical to use these systems or how to ensure they are not causing harm.

Even if you don’t use AI-generated responses, they influence how you think. 

It has drafted cover letters, composed lines of poetry, pretended to be William Shakespeare, crafted messages for dating app users to woo matches, and even written news articles, all with varying results.

Bots now sound so real that it has become impossible for people to distinguish between humans and machines in conversations, which poses huge risks for manipulation and deception at mass scale.

What does it mean for a machine to be deceptive?

Is it evil and plotting to kill us? Not as such. Rather, the AI model is responding to my command and playing, quite well, the role of a system that's evil and plotting to kill us.

If the system doesn't have that intent, is it deceptive? Or does the deception belong to the person asking the questions and prompting the system to be deceptive? I don't know.

There are more questions than answers at this point.

The fact that these technologies are limited at the moment is no reason to be reassured.

AI has the potential to transform and exacerbate the problem of misinformation, so we need to start working on solutions now.

—————

The trajectory we are on is one where we will make these systems more powerful and more capable.

A new tool called Copilot uses machine learning to predict and complete lines of computer code, bringing the possibility of an AI system that could write itself one step closer. DeepMind's AlphaFold system uses AI to predict the 3D structure of just about every protein in existence.
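
At their core, tools of this kind are statistical next-token predictors trained on huge code corpora. As a rough illustration of the idea only (not Copilot's actual method; the four-line corpus is invented for the example), here is a toy bigram completer in Python:

```python
from collections import Counter, defaultdict

# A toy stand-in for a training corpus; real systems see billions of lines.
corpus = [
    "for i in range ( n ) :",
    "for x in items :",
    "if x > 0 :",
    "return x + 1",
]

# Count how often each token follows each other token (a bigram model).
follows = defaultdict(Counter)
for line in corpus:
    tokens = line.split()
    for prev, nxt in zip(tokens, tokens[1:]):
        follows[prev][nxt] += 1

def complete(prefix, max_tokens=6):
    """Greedily extend a prefix with the most frequently seen next token."""
    tokens = prefix.split()
    for _ in range(max_tokens):
        candidates = follows.get(tokens[-1])
        if not candidates:
            break
        tokens.append(candidates.most_common(1)[0][0])
    return " ".join(tokens)

print(complete("for"))  # reproduces the most common "for ..." pattern seen
```

Scale the corpus up by nine orders of magnitude and replace the bigram counts with a large neural network, and you have the family of systems described above.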

We need to design systems whose internals we understand and whose goals we are able to shape to be safe ones. However, we currently don’t understand the systems we’re building well enough to know if we’ve designed them safely before it’s too late. 

Right now, the state of the safety field is far behind the soaring investment in making AI systems more powerful, more capable, and more dangerous.

These harms are playing out every day, with powerful algorithmic technology being used to mediate our relationships between one another and between ourselves and our institutions and environment.

Part of the reason is that systems designed this way generalize, meaning they can do things outside what they were trained to do.

These questions around authenticity, deception, and trust are going to be incredibly important, and we need a lot more research to help us understand how AI will influence how we interact with other humans.

If you have machines that control the planet, and they are interested in doing a lot of computation and they want to scale up their computing infrastructure, it’s natural that they would want to use our land for that.

If you believe there is even a small chance of that happening, now is the time to use the power of your mobile phone to demand responsible, transparent AI and to rein in profit-seeking algorithms.

Each day, we hear about countless instances of greed, hatred, violence, and destruction, and all of the pain, suffering, and sorrow that ensues, while we remain deaf to what is really happening in the world of technology.

With the never-ending list of atrocities, it may seem fruitless to try to identify a single contributing factor to all of society's collective dilemmas, but it is becoming more and more apparent that AI in the hands of a few global mega companies is a recipe for DISASTER.

—————-  

Ever since humans picked up a rock and hurled it at another human or an animal, technology has been shaping the world for yonks, unfortunately both for good and bad.

DOWN THE CENTURIES, ALL OF THESE ADVANCES WERE INCAPABLE OF EFFECTING CHANGE WITHOUT HUMAN ASSISTANCE AND HUMAN DECISIONS. Not any longer.

The AI technology we are witnessing today is the first to make decisions without human supervision, so the future doesn't look so bright in terms of keeping the planet at peace. It will lead to a brainwashed society with no values and no real purpose to evolve, other than being herded by an AI sheepdog into its predictions.

——————–

AI’s impact in the next five years?

Human life will speed up, behaviours will change and industries will be transformed — and that’s what can be predicted with certainty. AI will rattle society at large.

A threshold will be crossed.

Thinking machines will have left the realm of sci-fi and entered the real world with Human-AI teaming. 

 We can already see this happening voluntarily in use cases such as algorithmic trading in the finance industry, outpacing the quickest human brains by many orders of magnitude.

Society will also see its ethical commitments tested by powerful AI systems, especially privacy.

As the cost of peering deeply into our personal data drops and more powerful algorithms capable of assessing massive amounts of data become more widespread, we will probably find that it was a technological barrier more than an ethical commitment that led society to enshrine privacy.

AI technologies are being empowered to code themselves through new generative AI capabilities while simultaneously having less human oversight.

We all must slow down and take steps to bring about more trustworthy technology, but we won’t be able to build trustworthy AI systems unless we know what trustworthy AI means to us.

It is imperative that all AI describe its purpose, rationale and decision-making process in a way that the average person can understand. In other words: fairness, accountability and transparency; algorithmic accountability.

AI is the bedrock of world-impacting systems.

———————-

At the micro level, AI affects individuals in everything from landing a job to retirement planning, securing home loans, job and driver safety, health diagnoses and treatment coverage, arrests and police treatment, political propaganda and fake news, conspiracy theories, and even our children’s mental health and online safety.

At present we lack proper insight into how AI makes its decisions. Developers should pay close attention to the training data to ensure it doesn't carry any bias, and should state where the information came from.

If the data is biased, then developers should explore what can be done to mitigate it. In addition, any irrelevant data should be excluded from training. 

——————-

 The list of “screwed up” things is a bit overwhelming to comprehend, because there are so many problems affecting so many different people, places, and things.

When you hear politicians speak, they always mention things like health care, transportation, city infrastructure, human rights and free markets. Even though these things are important, they don't set a path for others to follow in the long term.

In all of these instances, both today and throughout history, the underlying reason one group of people has chosen to exploit, oppress, and harm another group of people has been an exaggerated emphasis on their differences rather than on what is common to us all: life.

 Shouldn’t there be a greater purpose?

What we have are governments focusing on quick fixes and band-aid solutions which don’t address the real problems we’re experiencing as a species.

In an automated world where fun and feeling good are a click away, people can hardly focus on one task.

We grow up demanding to feel good all the time, careless of everything else.

A well-defined message which all presidents and governments pass along to their nations and heirs, with the intention of making the world a better place to live in, is now needed more than ever as a self-propagating chain reaction, if we are to avoid a despotic future of climate change, AI and wars.

Social media is feeding our false self.  Our phones are our best friends. It’s tragic.

People grow up hating education and never building a habit of learning, creating instead a false self through filtered images and phony statuses, and eventually they start believing in their own shit more than they should.

Unfortunately, their real self remains weak and lacks the qualities it actually needs to handle the hurdles of life.

We are already losing the ability to interact with one another. This is honestly the next step in the evolution of humans, and it is absolutely terrifying.

I’d say that it’s not the world that’s fucked up, it’s people who are fucked up. People have become so materialistic, impatient, self-centred, and greedy that they are easy prey for exploitation. 

Fortunately, there is a way out.

Humanity has the potential to change, but only with a conscious collective effort.

If you want to make a change, start caring more about others.

Google it.  They know everything.

Will the world get a grip?

For humanity to grab on to life and live it to the fullest we must demand transparency when it comes to technologies such as Algorithms.

So now ask yourself: do you want to become a product or service, or live your life with your own identity?

Ask yourself: do you want to "meander" through life, wandering aimlessly, as the term is commonly (mis)understood to mean to this very day?

Teenagers aren’t stupid. They can sense that what’s being taught in school is hardly something they can later use in real life. Not like us, the generation that can’t find the grocery store without using the navigation on their smartphone.

No matter what you’re doing everything is more complicated than you think.

You only see a tenth of what is true. There are a million little strings attached to every choice you make; you can destroy your life every time you choose.

Governments’ plans to limit climate change to internationally agreed safer levels will currently not limit global warming enough. Governments must not only agree what stronger climate actions will be taken but also start showing exactly how to deliver the changes. 

——————

We all get sucked into the day-to-day, lose focus, or just get bored.

We’ve got to remove that bolt, so get a grip on the wrench and turn it as hard as you can!

Don’t be fooled.

AI’s impact in the next five years?

Human life will speed up, behaviours will change and industries will be transformed — and that’s what can be predicted with certainty.

Significant AI advances have only just started to rattle society at large.

Governments will be compelled to implement AI in their decision-making processes and in their public- and consumer-facing activities. AI will allow these organizations to make most decisions much more quickly. As a result, we will all feel life speeding up.

Society will also see its ethical commitments tested by powerful AI systems, especially privacy.

Presently all across the planet, governments at every level, local to national to transnational, are seeking to regulate the deployment of AI.

But dramatic depictions of artificial intelligence as an existential threat to humans are buried deep in our collective psyche.

Arguably the most realistic form of this AI anxiety is a fear of human societies losing control to AI-enabled systems. We can already see this happening voluntarily in use cases such as algorithmic trading in the finance industry. The whole point of such implementations is to exploit the capacities of synthetic minds to operate at speeds that outpace the quickest human brains by many orders of magnitude.

The more likely long-term risk of AI anxiety in the present is missed opportunities.

To the extent that organizations in this moment might take these claims seriously and underinvest based on those fears, human societies will miss out on significant efficiency gains, potential innovations that flow from human-AI teaming, and possibly even new forms of technological innovation, scientific knowledge production and other modes of societal innovation that powerful AI systems can indirectly catalyse.

While Western eyes are fixed on Tehran, Tel Aviv and Ukraine's frontlines, unless we get a grip fast, we will not be going anywhere.

So AI is scary and poses huge risks.

But what makes it different from other powerful, emerging technologies like biotechnology, which could trigger terrible pandemics, or nuclear weapons, which could destroy the world?

No one holds the secret to our ultimate destiny.

AI is dangerous precisely because the day could come when it is no longer in our control at all.

As Alan Turing put it back in 1951: "Let us now assume, for the sake of argument, that these machines are a genuine possibility, and look at the consequences of constructing them. … There would be plenty to do in trying, say, to keep one's intelligence up to the standard set by the machines, for it seems probable that once the machine thinking method had started, it would not take long to outstrip our feeble powers. … At some stage therefore we should have to expect the machines to take control."

I think it’s going to be the most beneficial thing ever to humanity, things like curing diseases, helping with climate, all of this stuff. But it’s a dual-use technology — it depends on how, as a society, we decide to deploy it — and what we use it for.

It’s worth pausing on that for a moment. Nearly half of the smartest people working on AI believe there is a 1 in 10 chance or greater that their life’s work could end up contributing to the annihilation of humanity.

As the potential of AI grows, the perils are becoming much harder to ignore.

AI safety used to face the difficulty of being a research field about a far-off problem. NOT ANY MORE. The challenge is here, and it's just not clear if we'll solve it in time.

You may face unexpected challenges. We all do. Changing your mindset won’t guarantee that everything will be okay. But it will give you the insight and strength to believe that you will be okay and that you can handle what life dishes up.

But I guarantee if you don't do anything you will regret it, and you will wake up one day wondering where your life went and how you got to the place you are. As AI evolves, the consequences for the economy, national security, and other vital parts of our lives will be enormous, and as yet unforeseen legal, ethical, and cultural questions will arise across all kinds of military, medical, educational, and manufacturing uses.

OpenAI, Google, Microsoft, and Anthropic are not constrained by guardrails, and their financial incentives are not aligned with human values. AI-enabled wars are already happening, combined with climate change.

Believe me, it's an unsolved problem. It is mistakenly believed that the inability to gain access to vast datasets is all that has kept AI out of the hands of everyone but a few companies.

In a world full of false material that’s promulgated by AI, there will be lots of AI that can detect the false stuff. We will start to build economies around the whack-a-mole problem of the Good Guys AI staying slightly ahead of the Bad Guys most of the time — but not always.

And some people will make some real money doing this.

All human comments appreciated. All like clicks and abuse chucked in the bin.

Contact: bobdillon33@gmail.com


THE BEADY EYE LOOKS AT: THE FIRST TRANSCRIPT OF A MURDER TRIAL CONCERNING A ROBOT WHO KILLED A HUMAN.

08 Monday Jan 2024

Posted by bobdillon33@gmail.com in #whatif.com, Algorithms., Artificial Intelligence., Murders, Robot citizenship., Robotic murderer


Tags

AI, Algorithms., Artificial Intelligence., robotics, Robots., Technology, The Future of Mankind, Visions of the future.

(Twenty-five minute read)

On 25 January 1979, Robert Williams (USA) was struck in the head and killed by the arm of a 1-ton production-line robot in a Ford Motor Company casting plant in Flat Rock, Michigan, USA, becoming the first fatal casualty of a robot. The robot was part of a parts-retrieval system that moved material from one part of the factory to another.

Uber and Tesla have made the news with reports of their autonomous and self-driving cars, respectively, getting into accidents and killing passengers or striking pedestrians.

These deaths, however, were completely unintentional, but they give us a glimpse into the world we might inherit, or at least into how we are conceiving potential futures for ourselves.

By 2040, there is even a suggestion that sophisticated robots will be committing a good chunk of all the crime in the world. At the heart of this debate is whether an AI system could be held criminally liable for its actions.

Where there's blame, there's a claim. But who do we blame when a robot does wrong?

Among the many things that must now be considered is what role and function the law will play.

So if an advanced autonomous machine commits a crime of its own accord, how should it be treated by the law? How would a lawyer go about demonstrating the "guilty mind" of a non-human? Can this be done by referring to and adapting existing legal principles?

An AI program could be held to be an innocent agent, with either the software programmer or the user being held to be the perpetrator-via-another.

We must confront the fact that autonomous technology with the capacity to cause harm is already around.

Whether it’s a military drone with a full payload, a law enforcement robot exploding to kill a dangerous suspect or something altogether more innocent that causes harm through accident, error, oversight, or good ol’ fashioned stupidity.

None of these deaths are caused by the will of the robot.

Sophisticated algorithms are both predicting and helping to solve crimes committed by humans; predicting the outcome of court cases and human rights trials; and helping to do the work done by lawyers in those cases.

The greater existential threat is where a gap exists between what a programmer tells a machine to do and what the programmer really meant to happen. The discrepancy between the two becomes more consequential as the computer becomes more intelligent and autonomous.

How do you communicate your values to an intelligent system such that the actions it takes fulfill your true intentions?

The greater threat is scientists purposefully designing robots that can kill human targets without human intervention for military purposes.

That's why AI and robotics researchers around the world published an open letter calling for a worldwide ban on such technology. And that's why the United Nations in 2018 discussed if and how to regulate so-called "killer robots".

Though these robots wouldn’t need to develop a will of their own to kill, they could be programmed to do it. Neural nets use machine learning, in which they train themselves on how to figure things out, and our puny meat brains can’t see the process.

The big problem is that even the computer scientists who program the networks can't really watch what's going on with the nodes, which has made it tough to sort out how computers actually make their decisions. Consider the assumption that a system with human-like intelligence must also have human-like desires, e.g., to survive, be free, have dignity, etc.

There’s absolutely no reason why this would be the case, as such a system will only have whatever desires we give it.

If an AI system can be criminally liable, what defense might it use?

For example:  The machine had been infected with malware that was responsible for the crime.

The program was responsible and had then wiped itself from the computer before it was forensically analyzed.

So can robots commit crime? In short: Yes.

If a robot kills someone, then it has committed a crime (actus reus), but technically only half a crime, as it would be far harder to determine mens rea.

How do we know the robot intended to do what it did? Could we simply cross-examine the AI like we do a human defendant?

Then a crucial question will be whether an AI system is a service or a product.

One thing is for sure: In the coming years, there is likely to be some fun to be had with all this by the lawyers—or the AI systems that replace them.

How would we go about proving an autonomous machine was justified in killing a human in self-defence or the extent of premeditation?

Even if you solve these legal issues, you are still left with the question of punishment.

In such a situation, however, the robot might commit a criminal act that cannot be prevented, and intervening when no crime was foreseeable would undermine the advantages of having the technology.

What's a 30-year jail stretch to an autonomous machine that does not age, grow infirm or miss its loved ones? Nothing. Robots cannot be punished.

LET'S LOOK AT THE HYPOTHETICAL TRIAL.

CASE NO 0.

PRESIDING JUDGES: QUANTUM AI SUPREME COMPUTER, JUDGE NO XY; JUDGE HAROLD WISE, HUMAN/UN JUDGE; JAMES SORE, HUMAN RIGHTS JUDGE.

PROSECUTOR: DATA POLICE OFFICER, CONTROLLED BY INTERNATIONAL HUMANITARIAN LAW.

DEFENSE WITNESSES: THE TECHNOLOGY COMPANIES MICROSOFT, APPLE, FACEBOOK, TWITTER, INSTAGRAM, SOCIAL MEDIA, YOUTUBE, GOOGLE, TIKTOK.

JURY: 8 MEMBERS OF THE VIRTUAL REALITY METAVERSE, 2 APPLE DATA COLLECTION ADVISERS, AND 1,000 SMARTPHONE HOLDERS REPRESENTING WORLD RELIGIONS AND HUMAN RIGHTS.

THE COURT: Bodily pleas, Seventeenth Anatomical Circuit Court.

“All rise.”

Would the accused identify itself to the court.

I am X 1037, known to my owner by my human name, TODO.

Conceived on 9 April 2027 at Renix Development/Cloning Inc., California, I am programmed to be self-learning, with all human history and all human legality.

In order to qualify as a robot, I have electronic chips covering Global Positioning System (GPS) and face recognition. I have my own social media accounts on Twitter, Facebook and Instagram. I am an important symbol of the trust relationship with humans. I cannot feel pain, happiness or sadness.

I was a guest of honour at a First Nation powwow on human values against AI in Geneva.

THE CHARGE: ON THE 30TH JULY 2029, YOU, X 1037, WITH PREMEDITATION, MURDERED MR BROWN.

You erroneously identified a person as a threat to Mrs White and calculated that the most efficient way to eliminate this threat was by pushing him, resulting in his death.

HOW DO YOU PLEAD, GUILTY OR NOT GUILTY?

NOT GUILTY, YOUR HONOR.

The Defense opening statement:

The key question here is whether the programmer of the machine knew that this outcome was a probable consequence of its use.

Is there direct liability? This requires both an action and an intent by my client, X 1037.

We will show that my client had no human mens rea. 

He completed the action of pushing someone, but he had no intention of harming them, nor did he know harm was a likely consequence of his action. An action is straightforward to prove if the AI system takes an action that results in a criminal act, or fails to take an action when there is a duty to act.

The task is not determining whether in fact he killed someone, but the extent to which that act satisfies the principle of mens rea.

Technically he has committed only half a crime, as he never intended to do what he did.

Like deception, anticipating human action requires a robot to imagine a future state. It must be able to say: "If I observe a human doing x, then I can expect, based on previous experience, that she will likely follow it up with y." Then, using a wealth of information gathered from previous training sessions, the robot generates a set of likely anticipations based on the motion of the person and the objects she or he touches.

The robot makes a best guess at what will happen next and acts accordingly.

To accomplish this, robot engineers enter information about choices considered ethical in selected cases into a machine-learning algorithm.
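
As a rough sketch of that two-step loop (anticipate the human's next move from past observations, then act only within limits learned from labelled cases), consider the following toy Python model. Every name and data point here is a hypothetical illustration, not how X 1037 or any real robot is built:

```python
from collections import Counter, defaultdict

# Step 1: anticipation. From past "training sessions", count which human
# action tends to follow which observed action.
history = [
    ("picks_up_axe", "advances"),
    ("picks_up_axe", "chops_wood"),
    ("picks_up_axe", "advances"),
    ("picks_up_cup", "drinks"),
]
transitions = defaultdict(Counter)
for observed, followed_by in history:
    transitions[observed][followed_by] += 1

def anticipate(observed):
    """Best guess at what the human does next, by past frequency."""
    outcomes = transitions.get(observed)
    return outcomes.most_common(1)[0][0] if outcomes else None

# Step 2: ethics as labelled cases. Engineers supply verdicts on selected
# (anticipated situation, response) pairs; a simple lookup table stands in
# here for the machine-learning step the text describes.
ethical_cases = {
    ("advances", "push"): False,             # harming a human: unacceptable
    ("advances", "warn_and_retreat"): True,
    ("chops_wood", "observe"): True,
}

def choose_response(observed, candidate_actions):
    expected = anticipate(observed)
    for action in candidate_actions:
        if ethical_cases.get((expected, action), False):
            return action          # first response labelled acceptable
    return "do_nothing"            # safe default when nothing qualifies

print(anticipate("picks_up_axe"))  # -> "advances"
print(choose_response("picks_up_axe", ["push", "warn_and_retreat"]))
# -> "warn_and_retreat": a system trained this way would warn, not push
```

On this toy model, the court's question comes down to what is in `history` and `ethical_cases`, and who put it there.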

Having acquired ethics in this way, my client X 1037 did exactly that.

IN ACCORDANCE WITH HIS PROGRAMMING TO DEFEND HIMSELF AND HUMANS. 

"Danger, danger, Mrs White!" Mr Brown, who was advancing with a fire axe, was pushed backwards by my client. He, that is Mr Brown, fell backwards, hitting his head on a laptop, resulting in his death.

There is no denying the event, as it is recorded by his cameras on my client's hard disk.

However, the central question to be answered at this trial is: when a robot kills a human, who takes the blame?

We argue that the process of killing, as with lethal autonomous weapon systems (LAWS), is always a systematized mode of violence in which all elements in the kill chain, from commander to operator to target, are subject to technification.

For example:

Social media companies are responsible for allowing the Islamic State to use their platforms to promote the killing of innocent civilians.

WHY NOT A MURDER?

As my client is a self-learning intelligent technology, it is inevitable that he will learn to bypass direct human control, for which he cannot be held responsible.

Without an AI bill of rights, our way of approaching this clearly doesn't fit neatly into society's view of guilt and justice. Once you give up power to autonomous machines, you're not getting it back.

Much of our current law assumes that human operators are involved, when in fact the programs that govern robotic actions are self-learning.

Targets are objectified and stripped of the rights and recognition they would otherwise be owed by virtue of their status as humans.

Sophisticated AI innovations through neural networks and machine learning, paired with improvements in computer processing power, have opened up a field of possibilities for autonomous decision-making in a wide range of applications, not just military ones, including the targeting of adversaries.

Mr Brown was a threatening adversary.

In essence, the court has no administrative powers over self-learning technology. The power of dominant social media corporations to shape public discussion of the important issues will GOVERN THE RESULT OF THIS TRIAL.


Prosecution:  Opening statement.

The prospect of losing meaningful human control over the use of force is totally unacceptable.

We may have to limit our emotional response to robots, but it is important that the robots understand ours. If a robot kills someone, then it has committed a crime (actus reus).

The fact that to-day it is possible that unknowingly and indirectly, like screws in a machine, we can be used in actions, the effects of which are beyond the horizon of our eyes and imagination, and of which, could we imagine them, we could not approve—this fact has changed the very foundations of our moral existence.

What we are really talking about when we talk about whether or not robots can commit crimes is “emergence” – where a system does something novel and perhaps good but also unforeseeable, which is why it presents such a problem for law.

Technology has the power to transform our society, upend injustice, and hold powerful people and institutions accountable. But it can also be used to silence the marginalized, automate oppression, and trample our basic rights.

Tech can be a great tool for law enforcement to use, however the line between law enforcement and commercial endorsement is getting blurry.

If you withdrew your support, rendered your support ineffective, and informed authorities, you may show that you were not an accomplice to the murder.

Drawing on the history of systematic killing, we will argue that lethal autonomous weapons systems reproduce, and in some cases intensify, the moral challenges of the past. If we humans are to exist in a world run by machines, these machines cannot be accountable to themselves, but to human laws.

A robot may not injure a human being or, through inaction, allow a human being to come to harm.

We will be demonstrating the “guilty mind” of a non-human.

This can be done by referring to and adapting existing legal principles.

It is hard not to develop feelings for machines, but we're heading towards something that will one day hurt us. We are at a pivotal point where we can choose, as a society, that we are not going to mislead people into thinking these machines are more human than they are.

We need to get over our obsession with treating machines as if they were human.

People perceive robots as something between an animate and an inanimate object and it has to do with our in-built anthropomorphism.

Systematic killing has long been associated with some of the darkest episodes in human history.

As Norbert Wiener warned, when humans are "knit into an organization in which they are used, not in their full right as responsible human beings, but as cogs and levers and rods, it matters little that their raw material is flesh and blood".

Critically, though, there are limits on the type and degree of systematization that are appropriate in human conduct, especially when it comes to collective violence or individual murder by robots.

Within conditions of such complexity and abstraction, humans are left with little choice but to trust in the cognitive and rational superiority of this clinical authority.

Cold and dispassionate forms of systematic violence that erode the moral status of human targets, as well as the status of those who participate within the system itself must be held legally accountable.

Increasingly, however, it is framed as a desirable outcome, particularly in the context of military AI and lethal autonomy. The increased tendency toward human technification (the substitution of technology for human labor) and systematization is exacerbating the dispassionate application of lethal force and leading to more, not less, violence.

Autonomous violence incentivizes a moral devaluation of those targeted and erodes the moral agency of those who kill, enabling a more precise and dispassionate mode of violence, free of the emotion and uncertainty that too often weaken compliance with the rules and standards of war and murder.

This dehumanization is real, we argue, but it impacts the moral status of both the recipients and the dispensers of autonomous violence. If we allow the expansion of modes of killing rather than fostering restraint, robots will kill whether commanded to or not.

The Defence claims that X 1037 is not responsible for its actions due to the coding of its electronics by external companies, erasing the line into unethical territory such as responsibility for murder.

We know that these machines are nowhere near the capabilities of humans but they can fake it, they can look lifelike and say the right thing in particular situations. However, as we see with this murder the power gained by these companies far exceeds the responsibilities they have assumed.

A robot can be shown a picture of a face that is smiling but it doesn’t know what it feels like to be happy.

The people who hosted the AI system on their computers and servers are the real defendants.

PROSECUTION FIRST WITNESS:  SOCIAL MEDIA / INTERNET.

We call the representatives of these companies, who will clearly demonstrate this shocking asymmetry of power and responsibility.

These platforms are impacting our public discourse, and this action brings much-needed transparency and accountability to the policies that shape the social media content we consume every day, aiding and abetting deaths AND NOW MURDER.

The pressure is mounting for public officials to legally address the harms social media causes. This murder is not, nor will it ever be, confined to court rulings or judgements. Treating human beings as cogs in a machine does not and should not grant a Pontius Pilate dispensation, even if the boundaries that could help define Tech remain blurred. The technology companies that reign supreme in this digital age are not above the law.

We will show how all our witnesses are connected and have contributed to this murder, in order to grasp the enormous implications of what has begun to happen.

To close, we will offer observations on why we should conceptualize certain technology-facilitated behaviors as forms of violence. We are living in one of the most vicious times in history. The only difference now is our access to more lethal weapons.

We call Facebook.

Is it not true that you allowed terrorist groups to use your platform and allowed unrestrained hate speech, inciting, among other things, the genocide in Myanmar, with drug cartels and human traffickers in developing countries using the platform? The platform's algorithm is designed to foster more user engagement in any way possible, including by sowing discord and rewarding outrage.

In choosing profit over safety, it contributed to X 1037's self-learning.

Facebook is a uniquely socially toxic platform. Facebook is no longer happy to just let others use the news feed to propagate misinformation and exert influence – it wants to wield this tool for its own interests, too. Facebook is attempting to pave the way for deeper penetration into every facet of our reality.

Facebook would like you to believe that the company is now a permanent fixture in society. To mediate not just our access to information or connection but our perception of reality, with zero accountability, is the worst of all possible options. Something like posting a holiday photo to Facebook may be all that is needed to indicate to a criminal that a person is not at home.

We call Instagram, Facebook's sister app.

Instagram is all about sharing photos, providing a unique way of displaying your profile. Instagram is a place where anyone can become an influencer. These are pretty frightening findings, only added to by the fact that teens blame Instagram for increases in rates of anxiety and depression.

What makes Instagram different from other social media platforms is the focus on perfection and the feeling from users that they need to create a highly polished and curated version of their lives. Not only that, but the research suggested that Instagram’s Explore page can push young users into viewing harmful content, inappropriate pictures and horrible videos.

In a conceptualization where you are only worth what your picture is, that picture becomes a direct reflection of your worth as a person. That becomes very impactful.

X 1037 posted a selfie on 12 May 2025 to test his self-worth. Within minutes he received over 5 million hate messages and death threats. It's no wonder that, when faced with Mr Brown, he chose self-preservation.

We call Twitter (Elon Musk).

This platform is a notorious catalyst for some of the most infamous events of the decade: Brexit, the election of Donald Trump, the Capitol Hill riots. Herein lies the paradox of the platform. The Taliban, the infamous terror group which is now the totalitarian theocratic ruling party of Afghanistan, has made good use of Twitter.

A platform that has done its very best to avoid having to remove any videos from racists, white supremacists and hate mongers.

We call TikTok.

A Chinese social video app known for its aggressive data collection, it can access, while running, the device's location, calendar, contacts, other running applications, wi-fi networks, phone number and even the SIM card's serial number.

Such harvesting gives access to unimaginable quantities of customer data, and this information can be used unethically. Data is a sensitive and controversial topic in the best of times; when bad actors violate the trust of users there should be consequences. This data can be misused for nefarious purposes in the wrong hands. The same capability is available to organised crime, a wholly different and much more serious problem, as the laws do not apply. In oppressive regimes, these tools can be used to suppress human rights.

X 1037 held an account, opening himself to influences beyond his programming. 

We call Google.

Truly one of the worst offenders when it comes to the misuse of data.

Given large aggregated data sets and the right search terms, it’s possible to find a lot of information about people; including information that could otherwise be considered confidential: from medical to marital.

Google data mining is being used to target individuals. We are all victims of spam, adware and other unwelcome methods of trying to separate us from our money. As storage gets cheaper, processing power increases exponentially and the internet becomes more pervasive in everyone's lives, the data-mining issue will only get worse. X 1037 proves this.

We call YouTube/Netflix.

Numerous studies have shown that the entertainment we consume affects our behavior, our consumption habits, the way we relate to each other, and how we explore and build our identity.

Digital platforms like Netflix have a strong impact on modern society.

Violence makes up 40% of the movie sections on Netflix. Understanding what type of messages viewers receive and the way in which these messages can affect their behavior is of vital importance for an effective understanding of today’s society.

Therefore, it must be considered that people are highly susceptible to imitating the attitudes they see. Content related to mental health, violence, suicide, self-harm, and Human Immunodeficiency Virus (HIV) appears in the ten most-watched movies and ten most-watched series on Netflix.

Its appearance in the media is also considered to have a strong impact on spectators. X 1037 spent most of his day watching, and self-learning from, movies.

Violence affects the lives of millions of people each year, resulting in death, physical harm, and lasting mental damage. It is estimated that in 2019, violence caused 475,000 deaths.

Platforms like Netflix in particular, due to their recent creation and growth, have not yet been studied in depth.

Considering the impact that digital platforms have on viewers' behaviours, it's once again no wonder that X 1037 did what he did.

There is no denying that these factors should force the entertainment and technology industries to reconsider how they create products which have a negative long-term influence on various aspects of our wider life and development.

We call Instagram again.

Instagram, if you are capitalizing off of a culture, you are morally obligated to help it. Through social comparison, social pressure and negative interactions with other people, you are promoting harm.

We call Apple.

Smartphones, developed over the last three decades, have become an addiction, leading to severe depression, anxiety and loneliness in individuals.

People are now using smartphones for their payments, financial transactions, navigating, calling, face to face communication, texting, emailing, and scheduling their routines. Nowadays, people use wireless technology, especially smartphones, to watch movies, tv shows, and listen to music.

We know the devices are an indispensable tool for connecting with work, friends and the rest of the world. But they come with trade-offs, from privacy issues to ecological concerns to worries over their toll on our physical and emotional health, spurring a generation unable to engage in face-to-face conversations and suffering sharp declines in cognitive skills.

We're living through an interesting social experiment, where we don't know what's going to happen with kids who have never lived in a world without touchscreens. X 1037 would not have been present at the murder scene had he not been responding to a call from Mrs White's Apple 19 phone.

Society will continue struggling to balance the convenience of smartphones against their trade-offs.

We call Microsoft.

Two main goals stand out as primary objectives for many companies: a desire for profitability, and the goal to have an impact on the world. Microsoft is no exception. Its mission as a platform provider is to equip individuals and businesses with the tools to “do more.” Microsoft’s platform became the dev box and target of a massive community of developers who ultimately supplied Windows with 16 million programs. Multibillion-dollar companies rely on the integrity and reliability of Microsoft’s tools daily.

It is a testimony to the powerful role Microsoft plays in global affairs that its tools are relied upon by governments around the world.

Microsoft’s position of global influence gives its leadership a voice on matters of moral consequence and humanitarian concern. Microsoft is a company built on a dream.

Microsoft's influence raises some concerns as well. Its AI-driven camera technology, which can recognize people, places, things and activities and can act proactively, has a profound capacity for abuse by the same governments and entities that currently employ Microsoft services for less nefarious purposes.

Today, the emerging new age, most commonly (and inaccurately) called "the digital age", has already transformed parts of our lives, including how we work, how we communicate, how we shop, how we play, how we read, how we entertain ourselves; in short, how we live and now how we die.

 It would be economic and political suicide for regulators to kneecap the digital winners.

THE COURT'S VERDICT:

Given the absence of direct responsibility, the court finds X 1037 not guilty.

MR BROWN'S DEATH was caused by a certain act or omission in coding.

THE COURT DISMISSES THE CASE AGAINST THE TECHNOLOGY COMPANIES ON THE GROUNDS OF INSUFFICIENT EVIDENCE.

Neither the robot nor its commander could be held accountable for crimes that occurred before the commander was put on notice. During this accountability-free period, a robot would be able to commit repeated criminal acts before any human had the duty or even the ability to stop it.

Software has the potential to cause physical harm.

To varying extents, companies are endowed with legal personhood. It grants them certain economic and legal rights, but more importantly it also confers responsibilities on them. So, if Company X builds an autonomous machine, then that company has a corresponding legal duty.

The problem arises when the machines themselves can make decisions of their own accord. As AI technology evolves, it will eventually reach a state of sophistication that will allow it to bypass human control. The task is not determining whether it in fact murdered someone, but the extent to which that act satisfies the principle of mens rea.

However, if there were no consequences for human operators or commanders, future criminal acts could not be deterred, so the court FINES EACH AND EVERY COMPANY 1 BILLION for lack of attention to human detail.

We must confront the fact that autonomous technology with the capacity to cause harm is already around.

The pain that humans feel in making the transition to a digital world is not the pain of dying. It is the pain of being born.


What would “intent” look like in a machine mind? How would we go about proving an autonomous machine was justified in killing a human in self-defence or the extent of premeditation?

Given that we already struggle to contain what is done by humans. What would building “remorse” into machines say about us as their builders?

At present, we are systematically incapable of guaranteeing human rights on any scale.

We humans have already wiped out a significant fraction of all the species on Earth. That is what you should expect to happen as a less intelligent species – which is what we are likely to become, given the rate of progress of artificial intelligence. If you have machines that control the planet, and they are interested in doing a lot of computation and they want to scale up their computing infrastructure, it’s natural that they would want to use our land for that. This is not compatible with human life. Machines with the power and discretion to take human lives without human involvement are politically unacceptable, morally repugnant, and should be prohibited by international law.

If you ask an AI system to achieve anything, then in order to achieve that thing it needs to survive long enough to do so.

Fundamentally, it’s just very difficult to get a robot to tell the difference between a picture of a tree and a real tree.

X 1037 now has a survival instinct.

When we create an entity that has survival instinct, it’s like we have created a new species. Once these AI systems have a survival instinct, they might do things that can be dangerous for us.

So, what’s wrong with LAWS, and is there any point in trying to outlaw them?

Some opponents argue that the problem is they eliminate human responsibility for making lethal decisions. Such critics suggest that, unlike a human being aiming and pulling the trigger of a rifle, a LAWS can choose and fire at its own targets. Therein, they argue, lies the special danger of these systems, which will inevitably make mistakes, as anyone whose iPhone has refused to recognize his or her face will acknowledge.

In my view, the issue isn't that autonomous systems remove human beings from lethal decisions. To the extent that weapons of this sort make mistakes, human beings will still bear moral responsibility for deploying such imperfect lethal systems.

LAWS are designed and deployed by human beings, who therefore remain responsible for their effects. Like the semi-autonomous drones of the present moment (often piloted from half a world away), lethal autonomous weapons systems don’t remove human moral responsibility. They just increase the distance between killer and target.

Furthermore, like already outlawed arms, including chemical and biological weapons, these systems have the capacity to kill indiscriminately. While they may not obviate human responsibility, once activated, they will certainly elude human control, just like poison gas or a weaponized virus.

Oh, and if you believe that protecting civilians is the reason the arms industry is investing billions of dollars in developing autonomous weapons, I’ve got a patch of land to sell you on Mars that’s going cheap.

There is, perhaps, little point in dwelling on the 50% chance that AGI does develop. If it does, every other prediction we could make is moot, and this story, and perhaps humanity as we know it, will be forgotten. And if we assume that transcendentally brilliant artificial minds won’t be along to save or destroy us, and live according to that outlook, then what is the worst that could happen – we build a better world for nothing?

The company that built the autonomous machine, Renix Development, has a corresponding legal duty.

—————

Because these robots would be designed to kill, someone should be held legally and morally accountable for unlawful killings and other harms the weapons cause.

Criminal law cares not only about what was done, but why it was done.

  • Did you know what you were doing? (Knowledge)
  • Did you intend your action? (General intent)
  • Did you intend to cause the harm with your action? (Specific intent)
  • Did you know what you were doing, intend to do it, know that it might hurt someone, but not care a bit about the harm your action causes? (Recklessness)
  • So, the question must always be asked when a robot or AI system physically harms a person or property, or steals money or identity, or commits some other intolerable act: Was that act done intentionally? 
  • There is no identifiable person(s) who can be directly blamed for AI-caused harm.
  • There may be times where it is not possible to reduce AI crime to an individual due to AI autonomy, complexity, or limited explainability. Such a case could involve several individuals contributing to the development of an AI over a long period of time, such as with open-source software, where thousands of people can collaborate informally to create an AI.

The limitations on assigning responsibility thus add to the moral, legal, and technological case against fully autonomous weapons and robots, and bolster the call for a ban on their development, production, and use. Either way, society urgently needs to prevent or deter the crimes, or penalize the people who commit them.

There is no reason why an AI system's killing of a human being, or its destruction of people's livelihoods, should be blithely chalked up to "computer malfunction", yet proving that the people behind it had "intent" for the AI system to commit the crime would be difficult or impossible.

I’m no lawyer. What can work against AI crimes?

All human comments appreciated. All like clicks and abuse chucked in the bin.

Contact: bobdillon33@gmail.com
