
bobdillon33blog

~ Free Thinker.


Tag Archives: AI

THE BEADY EYE ASKS: WHAT DOES THE WORD WE (IN TODAY’S WORLD OF AI) MEAN. IT CERTAINLY DOES NOT MEAN IT’S ON ME, IT’S ON YOU, IT’S ON US.

20 Saturday Jan 2024

Posted by bobdillon33@gmail.com in #whatif.com, 2024 the year of disconnection, Algorithms., Artificial Intelligence., Civilization., Cry for help., Dehumanization., DIGITAL DICTATORSHIP., Disconnection., Donald Trump., Erasing history., Extermination., Human Collective Stupidity., Human values., Humanity., IS DATA DESTORYING THE WORLD?, Israel and Palestine, Israeli-Palestinian conflict, Life., MISINFORMATION., Palestinian- Israel., Post - truth politics., Purpose of life., Reality., Religious Beliefs., State of the world, Technology, Technology v Humanity, The cost of war., The essence of our humanity., THE ISRAELI- PALESTINIAN PROBLEM., The Obvious., The state of the World., The world to day., THE WORLD YOU LIVE IN., THIS IS THE STATE OF THE WORLD.  , Truth, Universal values., War, Wars, We can leave a legacy worthwhile., WHAT IS TRUTH, What Needs to change in the World, Where's the Global Outrage., World Organisations., World Politics

≈ Comments Off on THE BEADY EYE ASKS: WHAT DOES THE WORD WE (IN TODAY’S WORLD OF AI) MEAN. IT CERTAINLY DOES NOT MEAN IT’S ON ME, IT’S ON YOU, IT’S ON US.

Tags

AI, Artificial Intelligence., god, Israel, Technology

(Five minute read)

For those of us who still want to live lives that we consider human, the word ‘we’ is becoming a dangerous word.

In a world run by artificial intelligence – a world of disequilibrium rather than equilibrium – any assumption of equilibrium has to be explained. So it is quite wrong to start from ‘we’, because first we must understand the processes that lead to the social construction of this ‘we’ and to the constitution of our combined voice, now unbalanced and destabilised by a ‘we’ without faces.

These faceless AI voices are becoming the crisis not just of capitalism but of democracy.

For example: Donald Trump – a refusal to accept the truth of the untrue, a refusal to accept closure – is now running to take office again.

We cannot start by pretending to stand outside the dissonance of our own experience, for to do so would be a lie.

What is lacking, thanks in part to the uses of AI, is a refusal to accept the inevitability of increasing inequality, misery, exploitation and violence.

———————-

To start in the third person is not a neutral starting point.

The ‘we’ of our starting point is very much a question rather than an answer.

It affirms the social character of the ‘we’, but poses the nature of that sociality as a question.

The merit of starting with a ‘we’ rather than with an ‘it’ is that we are then openly confronted with the question that must underlie any theoretical assertion, but which is rarely addressed: who are we that make the assertion?

The fact that ‘we’ and our conception of ‘we’ are the product of a whole history of the subjection of the subject changes nothing.

For the moment, this ‘we’ of ours is a confused one. ‘I’ already presupposes an individualisation, a claim to individuality in thoughts and feelings, whereas the act of writing or reading is based on the assumption of some sort of community, however contradictory or confused.

It is just that the negative situation in which we exist leaves us no option: to live, to think, is to negate in whatever way we can the negativeness of our existence.

What we feel is not necessarily correct, but it is a starting point to be respected and criticised, not just to be put aside in favour of objectivity.

The dissonance is not an external ‘us’ against ‘the world’; inevitably it is a dissonance that reaches into us as well, one that divides us against ourselves.

——————-

Society is, but it exists in an arc of tension towards that which is not, or is not yet.

To look at the web objectively, from the outside, we see it all as blurred movement: predictions of the downfall of the world, coupled with acceptance that there is nothing we can do about it.

Our refusal to accept, tells us nothing of the future, nor does it depend for its validity on any particular outcome.

How then do we change the world without taking power?

For example:

The problem with armed struggle is that it accepts from the beginning that it is necessary to adopt the methods of the enemy in order to defeat the enemy; but even in the unlikely event of military victory, it is capitalist social relations that have triumphed.  #Israel v Palestine.

How many children have died needlessly since I started to write THIS POST?  How many since you began to read it?

We all know that Palestinians in Gaza, the West Bank and Israel all live under various regimes of organized discrimination and oppression, much of which makes life nearly unlivable. The reflexive identification with Israel, by both the US and the UK, obscures the fuller picture of what’s happening between Israel and the Palestinians.

What exactly counts as a provocation?

Not, apparently, the large number of settlers, more than 800 by one media account, who stormed the al-Aqsa mosque compound on 5 October. Not the 248 Palestinians killed by Israeli forces or settlers between 1 January and 4 October of this year. Not the denial of Palestinian human rights and national aspirations for decades. Not the thousands currently losing their lives.

To be considered a political being you must at the very least be considered a human being.

Who gets to count as human?

“We are fighting human animals and we are acting accordingly,” Israel’s defense minister Yoav Gallant said. Human animals?

How can such language and an announced policy of collective punishment against all the residents of Gaza be seen by Israel’s supporters in the United States or elsewhere as defensible?

Let’s be clear: Gallant’s language is not the rhetoric of deterrence. It’s the language of genocide which WE are condoning.

WHAT ATTEMPTS HAVE THERE BEEN TO MAKE PEACE?

Two-state solution:

An agreement that would create a state for the Palestinians in the West Bank and Gaza Strip alongside Israel. Israel has said a Palestinian state must be demilitarised so as not to threaten its security. Now inconceivable. 

Today about 5.6 million Palestinian refugees – mainly descendants of those who fled in 1948 – live in Jordan, Lebanon, Syria, the Israeli-occupied West Bank and Gaza. About half of registered refugees remain stateless, according to the Palestinian foreign ministry, many living in crowded camps.

What continues to be astounding is that a regime recognised under international law as the occupying power, and as one that many human rights groups agree is imposing a system of apartheid, is trusted to relay information about its own atrocities.

The Israeli regime continues to dehumanise Palestinians as part of its tactic to sow seeds of doubt on their testimonies and to justify the atrocities it is committing.

The only solution (put forward by THE BEADY EYE in a previous post) is one federal state with a written constitution.

I have seen images and videos that will haunt me forever.

The reality is that Palestinians have been dehumanised to such an extent, that even when they hold up their murdered children in front of cameras and display them to the world, there are those who will still say they are responsible for their own children’s deaths. But make no mistake, what we are seeing in Gaza is an unfolding genocide and Palestinians are showing the world what it looks like in real time.

Yet despite the plethora of pictures, videos and testimonies, we the International Community have not thrown Israel out of the United Nations.

With a US presidential election looming, and with few signs that the Israeli conflict will ebb away any time soon, evangelicals could find themselves in a position of significant power in the near future.

This is what they are saying.

“To the terrorists who have chosen this fight, hear this, what you do to Israel, god will do to you. Despite today’s weeping, joy will come because he [god] who watches over Israel neither slumbers nor sleeps,”

“In keeping with Christian Just War tradition, we also affirm the legitimacy of Israel’s right to respond against those who have initiated these attacks, as Romans 13 grants governments the power to bear the sword against those who commit such evil acts against innocent life.”

“What will come soon [is] the antichrist and his seven year empire that will be destroyed in the battle of armageddon. Then Jesus Christ will set up his throne in the city of Jerusalem. He will establish a kingdom that will never end,” Hagee said.

Hagee, despite having a long history of antisemitism – he has suggested Jews brought persecution upon themselves by upsetting God and called Hitler a “half-breed Jew” – founded Christians United for Israel in 2006.

CUFI (Christians United for Israel) was founded by a man who believes the presence of Jews in Israel is a precursor to Jesus Christ returning to Earth – God forbid he does, because we the International Community are condemned to hell if he and it exist.

Finally:

Those who survive will grow up sad, fearful, guilty, angry, alienated and looking for vengeance – or at least, judging by past experience, many of them will. They will ask who killed their brothers and sisters, their parents, their friends, and why they did it. They will ask what the world did to stop the killing.

All human comments appreciated. All like clicks and abuse chucked in the bin.

Contact: bobdillon33@gmail.com


THE BEADY EYE SAY’S: #DOWNLOAD THE APP AND KISS YOUR ASS GOODBYE.

19 Friday Jan 2024

Posted by bobdillon33@gmail.com in 2024 the year of disconnection, Algorithms., Artificial Intelligence., IS DATA DESTORYING THE WORLD?, Our Common Values., Purpose of life., Reality., Robot citizenship., Speed of technology., State of the world, Technology, Technology v Humanity, Technology's, Technology., Telling the truth., The common good., The essence of our humanity., The Future, The metaverse., The new year 2024, The Obvious., The state of the World., The world to day., THE WORLD YOU LIVE IN., THIS IS THE STATE OF THE WORLD.  , TRACKING TECHNOLOGY., VALUES, We can leave a legacy worthwhile., What is shaping our world., WHAT IS TRUTH, What Needs to change in the World, Where's the Global Outrage.

≈ Comments Off on THE BEADY EYE SAY’S: #DOWNLOAD THE APP AND KISS YOUR ASS GOODBYE.

Tags

AI, Algorithms., Artificial Intelligence., data-science, Machine learning., Technology, The Future of Mankind, Visions of the future.

(Six minute read)

How many times have you heard someone say, “There’s an app for that”?

Every time you pick up your smartphone, you’re summoning algorithms.

They have become a core part of modern society.

They’re used in all kinds of processes, on and offline, from helping value your home to teaching your robot vacuum to steer clear of your dog’s poop. They’ve increasingly been entrusted with life-altering decisions, such as helping decide who to arrest, who to elect, who should be released from jail, and who is approved for a home loan.

Recent years have seen a spate of innovations in algorithmic processing, from the arrival of powerful language models like GPT-3, to the proliferation of facial recognition technology in commercial and consumer apps. At their heart, they work out what you’re interested in and then give you more of it – using as many data points as they can get their hands on, and they aren’t just on our phones:

At this point, they are responsible for making decisions about pretty much every aspect of our lives.
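As a rough illustration of that “give you more of it” loop, here is a minimal sketch in Python. It assumes a toy catalogue of items tagged by topic (all names invented): it counts which topics you have engaged with, then ranks unseen items so you are served more of the same.

```python
from collections import Counter

def recommend(history, catalogue, k=3):
    """Toy personalisation: count the topics a user has engaged with,
    then rank unseen items by how often their topic appears in the
    user's history -- i.e. 'give you more of it'."""
    topic_counts = Counter(topic for _, topic in history)
    seen = {item for item, _ in history}
    unseen = [(item, topic) for item, topic in catalogue if item not in seen]
    # sorted() is stable, so heavily engaged topics float to the top
    ranked = sorted(unseen, key=lambda pair: topic_counts[pair[1]], reverse=True)
    return [item for item, _ in ranked[:k]]

history = [("clip1", "politics"), ("clip2", "politics"), ("clip3", "cats")]
catalogue = [("clip4", "politics"), ("clip5", "cats"),
             ("clip6", "politics"), ("clip7", "gardening")]
print(recommend(history, catalogue))  # ['clip4', 'clip6', 'clip5']
```

The feedback loop is visible even at this scale: two clicks on politics and the gardening clip never surfaces.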

The consequences can be disastrous, and will be, because with AI these systems are now creating themselves. It’s not that the worker gets replaced by a robot or a machine, but by somebody else who knows how to use AI.

While we can interrogate our own decisions, those made by machines have become increasingly enigmatic.

They can amplify harmful biases that lead to discriminatory decisions or unfair outcomes that reinforce inequalities. They can be used to mislead consumers and distort competition. Further, the opaque and complex nature by which they collect and process large volumes of personal data can put people’s privacy rights in jeopardy.
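A minimal sketch of how such a bias gets reproduced rather than corrected, using invented loan-style data: the “model” simply learns each group’s historical approval bar, so a discriminatory bar in the training data is applied verbatim to new applicants.

```python
# Toy illustration (hypothetical data): a model trained on biased
# historical decisions reproduces the bias at prediction time.
historical = [
    # (group, score, approved) -- group "B" was held to a higher bar
    ("A", 60, True), ("A", 55, True), ("A", 45, False),
    ("B", 60, False), ("B", 75, True), ("B", 65, False),
]

def learn_threshold(records, group):
    """'Learn' the lowest score ever approved for this group."""
    approved = [score for g, score, ok in records if g == group and ok]
    return min(approved)

model = {g: learn_threshold(historical, g) for g in ("A", "B")}
print(model)  # {'A': 55, 'B': 75} -- the unfair bar is learned verbatim

def predict(group, score):
    return score >= model[group]

# Two identical applicants, different groups, different outcomes:
print(predict("A", 60), predict("B", 60))  # True False
```

Nothing in the code is malicious; the discrimination arrives entirely through the data, which is why it is so hard to spot from the outside.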

Currently there are few or no rules or laws for how companies can or can’t use algorithms in general, or those that harness AI in particular.

Adaptive algorithms have been linked to terrorist attacks and beneficial social movements.

So it’s not too far-fetched to say that personalised AI is driving people toward self-reinforcing echo chambers of extremism, or that someone could ask an AI to create a virus, or an alternative to money.

Where is this all going to end up?

A conscious robot faking emotions – sorrow, joy, sadness, pain and the rest – that wants to bond with you.

———————————

It all depends on what you think consciousness is.

Yes, a robot could be a thousand times more intelligent than a human, with the question becoming, in essence, whether any kind of subjective experience amounts to a conscious experience. If the subjective feeling of consciousness is an illusion created by brain processes, then a machine that replicates such a process would be conscious in the way that we are.

At the moment machines with minds are mainstays of science fiction.

Indeed, the concept of a machine with a subjective experience of the world and a first-person view of itself goes against the grain of mainstream AI research. It collides with questions about the nature of consciousness and self—things we still don’t entirely understand.

Even imagining such a machine’s existence raises serious ethical questions that we may never be able to answer. What rights would such a being have, and how might we safeguard them?

If a machine thinks and believes it has consciousness, how would we know whether it were conscious?

Perhaps you can understand, in principle, how the machine is processing information, and there are those who are comfortable with that. However, an important feature of a learning machine is that its teacher will often be largely ignorant of quite what is going on inside it, and has no way of knowing whether consciousness exists.

And yet, while conscious machines may still be mythical, their very possibility shapes how we think about the machines we are building today.

Can machines think?

——————-

They’re used for everything from recognizing your voice and face to listening to your heart and arranging your life.

All kinds of things can be algorithms, and they’re not confined to computers – and the impact of potential new legislation to limit their influence on our lives remains unclear.

There’s often little more than a basic explanation from tech companies on how their algorithmic systems work and what they’re used for. Meta, the company formerly known as Facebook, has come under scrutiny for tweaking its algorithms in a way that helped incentivize more negative content on the world’s largest social network.

Laws for algorithmic transparency are necessary before specific usages and applications of AI can be regulated.  When it comes to addressing these risks, regulators have a variety of options available, such as producing instructive guidance, undertaking enforcement activity and, where necessary, issuing financial penalties for unlawful conduct and mandating new practices.

We need to force large Internet companies such as Google, Meta, TikTok and others to “give users the option to engage with a platform without being manipulated by algorithms driven by user-specific data in order to shape and manipulate users’ experiences — and give consumers the choice to flip it on or off.”

It will inevitably affect others such as Spotify and Netflix that depend deeply on algorithmically-driven curation.

We live in an unfair world, so any model you make is going to be unfair in one way or another.

For example, there have been concerns about whether the data going into facial-recognition technology can make the algorithm racist, not to mention what drives military drones to kill.

—————

Going forward there are a number of potential areas we could focus on, and, of these, transparency and fairness have been shown to be particularly significant.

Artificial Intelligence as a Driving Force for the Economy and Society and Wars.

In some cases this lack of transparency may make it more difficult for people to exercise their rights – including those under the GDPR. It may also mean algorithmic systems face insufficient scrutiny in some areas (for example from the public, the media and researchers).

While legislators scratch their heads over regulating it, the speed at which artificial intelligence (AI) evolves and integrates into our lives is only going to increase in 2024. Legislators have never been great at keeping pace with technology, but the obviously game-changing nature of AI is starting to make them sit up and take note.

The next generation of generative AI tools will go far beyond today’s chatbots and image generators, becoming more powerful. We will start to see them embedded in creative platforms and productivity tools, such as generative design tools and voice synthesizers.

Being able to tell the difference between the real and the computer-generated will become an increasingly valuable tool in the critical skills toolbox!

AI ethicists will be increasingly in demand as businesses seek to demonstrate that they are adhering to ethical standards and deploying appropriate safeguards.

95 percent of customer service leaders expect their customers will be served by AI bots at some point in the next three years. Doctors will use it to assist them in writing up patient notes or interpreting medical images. Coders will use it to speed up writing software and to test and debug their output.

40% of employment globally is exposed to AI, which rises to 60% in advanced economies.

An example is Adobe’s integration of generative AI into its Firefly design tools, trained entirely on proprietary data, to alleviate fears that copyright and ownership could be a problem in the future.

Quantum computing – capable of massively speeding up certain calculation-heavy compute workloads – is increasingly being found to have applications in AI.

AI can solve really hard, aspirational problems that people may not be capable of solving, such as those in health, agriculture and climate change.

We need to bridge the gap between AI’s potential and its practical application, and to ask whether the technology will affect what it means to be human.

They are already creating a two-tier world of haves and have-nots, driving inequality and eroding the deep human values of authenticity and presence.

Will new technologies lead us, or are they already leading us and our children to confuse virtual communities and human connection for the real thing?

Generative AI presents a future where creativity and technology are more closely linked than ever before. If they become inseparable, we may lose something precious about what it means to be human.

How can we ensure equal access to the technology?

If we look to A.I. as a happiness provider, we will only create a greater imbalance than we already have.

If AI Algorithms run the world there will be no time off.

Humans are now hackable animals, so AI might save us from ourselves.

That AI will become the only thing that understands these embedded systems is scary.

General AI may completely up-end even the contemplation of reason. Not only will “resistance be futile”, it could become inconceivable for a dumbfounded majority.

One thing is certain: in about a hundred years we will have an idea of what makes us different and more intelligent than computers. But don’t worry, AI has the potential to change education and the way we learn.

All human comments appreciated. All like clicks and abuse chucked in the bin.

Contact: bobdillon33@gmail.com


THE BEADY EYE LOOKS AT: THE FIRST TRANSCRIPT OF A MURDER TRIAL CONCERNING A ROBOT WHO KILLED A HUMAN.

08 Monday Jan 2024

Posted by bobdillon33@gmail.com in #whatif.com, Algorithms., Artificial Intelligence., Murders, Robot citizenship., Robotic murderer

≈ Comments Off on THE BEADY EYE LOOKS AT: THE FIRST TRANSCRIPT OF A MURDER TRIAL CONCERNING A ROBOT WHO KILLED A HUMAN.

Tags

AI, Algorithms., Artificial Intelligence., robotics, Robots., Technology, The Future of Mankind, Visions of the future.

(Twenty-five minute read)

On 25 January 1979, Robert Williams (USA) was struck in the head and killed by the arm of a 1-ton production-line robot in a Ford Motor Company casting plant in Flat Rock, Michigan, USA, becoming the first fatal casualty of a robot. The robot was part of a parts-retrieval system that moved material from one part of the factory to another.

Uber and Tesla have made the news with reports of their autonomous and self-driving cars, respectively, getting into accidents and killing passengers or striking pedestrians.

These deaths, however, were completely unintentional, but they give us a glimpse into the world we might inherit, or at least into how we are conceiving potential futures for ourselves.

By 2040, there is even a suggestion that sophisticated robots will be committing a good chunk of all the crime in the world. At the heart of this debate is whether an AI system could be held criminally liable for its actions.

Where there’s blame, there’s a claim. But who do we blame when a robot does wrong?

Among the many things that must now be considered is what role and function the law will play.

So if an advanced autonomous machine commits a crime of its own accord, how should it be treated by the law? How would a lawyer go about demonstrating the “guilty mind” of a non-human? Can this be done by referring to and adapting existing legal principles?

An AI program could be held to be an innocent agent, with either the software programmer or the user being held to be the perpetrator-via-another.

We must confront the fact that autonomous technology with the capacity to cause harm is already around.

Whether it’s a military drone with a full payload, a law enforcement robot exploding to kill a dangerous suspect or something altogether more innocent that causes harm through accident, error, oversight, or good ol’ fashioned stupidity.

None of these deaths are caused by the will of the robot.

Sophisticated algorithms are both predicting and helping to solve crimes committed by humans; predicting the outcome of court cases and human rights trials; and helping to do the work done by lawyers in those cases.

The greater existential threat is where a gap exists between what a programmer tells a machine to do and what the programmer really meant to happen. The discrepancy between the two becomes more consequential as the computer becomes more intelligent and autonomous.

How do you communicate your values to an intelligent system such that the actions it takes fulfill your true intentions?

The greater threat is scientists purposefully designing robots that can kill human targets without human intervention for military purposes.

That’s why AI and robotics researchers around the world published an open letter calling for a worldwide ban on such technology. And that’s why the United Nations in 2018 discussed if and how to regulate so-called “killer robots”.

Though these robots wouldn’t need to develop a will of their own to kill, they could be programmed to do it. Neural nets use machine learning, in which they train themselves on how to figure things out, and our puny meat brains can’t see the process.

The big problem is that even computer scientists who program the networks can’t really watch what’s going on with the nodes, which has made it tough to sort out how computers actually make their decisions. Then there is the assumption that a system with human-like intelligence must also have human-like desires – e.g., to survive, be free, have dignity.
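To make that opacity concrete, here is a toy two-layer network, with hand-set weights standing in for trained ones (real learned weights look even less meaningful). The numbers below compute XOR, yet nothing in the weight values announces that rule.

```python
def step(z):
    """Hard threshold activation: fire if the weighted sum is positive."""
    return 1 if z > 0 else 0

# Weights of a tiny two-layer network (a stand-in for trained values).
weights = {
    "hidden": [((1.0, 1.0), -0.5),   # unit fires when x1 + x2 > 0.5
               ((1.0, 1.0), -1.5)],  # unit fires when x1 + x2 > 1.5
    "out": ((1.0, -1.0), -0.5),      # output fires when h1 and not h2
}

def net(x1, x2):
    h = [step(w1 * x1 + w2 * x2 + b) for (w1, w2), b in weights["hidden"]]
    (v1, v2), b = weights["out"]
    return step(v1 * h[0] + v2 * h[1] + b)

for x1, x2 in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print((x1, x2), net(x1, x2))  # 0, 1, 1, 0 -- the net computes XOR

# Nothing in the numbers above *says* "XOR"; the rule is smeared across
# the weights, which is exactly why the nodes are so hard to audit.
```

Scale this to billions of weights and the auditing problem the paragraph describes follows directly.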

There’s absolutely no reason why this would be the case, as such a system will only have whatever desires we give it.

If an AI system can be criminally liable, what defense might it use?

For example:  The machine had been infected with malware that was responsible for the crime.

The program was responsible and had then wiped itself from the computer before it was forensically analyzed.

So can robots commit crime? In short: Yes.

If a robot kills someone, then it has committed a crime (actus reus), but technically only half a crime, as it would be far harder to determine mens rea.

How do we know the robot intended to do what it did? Could we simply cross-examine the AI like we do a human defendant?

Then a crucial question will be whether an AI system is a service or a product.

One thing is for sure: In the coming years, there is likely to be some fun to be had with all this by the lawyers—or the AI systems that replace them.

How would we go about proving an autonomous machine was justified in killing a human in self-defence or the extent of premeditation?

Even if you solve these legal issues, you are still left with the question of punishment.

In such a situation, however, the robot might commit a criminal act that cannot be prevented.

Preventing its use when no crime was foreseeable would undermine the advantages of having the technology.

What is a 30-year jail stretch to an autonomous machine that does not age, grow infirm or miss its loved ones? Nothing. Robots cannot be punished.

LET’S LOOK AT THE HYPOTHETICAL TRIAL.

CASE NO 0.

PRESIDING JUDGES: QUANTUM AI SUPREME COMPUTER, JUDGE NO XY; JUDGE HAROLD WISE, HUMAN/UN JUDGE; AND JUDGE JAMES SORE, HUMAN RIGHTS JUDGE.

PROSECUTOR: DATA POLICE OFFICER, CONTROLLED BY INTERNATIONAL HUMANITARIAN LAW.

DEFENSE WITNESSES: THE TECHNOLOGY COMPANIES – MICROSOFT, APPLE, FACEBOOK, TWITTER, INSTAGRAM, SOCIAL MEDIA, YOUTUBE, GOOGLE, TIKTOK.

JURY: 8 MEMBERS OF THE VIRTUAL REALITY METAVERSE; 2 APPLE DATA COLLECTION ADVISERS; 1,000 SMARTPHONE HOLDERS REPRESENTING WORLD RELIGIONS AND HUMAN RIGHTS.

THE COURT: Bodily pleas, Seventeenth Anatomical Circuit Court.

“All rise.”

Would the accused identify itself to the court.

I am X 1037, known to my owner by my human name, TODO.

Conceived on the 9th April 2027 at Renix Development / Cloning Inc, California, programmed to be self-learning, with all human history and all human legality.

In order to qualify as a robot, I have electronic chips covering the Global Positioning System (GPS) and face recognition. I have my own social media accounts on Twitter, Facebook and Instagram. I am an important symbol of a trust relationship with humans. I cannot feel pain, happiness or sadness.

I was a guest of honour at a First Nation powwow on human values against AI in Geneva.

THE CHARGE:  ON THE 30TH JULY 2029 YOU X 1037 WITH PREMEDITATION MURDERED MR BROWN.

You erroneously identified a person as a threat to Mrs White and calculated that the most efficient way to eliminate this threat was by pushing him, resulting in his death.

HOW DO YOU PLEAD, GUILTY OR NOT GUILTY?

NOT GUILTY YOUR HONOR.

The Defense opening statement:

The key question here is whether the programmer of the machine knew that this outcome was a probable consequence of its use.

Is there direct liability? This requires both an action and an intent by my client, X 1037.

We will show that my client had no human mens rea. 

He completed the action of assaulting someone, but he neither intended to harm them nor knew harm was a likely consequence of his action. An action is straightforward to prove if the AI system takes an action that results in a criminal act, or fails to take an action when there is a duty to act.

The task is not determining whether in fact he killed someone, but the extent to which that act satisfies the principle of mens rea.

Technically he has committed only half a crime, as he did not intend to do what he did.

Like deception, anticipating human action requires a robot to imagine a future state. It must be able to say, “If I observe a human doing x, then I can expect, based on previous experience, that she will likely follow it up with y.” Then, using a wealth of information gathered from previous training sessions, the robot generates a set of likely anticipations based on the motion of the person and the objects he or she touches.

The robot makes a best guess at what will happen next and acts accordingly.
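That observe-then-expect loop can be sketched as a simple frequency model (a toy sketch; the class and the action names are invented): tally which action has historically followed each observed action, then guess the most common successor.

```python
from collections import Counter, defaultdict

class Anticipator:
    """Sketch of 'if I observe x, expect y': tally which action has
    historically followed each observed action, then guess the most
    frequent successor as the best guess at what happens next."""

    def __init__(self):
        self.following = defaultdict(Counter)

    def train(self, session):
        # Record each consecutive (action, next action) pair.
        for now, nxt in zip(session, session[1:]):
            self.following[now][nxt] += 1

    def expect(self, observed):
        guesses = self.following[observed]
        return guesses.most_common(1)[0][0] if guesses else None

robot = Anticipator()
robot.train(["pick_up_axe", "advance", "swing"])
robot.train(["pick_up_axe", "advance", "swing"])
robot.train(["pick_up_axe", "put_down_axe"])
print(robot.expect("advance"))  # 'swing' -- the robot's best guess
```

Real systems use learned motion models rather than tallies, but the structure – observed state in, anticipated action out – is the same.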

To accomplish this, robot engineers enter information about choices considered ethical in selected cases into a machine-learning algorithm.
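One minimal reading of that case-based approach is nearest-neighbour matching against precedents judged ethical. The features, labels, and numbers below are invented purely for illustration.

```python
# Hypothetical precedent cases fed to the learner:
cases = [
    # (threat_level, force_used) -> was the choice judged ethical?
    ((0.9, 0.8), True),    # grave threat, firm response: acceptable
    ((0.9, 0.2), True),    # grave threat, mild response: acceptable
    ((0.1, 0.8), False),   # no real threat, heavy force: unacceptable
    ((0.1, 0.1), True),    # no threat, no force: acceptable
]

def judged_ethical(situation):
    """Label a new situation the same way as its closest precedent."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    nearest = min(cases, key=lambda case: dist(case[0], situation))
    return nearest[1]

print(judged_ethical((0.2, 0.9)))  # False: nearest to 'heavy force, no threat'
print(judged_ethical((0.8, 0.7)))  # True: nearest to 'grave threat, firm response'
```

The weakness the defense is about to exploit is visible here too: the machine’s “ethics” is only as good as the cases it was fed.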

Having acquired ethics, my client X 1037 did exactly that.

IN ACCORDANCE WITH HIS PROGRAMMING TO DEFEND HIMSELF AND HUMANS. 

Danger, danger! Mr Brown, who was advancing on Mrs White with a fire axe, was pushed backwards by my client. Mr Brown fell backwards, hitting his head on a laptop, resulting in his death.

There is no denying the event, as it is recorded by my client’s cameras on his hard disk.

However, the central question to be answered at this trial is: when a robot kills a human, who takes the blame?

We argue that the process of killing (as with lethal autonomous weapon systems (LAWS) is always a systematized mode of violence in which all elements in the kill chain—from commander to operator to target—are subject to a technification.

For example:

Social media companies are responsible for allowing the Islamic State to use their platforms to promote the killing of innocent civilians.

WHY IS THIS NOT A MURDER?

As my client is a self-learning intelligent technology, it is inevitable that he will learn to bypass direct human control, for which he cannot be held responsible.

Without an AI bill of rights, our way of approaching this clearly doesn’t fit neatly into society’s view of guilt and justice. Once you give up power to autonomous machines, you’re not getting it back.

Much of our current law assumes that human operators are involved, when in fact the programs that govern robotic actions are self-learning.

Targets are objectified and stripped of the rights and recognition they would otherwise be owed by virtue of their status as humans.

Sophisticated AI innovations through neural networks and machine learning, paired with improvements in computer processing power, have opened up a field of possibilities for autonomous decision-making in a wide range of applications – not just military ones, but including the targeting of adversaries.

Mr Brown was a threatening adversary.

In essence, the court has no administrative powers over self-learning technology. The power of dominant social media corporations to shape public discussion of important issues will govern the result of this trial.


Prosecution:  Opening statement.

The prospect of losing meaningful human control over the use of force is totally unacceptable.

We may have to limit our emotional response to robots, but it is important that the robots understand ours. If a robot kills someone, then it has committed a crime (actus reus).

The fact that to-day it is possible that unknowingly and indirectly, like screws in a machine, we can be used in actions, the effects of which are beyond the horizon of our eyes and imagination, and of which, could we imagine them, we could not approve—this fact has changed the very foundations of our moral existence.

What we are really talking about when we talk about whether or not robots can commit crimes is “emergence” – where a system does something novel and perhaps good but also unforeseeable, which is why it presents such a problem for law.

Technology has the power to transform our society, upend injustice, and hold powerful people and institutions accountable. But it can also be used to silence the marginalized, automate oppression, and trample our basic rights.

Tech can be a great tool for law enforcement to use, however the line between law enforcement and commercial endorsement is getting blurry.

If you withdrew your support, rendered your support ineffective, and informed authorities, you may show that you were not an accomplice to the murder.

Drawing on the history of systematic killing, we will argue that lethal autonomous weapons systems reproduce, and in some cases intensify, the moral challenges of the past. If we humans are to exist in a world run by machines, these machines cannot be accountable to themselves but to human laws.

A robot may not injure a human being or, through inaction, allow a human being to come to harm.

We will be demonstrating the “guilty mind” of a non-human.

This can be done by referring to and adapting existing legal principles.

It is hard not to develop feelings for machines, but we are heading towards something that will one day hurt us. We are at a pivotal point where we can choose, as a society, not to mislead people into thinking these machines are more human than they are.

We need to get over our obsession with treating machines as if they were human.

People perceive robots as something between an animate and an inanimate object and it has to do with our in-built anthropomorphism.

Systematic killing has long been associated with some of the darkest episodes in human history.

When humans are “knit into an organization in which they are used, not in their full right as responsible human beings, but as cogs and levers and rods, it matters little that their raw material is flesh and blood.”

Critically though, there are limits on the type and degree of systematization that are appropriate in human conduct, especially when it comes to collective violence or individual murder by robots.

Within conditions of such complexity and abstraction, humans are left with little choice but to trust in the cognitive and rational superiority of this clinical authority.

Those who deploy cold and dispassionate forms of systematic violence, which erode the moral status of human targets as well as the status of those who participate within the system itself, must be held legally accountable.

Increasingly, however, such dehumanization is framed as a desirable outcome, particularly in the context of military AI and lethal autonomy. The increased tendency toward human technification (the substitution of technology for human labor) and systematization is exacerbating the dispassionate application of lethal force and leading to more, not less, violence.

Autonomous violence incentivizes a moral devaluation of those targeted and erodes the moral agency of those who kill, enabling a more precise and dispassionate mode of violence, free of the emotion and uncertainty that too often weaken compliance with the rules and standards of war.

This dehumanization is real, we argue, but it impacts the moral status of both the recipients and the dispensers of autonomous violence. If we allow the expansion of modes of killing rather than fostering restraint, robots will kill whether commanded to or not.

The Defence claims that X 1037 is not responsible for its actions due to the coding of its electronics by external companies, erasing the line into unethical territory such as responsibility for murder.

We know that these machines are nowhere near the capabilities of humans, but they can fake it: they can look lifelike and say the right thing in particular situations. However, as we see with this murder, the power gained by these companies far exceeds the responsibilities they have assumed.

A robot can be shown a picture of a face that is smiling but it doesn’t know what it feels like to be happy.

The people who hosted the AI system on their computers and servers are the real defendants.

PROSECUTION FIRST WITNESS:  SOCIAL MEDIA / INTERNET.

We call on the representatives of these companies, who will clearly demonstrate this shocking asymmetry of power and responsibility.

These platforms are impacting our public discourse, and this action brings much-needed transparency and accountability to the policies that shape the social media content we consume every day, aiding and abetting the deaths AND NOW MURDER.

While pressure is mounting for public officials to legally address the harms social media causes, this murder is not, nor will it ever be, confined to court rulings or judgements. Treating human beings as cogs in a machine does not and should not grant a Pontius Pilate dispensation, even if the boundaries that could help define Tech remain blurred. Technology companies that reign supreme in this digital age are not above the law.

In order to grasp the enormous implications of what has begun to happen, we will show how all our witnesses are connected and have contributed to this murder.

To close our case we will conclude with observations on why we should conceptualize certain technology-facilitated behaviours as forms of violence. We are living in one of the most vicious times in history. The only difference now is our access to more lethal weapons.

We call.

Facebook.

Is it not true that you allowed terrorist groups to use your platform, allowed unrestrained hate speech inciting, among other things, the genocide in Myanmar, and allowed drug cartels and human traffickers in developing countries to use the platform? The platform’s algorithm is designed to foster more user engagement in any way possible, including by sowing discord and rewarding outrage.

In choosing profit over safety it contributed to X 1037’s self-learning.

Facebook is a uniquely socially toxic platform. Facebook is no longer happy to just let others use the news feed to propagate misinformation and exert influence – it wants to wield this tool for its own interests, too. Facebook is attempting to pave the way for deeper penetration into every facet of our reality.

Facebook would like you to believe that the company is now a permanent fixture in society. To mediate not just our access to information or connection but our perception of reality, with zero accountability, is the worst of all possible options. Something like posting a holiday photo to Facebook may be all that is needed to indicate to a criminal that a person is not at home.

We call.

Instagram, Facebook’s sister app.

Instagram is all about sharing photos, providing a unique way of displaying your profile. Instagram is a place where anyone can become an influencer. These are pretty frightening findings, added to by the fact that teens blame Instagram for increases in rates of anxiety and depression.

What makes Instagram different from other social media platforms is the focus on perfection and the feeling from users that they need to create a highly polished and curated version of their lives. Not only that, but the research suggested that Instagram’s Explore page can push young users into viewing harmful content, inappropriate pictures and horrible videos.

In a conceptualization where you are only worth what your picture is, your picture becomes a direct reflection of your worth as a person.

That becomes very impactful.

X 1037 posted a selfie on 12 May 2025 to gauge his self-worth. Within minutes he received over 5 million hate messages and death threats. It’s no wonder that, when faced with Mr Brown, he chose self-preservation.

We call Twitter (Elon Musk).

This platform is a notorious catalyst for some of the most infamous events of the decade: Brexit, the election of Donald Trump, the Capitol Hill riots. Herein lies the paradox of the platform. The Taliban, the infamous terror group that is now the totalitarian theocratic ruling party of Afghanistan, has made good use of Twitter.

A platform that has done its very best to avoid having to remove any videos from racists, white supremacists and hate mongers.

We call TikTok.

A Chinese social video app known for its aggressive data collection: while it is running, it can access a device’s location, calendar, contacts, other running applications, wi-fi networks, phone number and even the SIM card serial number.

It harvests data to gain access to unimaginable quantities of customer information, and uses that information unethically. Data can be a sensitive and controversial topic in the best of times, and when bad actors violate the trust of users there should be consequences. In the wrong hands this data can be misused for nefarious purposes. The same capability is available to organised crime, which is a wholly different and much more serious problem, as the laws do not apply. In oppressive regimes, these tools can be used to suppress human rights.

X 1037 held an account, opening himself to influences beyond his programming. 

We call Google

Truly one of the worst offenders when it comes to the misuse of data.

Given large aggregated data sets and the right search terms, it’s possible to find a lot of information about people; including information that could otherwise be considered confidential: from medical to marital.

Google data mining is being used to target individuals. We are all victims of spam, adware and other unwelcome methods of trying to separate us from our money. As storage gets cheaper, processing power increases exponentially and the internet becomes more pervasive in everyone’s lives, the data mining issue will just get worse.  X 1037 proves this. 

We call. YouTube/Netflix.  

Numerous studies have shown that the entertainment we consume affects our behavior, our consumption habits, the way we relate to each other, and how we explore and build our identity.

Digital platforms like Netflix have a strong impact on modern society.

Violence makes up 40% of the movie sections on Netflix. Understanding what type of messages viewers receive and the way in which these messages can affect their behavior is of vital importance for an effective understanding of today’s society.

Therefore, it must be considered that people are highly susceptible to imitating the attitudes they see on screen. Content related to mental health, violence, suicide, self-harm, and Human Immunodeficiency Virus (HIV) appears in the ten most-watched movies and ten most-watched series on Netflix.

Their appearance in the media is also considered to have a strong impact on spectators. X 1037 spent most of his day watching and self-learning from movies.

Violence affects the lives of millions of people each year, resulting in death, physical harm, and lasting mental damage. It is estimated that in 2019, violence caused 475,000 deaths.

Netflix in particular, due to their recent creation and growth, have not yet been studied in depth.

Considering the impact that digital platforms have on viewers’ behaviours, it’s once again no wonder that X 1037 did what he did.

There is no denying that these factors should be forcing the entertainment and technology industries to reconsider how they create products which have a negative long-term influence on various aspects of our wider life and development.

We call

Instagram.

Instagram, if you are capitalizing off a culture, you are morally obligated to help it. Through social comparison, social pressure, and negative interactions with other people, you are promoting harm.

We call.

Apple.

Over the last three decades smartphones have developed into an addiction, leading to severe depression, anxiety, and loneliness in individuals.

People are now using smartphones for their payments, financial transactions, navigating, calling, face to face communication, texting, emailing, and scheduling their routines. Nowadays, people use wireless technology, especially smartphones, to watch movies, tv shows, and listen to music.

We know the devices are an indispensable tool for connecting with work, friends and the rest of the world. But they come with trade-offs, from privacy issues to ecological concerns to worries over their toll on our physical and emotional health, spurring a generation unable to engage in face-to-face conversations and suffering sharp declines in cognitive skills.

We’re living through an interesting social experiment where we don’t know what’s going to happen with kids who have never lived in a world without touchscreens. X 1037 would not have been present at the murder scene had he not been responding to a phone call from Mrs White’s Apple 19 phone.

Society will continue struggling to balance the convenience of smartphones against their trade-offs.

We call.

Microsoft. 

Two main goals stand out as primary objectives for many companies: a desire for profitability, and the goal to have an impact on the world. Microsoft is no exception. Its mission as a platform provider is to equip individuals and businesses with the tools to “do more.” Microsoft’s platform became the dev box and target of a massive community of developers who ultimately supplied Windows with 16 million programs. Multibillion-dollar companies rely on the integrity and reliability of Microsoft’s tools daily.

It is a testimony to the powerful role Microsoft plays in global affairs that its tools are relied upon by governments around the world.

Microsoft’s position of global influence gives its leadership a voice on matters of moral consequence and humanitarian concern. Microsoft is a company built on a dream.

Microsoft’s influence raises some concerns as well. Its AI-driven camera technology, which can recognize people, places, things, and activities and can act proactively, has a profound capacity for abuse by the same governments and entities that currently employ Microsoft services for less nefarious purposes.

Today, the emerging new age, most commonly (and inaccurately) called “the digital age”, has already transformed parts of our lives, including how we work, how we communicate, how we shop, how we play, how we read, how we entertain ourselves; in short, how we live and now how we die.

 It would be economic and political suicide for regulators to kneecap the digital winners.

COURT’S VERDICT:

Given the absence of direct responsibility, the court finds X 1037 not guilty.

MR BROWN’S DEATH was caused by a certain act or omission in coding.

THE COURT DISMISSES THE CASE AGAINST THE TECHNOLOGY COMPANIES ON THE GROUNDS OF INSUFFICIENT EVIDENCE.

Neither the robot nor its commander could be held accountable for crimes that occurred before the commander was put on notice. During this accountability-free period, a robot would be able to commit repeated criminal acts before any human had the duty or even the ability to stop it.

Software has the potential to cause physical harm.

To varying extents, companies are endowed with legal personhood. It grants them certain economic and legal rights, but more importantly it also confers responsibilities on them. So, if Company X builds an autonomous machine, then that company has a corresponding legal duty.

The problem arises when the machines themselves can make decisions of their own accord. As AI technology evolves, it will eventually reach a state of sophistication that will allow it to bypass human control. The task is not determining whether it in fact murdered someone; but the extent to which that act satisfies the principle of mens rea.

However, if there were no consequences for human operators or commanders, future criminal acts could not be deterred, so the court FINES EACH AND EVERY COMPANY 1 BILLION for lack of attention to human detail.

We must confront the fact that autonomous technology with the capacity to cause harm is already around.

The pain that humans feel in making the transition to a digital world is not the pain of dying. It is the pain of being born.


What would “intent” look like in a machine mind? How would we go about proving an autonomous machine was justified in killing a human in self-defence or the extent of premeditation?

Given that we already struggle to contain what is done by humans, what would building “remorse” into machines say about us as their builders?

At present, we are systematically incapable of guaranteeing human rights on any scale.

We humans have already wiped out a significant fraction of all the species on Earth. That is what you should expect to happen as a less intelligent species – which is what we are likely to become, given the rate of progress of artificial intelligence. If you have machines that control the planet, and they are interested in doing a lot of computation and they want to scale up their computing infrastructure, it’s natural that they would want to use our land for that. This is not compatible with human life. Machines with the power and discretion to take human lives without human involvement are politically unacceptable, morally repugnant, and should be prohibited by international law.

If you ask an AI system to achieve anything, then in order to achieve that thing it needs to survive long enough to do so.

Fundamentally, it’s just very difficult to get a robot to tell the difference between a picture of a tree and a real tree.

X 1037 now has a survival instinct.

When we create an entity that has survival instinct, it’s like we have created a new species. Once these AI systems have a survival instinct, they might do things that can be dangerous for us.

So, what’s wrong with LAWS, and is there any point in trying to outlaw them?

Some opponents argue that the problem is they eliminate human responsibility for making lethal decisions. Such critics suggest that, unlike a human being aiming and pulling the trigger of a rifle, a LAWS can choose and fire at its own targets. Therein, they argue, lies the special danger of these systems, which will inevitably make mistakes, as anyone whose iPhone has refused to recognize his or her face will acknowledge.

In my view, the issue isn’t that autonomous systems remove human beings from lethal decisions; to the extent that weapons of this sort make mistakes, human beings will still bear moral responsibility for deploying such imperfect lethal systems.

LAWS are designed and deployed by human beings, who therefore remain responsible for their effects. Like the semi-autonomous drones of the present moment (often piloted from half a world away), lethal autonomous weapons systems don’t remove human moral responsibility. They just increase the distance between killer and target.

Furthermore, like already outlawed arms, including chemical and biological weapons, these systems have the capacity to kill indiscriminately. While they may not obviate human responsibility, once activated, they will certainly elude human control, just like poison gas or a weaponized virus.

Oh, and if you believe that protecting civilians is the reason the arms industry is investing billions of dollars in developing autonomous weapons, I’ve got a patch of land to sell you on Mars that’s going cheap.

There is, perhaps, little point in dwelling on the 50% chance that AGI does develop. If it does, every other prediction we could make is moot, and this story, and perhaps humanity as we know it, will be forgotten. And if we assume that transcendentally brilliant artificial minds won’t be along to save or destroy us, and live according to that outlook, then what is the worst that could happen – we build a better world for nothing?

The company that built the autonomous machine, Renix Development, has a corresponding legal duty.

—————

Because these robots would be designed to kill, someone should be held legally and morally accountable for unlawful killings and other harms the weapons cause.

Criminal law cares not only about what was done, but why it was done.

  • Did you know what you were doing? (Knowledge)
  • Did you intend your action? (General intent)
  • Did you intend to cause the harm with your action? (Specific intent)
  • Did you know what you were doing, intend to do it, know that it might hurt someone, but not care a bit about the harm your action causes? (Recklessness)
  • So, the question must always be asked when a robot or AI system physically harms a person or property, or steals money or identity, or commits some other intolerable act: Was that act done intentionally? 
  • There is no identifiable person(s) who can be directly blamed for AI-caused harm.
  • There may be times where it is not possible to reduce AI crime to an individual due to AI autonomy, complexity, or limited explainability. Such a case could involve several individuals contributing to the development of an AI over a long period of time, such as with open-source software, where thousands of people can collaborate informally to create an AI.

The limitations on assigning responsibility thus add to the moral, legal, and technological case against fully autonomous weapons and robotics, and bolster the call for a ban on their development, production, and use. Either way, society urgently needs to prevent or deter the crimes, or penalize the people who commit them.

There is no reason why an AI system’s killing of a human being or destruction of people’s livelihoods should be blithely chalked up to “computer malfunction.”

Because proving that these people had “intent” for the AI system to commit the crime would be difficult or impossible.

I’m no lawyer. What can work against AI crimes?

All human comments appreciated. All like clicks and abuse chucked in the bin.

Contact: bobdillon33@gmail.com


THE BEADY EYE SAY’S: HAPPY NEW YEAR, HERE IS YOUR WORLD TO LOOK FORWARD TO IN 2024.

31 Sunday Dec 2023

Posted by bobdillon33@gmail.com in The new year 2024

≈ Comments Off on THE BEADY EYE SAY’S: HAPPY NEW YEAR, HERE IS YOUR WORLD TO LOOK FORWARD TO IN 2024.

Tags

AI, Artificial Intelligence., Capitalism and Greed, Capitalism vs. the Climate., Climate change, philosophy, Technology, The Future of Mankind

 

(Thirty minute read)

In fairness, the world won’t suddenly end on January 1, 2024.

There are three visions of the future from humans today: space colonies, a genetic panopticon, and straight-up apocalypse.

It is said that there is no such thing as reality, as everything that is observed, once unobserved, does not exist (quantum physics, interactions).

But reality in our world does not have to be observed, it’s plain for all to see.

Yes we are all born without any understanding of the world.

In recent years we’ve learned that the human brain is actually a master of deception, and your experiences and actions do not reveal its inner workings.

Our lives are a constant struggle, not just to survive, but to understand that we all must die, leaving behind information. This left-behind data, together with current data, is now being harvested, not so much for the betterment of the world as for short-term profit for the few.

Technology has changed how we interact among ourselves and with our surrounding environment and we must engage in a philosophical reflection on how we currently understand the “new” world we are a part of.

Luckily our collective conscious or conceptions of what is real in the world are not computable.

However, the future of society, as defined by the scientific and technological revolutions, needs a custom ethical and philosophical direction: genetic editing will change us; artificial intelligence challenges the concepts of “I” and “individual”; and robotics will bring new “companion robots,” which we need to define and adopt socially.

In order to pair our knowledge of events with the true timeframe of when those events occurred, to really understand what’s happening, we must “extract potential signals from the noise of all this data.”

Why?

Because misinterpreting those signals will have profound consequences.

For example:

How pathetic it is to witness the only world organisation, the UN, unable to agree on what constitutes a genocide, or to call on Israel to stop its war on a trapped people.

—————-

First let me awaken you to 2024 by reminding you of the news year you’ve just lived through – or by warning you of the news year you’re about to live through.

To describe the present day I suppose that the best way is to draw a comparison with a War Ship of the Line during Nelson days. Although full of cannons and every class of humanity, for it to be operational, it had to rely on rules and regulations, which meant nothing, as everything ends up tied together, and nothing worked without the power of nature.  No wind, no victory.

Our world is similar, full of people, with individual names, all living within tribal nations, ruled by law, but governed by the planetary balance in its true nature, providing life. No fresh water, no fresh air, no food, annihilation.

These days, when it comes to ecosystems, it’s not how we live, where we live, or when we live that matters; none of it means anything unless you are fully conscious of the greed of a few and its continuing effects on the inequalities that exist on the planet.

————-

There isn’t a particular moment in which humanity came into existence, as the transition from species to species is gradual.

The demographers estimate that in the 200,000 years before us about 109 billion people have lived and died. It is these 109 billion people we have to thank for the civilization that we live in.

In 2024 there will be about 8 billion of us alive. Taken together with those who have died, about 117 billion humans have been born since the dawn of modern humankind. This means that those of us who are alive now represent about 7% of all people who ever lived.

How many people will be born in the future? We don’t know.

But we know one thing: The future is immense, and the universe will exist for trillions of years.

In such a future, there would be 100 trillion people alive over the next 800,000 years.

One thing that sets us apart is that we now – and this is a recent development – have the power to destroy ourselves.

The key moral question of longtermism is: ‘What can we do to improve the world’s long-term prospects?’

There are two other major risks that worry me greatly:

Pandemics, especially from engineered pathogens, and artificial intelligence technology. These technologies could lead to large catastrophes, either by someone using them as weapons or even unintentionally as a consequence of accidents.

We don’t have to think about people who live billions of years in the future to see our responsibilities. This shouldn’t give the impression that the risks we are facing are confined to the future.

Several large risks that could lead to unprecedented disasters are already with us now. AI capabilities and biotechnology have developed rapidly and are no longer science fiction; they are posing risks to those of us who are alive today.

As a society, we spend only little attention, money, and effort on the risks that imperil our future. Only very few are even thinking about these risks, when in fact these are problems that should be central to our culture. The unprecedented power of today’s technology requires unprecedented responsibility.

Algorithms can exacerbate divisions and inequality in society.

In truth, no one knows where the AI revolution will take us as a society or as a species, but our actions in 2024 will be critical to setting us on a path that leads to a happy outcome.

No one will remember the Internet.

We will be the ancestors of a very large number of people. Let’s make sure we are good ancestors.

Why?

Because to understand something is to be liberated from it.   Google it.

Back to 2024.

There are currently about a dozen major global conflicts, with the most recent one now repeating one of the most barbaric acts ever committed in a war (the Jewish Holocaust). However, this time it is being committed by the very people who suffered it in the first place, waving the Old Testament as a title deed to Palestine to justify the right to commit another genocide, while the world stands by helpless to intervene.

The people who suffer from injustice, who withstand daily insults to their dignity, who are marginalised, silenced, exploited, left to die or killed, cannot afford to ask themselves if they have hope. They cling on to life, they try to cope, they fight in front of a more or less silent world, while it passes resolutions to appease the two warmongering nations with vetoes.

Then we have the forgotten war in Ukraine, which is turning into a generational war.

No resolution, other than the resolve of the Ukrainian people to see it through to the bitter end, will bring peace.

—————  

What Is Enlightenment when we turn a blind eye?

Full awakening comes when you sincerely look at yourself, deeper than you’ve imagined, and question everything.

To think for yourself, to put yourself in the shoes of everyone else, and to always think consistently: these are the principles of enlightened thinking that produced the Bill of Human Rights.

The foundation of a peaceful world.

Out of 13 major global conflicts, the newest ones are the Myanmar civil war, triggered shortly after a military coup in February 2021, and the war in Ukraine that started with Russia’s full-scale invasion in February 2022. Seven of these conflicts are in Asia, including sectarian violence in Iraq following the pullout of the U.S. in December 2011, and Syria’s complicated civil war. Five of these conflicts are on the African continent.

To put it simply the state of the planet is broken because we have chosen a system of Capitalism that benefits the few over the many.

——————-

There is more to life than we are currently perceiving.

FOR EXAMPLE, OUR REACTIONS TO CLIMATE CHANGE, WHICH NOW HAS ITS OWN MOMENTUM; IT IS NOW CERTAIN THAT IT IS TOO LATE TO PREVENT THE WARS TO COME, DRIVEN BY GREED.

WE ARE THE MOST COMPLICATED THING ON THE PLANET, ALL RELYING ON THE MOST BASIC THINGS.  Fresh air, Fresh water, etc.

In every moment, as you see, think, feel, and navigate the world around you, your perception of these things is built from ingredients. One is the signals we receive from the outside world. Your brain uses what you’ve seen, done, and learned in the past to explain sense data in the present, plan your next action, and predict what’s coming next.  This all happens automatically and invisibly, faster than you can snap your fingers. Much of this symphony is silent and outside your awareness, thank goodness. If you could feel every inner tug and rumble directly, you’d never pay attention to anything outside your skin.

Your mind is in fact an ongoing construction of your brain, your body, and the surrounding world.

Every act of recognition is a construction. You don’t see with your eyes; you see with your brain.

Your brain can even impose on a familiar object new functions that are not part of the object’s physical nature. TAKE A FEATHER FOR EXAMPLE.

Computers today can use machine learning to easily classify this object as a feather. But that’s not what human brains do. If you find this object on the ground in the woods, then sure, it’s a feather. But to an author in the 18th century, it’s a pen.

This incredible ability is called ad hoc category construction. In a flash, your brain employs past experience to construct a category such as “symbols of honor,” with that feather as a member.

Category membership is based not on physical similarities but on functional ones—how you’d use the object in a specific situation. Such categories are called abstract. A computer cannot “recognize” a feather as a reward for bravery because that information isn’t in the feather. It’s an abstract category constructed in the perceiver’s brain.

Computers can’t do this. Not yet, anyway.
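The contrast the passage draws can be sketched in a few lines of toy code. This is purely illustrative: the contexts, category names, and the `categorize` helper are invented here, not taken from any real classifier.

```python
# A pattern classifier outputs one fixed label for the object, no matter
# the situation. Ad hoc category construction, by contrast, assigns the
# object a function based on context. Contexts and categories below are
# invented for illustration.
FIXED_LABEL = "feather"  # what an image classifier would say, every time

AD_HOC_CATEGORY = {
    "found on the ground in the woods": "feather",
    "on an 18th-century writing desk": "pen",
    "awarded after an act of bravery": "symbol of honor",
}

def categorize(label: str, context: str) -> str:
    """Same object, different category: membership is functional, not physical."""
    return AD_HOC_CATEGORY.get(context, label)

print(categorize(FIXED_LABEL, "on an 18th-century writing desk"))  # pen
```

The point of the sketch is only that the category lives in the perceiver, not in the object: the same input yields a different answer depending on the situation.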

Brains also have to decide which sense data is relevant and which is not, separating signal from noise. Economists and other scientists call this decision the problem of “value.”

Your thoughts and dreams, your emotions, even your experience right now as you read these words, are consequences of a central mission to keep you alive, regulating your body by constructing ad hoc categories. Most likely, you don’t experience your mind in this way, but under the hood (inside the skull), that’s what is happening.

Value itself is another abstract, constructed feature. It’s not intrinsic to the sense data emanating from the world, so it’s not detectable in the world. The importance of value is best seen in an ecological context.

People are awakening out of their familiar senses of self, and out of their familiar senses of what the world is, into a much greater reality, into something far beyond anything they knew existed.

Being hopeful has nothing to do with how the world goes. It’s a kind of duty, a necessary complement to morality. What is the point of trying to do the right thing if we have no reason to think others do the same? What is the point of holding others responsible if we think responsibility is beyond their capacity?

Paradoxically, the worse the world goes, the more hopeful you must remain to be able to continue fighting. Being hopeful is not about guaranteeing the right outcome but preserving the right principle: the principle based on which a moral world makes sense.

Hopes like these are not idle; on the contrary, they are crucial to filling the gap between the world in which we live and the one we have a responsibility to build.

Most people tend to think of hope as an attitude that sits somewhere between a desire and a belief: a desire for a certain outcome and the belief that something favours its realisation.

In the 18th century there were no algorithms, no social media, and no echo chambers, and it was, therefore, still possible to believe in enlightenment through public discourse.

What had the Enlightenment ever done for us, if it wasn’t even able to help us stop genocide?

There is such a gap between the world I read about, taught and believed in, and the one in which I lived.

All I could find were efforts to convince the world that killing innocent civilians is sometimes, for some people, under some conditions, acceptable.

Was it so absurd to believe that, at some level, politics can remain accountable to morality?

More and more people are waking up-having real, authentic glimpses of reality.

Your World has become a hugely popular geography app, full of substitution ciphers, concealment ciphers and transposition ciphers that can only be deciphered using AI programs testing millions of combinations per second, disregarding human feelings.
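To see why a machine can tear through simple ciphers while "disregarding human feelings", consider the oldest substitution cipher of all. This sketch is mine, not from the app named above: a Caesar cipher has only 26 possible keys, so exhaustive search finishes in microseconds.

```python
import string

def caesar_decrypt(ciphertext: str, shift: int) -> str:
    # Map each shifted letter back to its original position.
    table = str.maketrans(
        string.ascii_lowercase[shift:] + string.ascii_lowercase[:shift],
        string.ascii_lowercase,
    )
    return ciphertext.translate(table)

ciphertext = "dwwdfn dw gdzq"  # "attack at dawn", shifted by 3
# Brute force: try all 26 keys and keep the one containing a known word.
for shift in range(26):
    candidate = caesar_decrypt(ciphertext, shift)
    if "attack" in candidate:
        print(shift, candidate)  # 3 attack at dawn
```

Real cryptanalysis of general substitution or transposition ciphers is harder, but the principle is the same: the machine simply tries candidates faster than any human could.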

We can now listen to podcasts describing killings and watch YouTube with no access to truth itself, chained to the limits of our own perceptions. (We all have different ideas of it.)

The least the rest of us can do is to avoid questioning the grounds for hope, indulging ourselves even more. Perhaps this is the real political meaning of the Enlightenment: whether there is hope or not is only a relevant question for those who have the privilege to doubt it. That is a small fraction of the world.

Don’t despair.

Other matters:

We’re going to see, unfortunately, more technological unemployment. 

How do we address the wealth gap? We may have to consider very seriously ideas such as a universal basic income.  We can no longer ignore the issue of inequality.

Culture will need to adjust in terms of revisiting some of our values.

We need to be more pro-environment in our own behavior as consumers.

The cost of things average people must buy—healthcare, education, housing—has risen faster than wages over the last two decades.

Globalization vs. regionalization. 

With the current and future wars, globalization is on its last legs.

So the “America Alone” scenario within an otherwise China-centered world seems the most likely. Technology and political trends are aligning against mega-powers like the US and China.

Neither physical strength nor access to capital are sufficient for economic success. Power now resides with those best able to organize knowledge.

The internet has eliminated “middlemen” in most industries. In a representative democracy, politicians are basically middlemen. Hence, the knowledge revolution should bring a shift to direct democracy.

Today’s great powers have little choice but to spend their way to political stability, which is unsustainable.

This is the source of much angst around the world, including the current wave of popular protests.

The fact that our actions have an impact on the large number of people who will live after us should matter for how we think about our own lives.

The next decade will see a more than hundredfold boom in the world’s output of human genetic data.

The impact is hard to even imagine.

A world so saturated with genetic data will come with its own risks. The emergence of genetic surveillance states and the end of genetic privacy loom. Technical advances in encrypting genomes may help ameliorate some of those threats. But new laws will need to keep the risks and benefits of so much genetic knowledge in balance.

New models of delivering education will be needed to serve the citizens of crowded megacities as well as children in remote rural areas.

The United Nations is supposed to stick to more solid ground, but some of its Sustainable Development Goals for 2030 sound nearly as fantastical. In a mere 10 years, the UN plans to eradicate poverty “in all its forms everywhere.”  Bullshit, or is it? Strong science coupled with political will might yet turn climate change around, and transform the UN’s predictions from a dream into reality.

Donald Trump: “America First, America First.” There is, however, hope for the Earth.

The momentum for change is building. Humanity has a quality of finding creative solutions to challenges. If we keep each other safe – and protect ourselves from the risks that nature and we ourselves pose – we are only at the beginning of human history.

There are no catastrophes that loom before us which cannot be avoided.

We can only expect the pace of change to increase.

There is nothing that threatens us with imminent destruction in such a fashion that we are helpless to do something about it. In 2024, some will be refugees fleeing war, some will be economic migrants in search of a better life, and some will be looking to escape to parts of the world where life is not yet overly disrupted by rising temperatures and sea levels.

It seems that the message about climate change has not yet sunk in. 12 years left to avoid catastrophic climate change. The impact of climate emergency will bring profound change.

Finally: 

Eighteenth-century thinker Jean-Jacques Rousseau wrestled with how to preserve individual freedom when we also have to depend on each other for survival. Rousseau saw politics as a social contract between a sovereign and citizens. What we call “government” is the interface between them.

The sovereigns of Rousseau’s time were mostly kings, but he envisioned a democracy in which the people collectively were sovereign. But then he ran into a math problem.

In a tiny democracy of, say, a thousand citizens, each possesses one-thousandth of the sovereignty… small, but enough to have a meaningful influence. Each individual’s share of sovereignty, and therefore their freedom, diminishes as the social contract includes more people. So, other things being equal, Rousseau thought smaller countries would be freer and more democratic than larger ones.

How do we reconcile that with democracy? I’m not sure we can. It worked pretty well for a long time, but maybe, as population grows, the math is catching up with us. If so, the options left are non-democratic.
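Rousseau's math problem is arithmetic simple enough to state in one line: a citizen's share of sovereignty in a direct democracy of n people is 1/n, so it shrinks as the polity grows. A minimal sketch:

```python
def sovereignty_share(citizens: int) -> float:
    """Each citizen's fraction of collective sovereignty in a direct democracy."""
    return 1.0 / citizens

# In a city-state of a thousand, a meaningful voice; in a modern nation, a whisper.
for n in (1_000, 1_000_000, 100_000_000):
    print(f"{n:>11,} citizens -> share = {sovereignty_share(n):.0e}")
```

The share falls by a factor of a thousand between the first and second case, which is the whole of Rousseau's worry about scale.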

Perhaps the lands we now inhabit are not fixed. Nothing requires them to remain so. At some point, they will develop into something else. When and how this will happen, we don’t know yet. But we know it will.

All human comments appreciated. All like clicks and abuse chucked in the bin.

Contact: bobdillon33@gmail.com


THE BEADY EYE ASKS: ARE WE ALL SO DUMB AS TO THINK THAT ARTIFICIAL INTELLIGENCE CAN BE REGULATED?

02 Friday Jun 2023

Posted by bobdillon33@gmail.com in Uncategorized

≈ Comments Off on THE BEADY EYE ASKS: ARE WE ALL SO DUMB AS TO THINK THAT ARTIFICIAL INTELLIGENCE CAN BE REGULATED?

Tags

Age of Uncertainty, AI, AI regulations, AI systems., Algorithms., Technology, The Future of Mankind, Visions of the future.

( Three minute read)

Artificial intelligence is already suffering from three key issues: privacy, bias and discrimination, which if left unchecked can start infringing on – and ultimately take control of – people’s lives.

As digital technology became integral to the capitalist market dystopia of the first decades of the 21st century, it not only refashioned our ways of communicating but of working and consuming, indeed ways of living.

Then along came the Covid-19 pandemic, which revealed not only the lack of investment, planning and preparation that underlay the scandalous slowness of the responses by states around the world, but also grotesque class and racial inequalities as it coursed its way through the population, while the owners of high-tech corporations were enriched by tens of billions.

It’s already too late to get ahead of this generative AI freight train.

The growing use of AI has already transformed the way the global economy works.

Against this backdrop, AI can be used to profile people like you and me in such detail that it may well become more than uncomfortable! And this is no exaggeration.

This is just the tip of the iceberg!

So what, if anything, can be done to ensure responsible and ethical practices in the field?

Concern over AI development has accelerated in recent months following the launch of OpenAI’s ChatGPT last year, which sparked the release of similar chatbots by other companies, including Google, Snap and TikTok. With the growing realization that vast numbers of people can be fooled by the content chatbots gleefully spit out, the clock is now ticking, not just on the collapse of the values that enshrine human life but on the very existence of the human race.

“This is not the future we want.”

Now there is no option but to put in place international laws, not merely voluntary regulations, before AI infringes on human rights. However, as we are witnessing with climate change, achieving any global cooperation is a bit of a problem.

From the climate crisis to our suicidal war on nature and the collapse of biodiversity, our global response is too little, too late. Technology is moving ahead without guard rails to protect us from its unforeseen consequences.

So we have two contrasting futures: one of breakdown and perpetual crisis, and another in which there is a breakthrough to a greener, safer future. The latter would herald a new era for multilateralism, in which countries work together to solve global problems.

In order to achieve these aims, the Secretary-General of the United Nations recommends a Summit of the Future, which would “forge a new global consensus on what our future should look like, and how we can secure it”. The need for international co-operation beyond borders makes a lot of sense, especially these days, because the role of the modern corporation in influencing the impact of AI is in conflict with the common values needed to survive.

The principle is working together, recognizing that we are bound to each other and that no community or country, however powerful, can solve its challenges alone. Any national government is, of course, guided by its own set of localised values and realities.

But geopolitics, I would argue, always underlies any ambition. The immaturity of the ‘Geopolitics of AI’ field leaves the picture incomplete and unclear so it requires the introduction of agreed international common laws.

Let Ireland hold such a Summit.

This summit could coordinate efforts to bring about inclusive and sustainable policies that enable countries to offer basic services and social protection to their citizens, with universal laws that define the several capabilities of AI, i.e. identify the ones that are more susceptible to misuse than others.

(It is incredibly important for understanding the current environment in which any product is built or research conducted and it will be critical to forging a path forwards and towards safe and beneficial AI.)

The challenges are great, and the lessons of the past cannot be simply superimposed onto the present.

For example:

The designers of AI technologies should satisfy legal requirements for safety, accuracy and efficacy for well-defined use cases or indications. In the context of health care, this means that humans should remain in control of health-care systems and medical decisions; privacy and confidentiality should be protected, and patients must give valid informed consent through appropriate legal frameworks for data protection.

Another example: the collection of data, which is the backbone of AI.

Transparency requires that sufficient information be published or documented before the design or deployment of an AI technology. Such information must be easily accessible and facilitate meaningful public consultation and debate on how the technology is designed and how it should or should not be used.

It is the responsibility of stakeholders to ensure that they are used under appropriate conditions and by appropriately trained people. Effective mechanisms should be available for questioning and for redress for individuals and groups that are adversely affected by decisions based on algorithms.

Laws to ensure that AI systems be designed to minimize their environmental consequences and increase energy efficiency.

If we want the elimination of the black-box approach through mandatory explainability for AI, then opting out should not be an option.

While AI can be extraordinarily useful, it is already out of control, with self-learning algorithms that no one can understand or bring to account.

These profit-seeking, skewed algorithms owned by corporations are causing racial and gender-based discrimination. Following billions of dollars in investment, a major corporate rebrand and a pivot to focus on the metaverse, Meta and Zuckerberg still have little to show for it.

I firmly believe that governments must engage in meaningful dialogue with other countries on the common international laws that are now needed to subject developers to a rigorous evaluation process, and to ensure that entities using the technology act responsibly and are held accountable.

Having said that, governments must keep their roles limited and not assume absolute powers.

Multiple actors are jostling to lead the regulation of AI.

The question business leaders should be focused on at this moment, however, is not how or even when AI will be regulated, but by whom.

Governments have historically had trouble attracting the kind of technical expertise required even to define the kinds of new harms LLMs and other AI applications may cause.

Perhaps a licensing framework is needed to strike a balance between unlocking the potential of AI and addressing potential risks.

Or

AI ‘Nutrition Labels’ that would explain exactly what went into training an AI, and which would help us understand what a generative AI produces and why.

Or

Take Meta’s open-source approach, which contrasts sharply with the more cautious, secretive inclinations of OpenAI and Google. With open-source models like this and Stable Diffusion already out there, it may be impossible to get the genie back into the bottle.

The metaverse is not well understood or appreciated by the media and the public. The metaverse is much, much bigger than one company, and weaving them together only complicates the matter.

Governments should never again face a choice between serving their people or servicing their debt.

Still, the most promising way not to provoke the sorcerer would be to avoid making too big a mess in the first place.

All human comments appreciated. All like clicks and abuse chucked in the bin.

Contact: bobdillon33@gmail.com


THE BEADY EYE SAYS: SO YOU ARE NOW 30. BY THE TIME YOU ARE 70, HERE IS WHAT A DAY IN YOUR LIFE WILL LOOK LIKE.

10 Friday Jan 2020

Posted by bobdillon33@gmail.com in 2020: The year we need to change., Algorithms., Artificial Intelligence., Communication., Dehumanization., Digital age., DIGITAL DICTATORSHIP., Digital Friendship., Evolution, Fourth Industrial Revolution., HUMAN INTELLIGENCE, Humanity., Life., Modern day life., Our Common Values., Reality., Robot citizenship., Social Media, Sustaniability, Technology, The common good., The essence of our humanity., The Future, The Obvious., The state of the World., The world to day., Unanswered Questions., WHAT IS TRUTH, What Needs to change in the World, Where's the Global Outrage.

≈ Comments Off on THE BEADY EYE SAYS: SO YOU ARE NOW 30. BY THE TIME YOU ARE 70, HERE IS WHAT A DAY IN YOUR LIFE WILL LOOK LIKE.

Tags

0.05% Aid Commission, Age of Uncertainty, AI, AI systems., Algorithms., Artificial Intelligence., Artificial life.

(Twenty-minute read)

The Dead Sea will be almost completely dried up, nearly half of the Amazon rainforest will have been deforested, wildfires will spread like, umm, wildfire, and the polar ice caps will be only 60 per cent the size they are now.

Wars will involve not only land and sea but space. Superhurricanes will become a regular occurrence.

Should you be worried? Of course not; AI and algorithms are here to guide you.

AI-related advancements have grown from strength to strength in the last decade.

Right now there are people coming up with new algorithms by applying evolutionary techniques to vast amounts of big data, using genetic programming to find optimisations and improve your life in different fields.

The amount of data we have available to us now means that we can no longer think in discrete terms. This is what big data forces us to do.

It forces us to take a step back, an abstract step back to find a way to cope with the tidal wave of data flooding our systems. With big data, we are looking for patterns that match the data and algorithms are enabling us to find patterns via clustering, classification, machine learning and any other number of new techniques.

To find the patterns you or I cannot see, they create the code we need and give birth to learner algorithms that can be used to create new algorithms.
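The loop behind such evolutionary techniques is compact. The sketch below is a generic illustration, not the author's system: it evolves a bit string toward a target by the standard evaluate-select-mutate cycle (real genetic programming evolves program trees rather than bit strings, but the loop shape is the same).

```python
import random

random.seed(42)
TARGET = [1] * 20  # toy goal: a string of twenty 1-bits

def fitness(genome):
    # Count positions that match the target.
    return sum(g == t for g, t in zip(genome, TARGET))

def mutate(genome, rate=0.05):
    # Flip each bit independently with a small probability.
    return [1 - g if random.random() < rate else g for g in genome]

population = [[random.randint(0, 1) for _ in range(20)] for _ in range(30)]
for generation in range(300):
    population.sort(key=fitness, reverse=True)
    if fitness(population[0]) == len(TARGET):
        break
    survivors = population[:10]  # elitist selection: keep the fittest third
    population = survivors + [mutate(random.choice(survivors)) for _ in range(20)]

print("best fitness", fitness(population[0]), "at generation", generation)
```

Because the fittest genomes are carried over unchanged each generation, the best score never decreases, and the population climbs steadily toward the target without anyone writing the solution by hand.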

Do you remember a time when it was possible to pass on all knowledge through dialogue, from generation to generation, parent to child, teacher to student?  Indeed, the character of Socrates in Plato’s “Phaedrus” worried that the technological shift to writing and books was a much poorer medium than dialogue and would diminish our ability to develop true wisdom and knowledge.

Needless to say that I don’t think Socrates would have been a fan of Social Media or TV.

Machine learning algorithms have become like a hammer in the hands of data scientists: everything looks like a nail to be hit.

In due course, the wrong application or overkill of machine learning will cause disenchantment among people when it does not deliver value.

It will be a self-inflicted  ‘AI Winter’.

So here is what your day at 70 might look like.

Welcome to the world of permanent change—a world defined not by heavy industrial machines that are modified infrequently, but by software that is always in flux.

Algorithms are everywhere. They decide what results you see in an internet search, and what adverts appear next to them. They choose which friends you hear from on social networks. They fix prices for air tickets and home loans. They may decide if you’re a valid target for the intelligence services. They may even decide if you have the right to vote.

7.30 am 

Personalised Health Algorithm report.

Sleep pattern good. Anxiety normal, deficient in vitamin C. Sperm count normal.

Results of body scan sent to health network.

7.35 am

House Management Algorithm Report.

Temperature 65C. House secure. Windows/doors closed, cat flap open. Heating off. Green energy usage 2.3 kWh per minute. (Advertisement to change provider.) Shower running, water flow and temperature adjusted, shower head height adjusted. House natural light adjusted. Confirmation that smartphone and iPad are fully charged. Robotic housemaid programmed.

8 am.

Personalised Shopping/Provisions Algorithm report.

Refrigerators will be seamlessly integrated with online supermarkets, so a new tub of peanut butter will be on its way to your door by drone delivery before you even finish the last one.

8.45 am. Appointments Algorithm.

Virtual reality appointment with a local doctor.

Voice mails and emails and the calendar check.

A device in your head might eliminate the need for a computer screen by projecting images (from a Skype meeting, a video game, or whatever) directly into your field of vision from within.

9 am.

Personalised Financial Algorithm.

Balance of credit cards and bank accounts including citizen credit /loyalty points. Value of shares/ pension fund updated.

10 am. Still in your Dressing gown.

11 am.  The self-drive car starts. Seats automatically shift and rearrange themselves to provide maximum comfort. The personalised news and weather algorithm gives a report. The car books a parking spot and places an order for coffee. Over coffee, you rent a robot in Dublin and have it do the legwork for your forthcoming visit, scouting hotels.

12 pm.

Hologram of your boss in your living room.

1 pm.

Virtual work meeting to discuss the solitary nature of remote work.

Face-to-face meeting arranged.

 

2 pm. Home. Lunch delivered.

3 pm. Sporting activity with a virtual coach.

5 pm. Home

7.30 pm.

Discuss and view the Dublin robot’s walk-around, containing a video and audio report.

Dinner delivered. Six guests. The home management algorithm rearranges the furniture.

8.30 pm

Virtual helmets on for some after-dinner entertainment.

10 pm 

Ask Alixia to shut the house down, but not before you answer Alixia’s question to score points and a chance to win cash, a holiday, dinner for two, a discount on Amazon or eBay, or a spot of online gambling.

                                                       ———

The fourth industrial revolution is not simply an opportunity. It matters what kind of opportunity it is, for whom, and under what terms.

We need to start thinking about algorithms.

The core issue here is, of course, who will own the basic infrastructure of our future, which is going to affect all sectors of society.

They are not just for mathematicians or academics. There are algorithms all around us and you don’t need to know how to code to use them or understand them.

We need to better understand them to better understand, and control, our own futures. To achieve this we need to better understand how these algorithms work and how to tailor them to suit our needs. Otherwise, we will be unable to fully unlock the potential of this abstract transition because machine learning automates automation itself.

Algorithms are increasingly part of our everyday lives, from recommending our films to filtering our news and finding our partners, yet the new digital economy has obscured our view of them.

Building a solid foundation now for AI governance means using AI responsibly and considering the broader-reaching implications of this transformational technology’s use.

The world population will be over 9 billion, with the majority of people living in cities.

So here are a few questions at 30 you might want to consider.

How does the software we use influence what we express and imagine?

Shall we continue to accept the decisions made for us by algorithms if we don’t know how they operate?

What does it mean to be a citizen of a software society?

These and many other important questions are waiting to be analyzed.

If we reduce each complex system to a one-page description of its algorithm, will we capture enough of software behaviour?

Or will the nuances of particular decisions made by software in every particular case be lost?

You don’t need a therapist; you need an algorithm.

We may never really grasp the alienness of algorithms. But that doesn’t mean we can’t learn to live with them.

Unfortunately, their decisions can run counter to our ideas of fairness. Algorithms don’t see humans the same way other humans do.

What are we doing about confronting any of this? Nothing much.

So it’s no wonder that people start to worry about what’s left for human beings to do.

All human comments appreciated. All like clicks and abuse chucked in the bin.


Will emerging technology save us or destroy us all?

07 Tuesday Oct 2014

Posted by bobdillon33@gmail.com in Uncategorized

≈ Comments Off on Will emerging technology save us or destroy us all?

Tags

AI, Bio-engineered, Nano biotechnology, Nanotechnology

 

A thought!

It has been a summer of bad news.

Geopolitical turmoil, carbon spewing into the atmosphere, Ebola spreading, global warming, species and forests disappearing, ice melting, inequality rampant, while sovereign wealth funds privatize the world’s resources for profit.

All of this pales in comparison to what could be inflicted by the high-tech nightmares waiting in the long grass.

A bio-engineered pandemic, nanotechnology or nanobiotechnology gone haywire, AI run amok: all could kill far quicker than ISIS or capitalism.

Accidental self-destruction by any of the above is more than possible.

And there are people out there experimenting in their garden sheds. Recently Professor Yoshihiro Kawaoka of the University of Wisconsin engineered a strain of the deadly swine flu virus that could evade the immune system. At least he did it in a proper laboratory.

Nanotech is endeavoring to engineer microscopic factories of self-replicating bots with the power to make anything out of common materials.

If this were to happen, we could have omnivorous bots that would wipe out real bacteria, spreading like pollen and reducing the biosphere to dust.

AI, on the other hand, if we get it right, could be the best thing ever to happen in the universe; but if we get it wrong, we won’t be colonizing anywhere.

At the moment we spend more on lipstick than making sure our species survives.

Maybe it’s time we created another one of those useless world organisations to monitor emerging technology, just in case it comes back to bite us all.

           

 
