
bobdillon33blog

~ Free Thinker.

Tag Archives: AI systems.

THE BEADY EYE ASK’S: ARE WE ALL SO DUMB TO THINK THAT ARTIFICIAL INTELLIGENCE CAN BE REGULATED?

02 Friday Jun 2023

Posted by bobdillon33@gmail.com in Uncategorized

≈ Comments Off on THE BEADY EYE ASK’S: ARE WE ALL SO DUMB TO THINK THAT ARTIFICIAL INTELLIGENCE CAN BE REGULATED?

Tags

Age of Uncertainty, AI, AI regulations, AI systems., Algorithms., Technology, The Future of Mankind, Visions of the future.

(Three-minute read)

Artificial intelligence is already suffering from three key issues: privacy, bias and discrimination, which if left unchecked can start infringing on – and ultimately take control of – people’s lives.

As digital technology became integral to the capitalist market dystopia of the first decades of the 21st century, it not only refashioned our ways of communicating but of working and consuming, indeed ways of living.

Then along came the Covid-19 pandemic, which revealed not only the lack of investment, planning and preparation that underlay the scandalous slowness of states’ responses around the world, but also grotesque class and racial inequalities, as the virus coursed its way through the population while the owners of high-tech corporations were enriched by tens of billions.

It’s already too late to get ahead of this generative AI freight train.

The growing use of AI has already transformed the way the global economy works.

Against this backdrop, AI can be used to profile people like you and me in a level of detail that may well become more than uncomfortable. And this is no exaggeration.

This is just the tip of the iceberg!

So what, if anything, can be done to ensure responsible and ethical practices in the field?

Concern over AI development has accelerated in recent months following the launch of OpenAI’s ChatGPT last year, which sparked the release of similar chatbots by other companies, including Google, Snap and TikTok. With the growing realization that vast numbers of people can be fooled by the content chatbots gleefully spit out, the clock is now ticking, not just on the collapse of the values that enshrine human life but on the very existence of the human race.

“This is not the future we want.”

Now there is no option but to put international laws in place, not just voluntary regulations, before AI infringes human rights. However, as we are witnessing with climate change, achieving any global cooperation is a bit of a problem.

From the climate crisis to our suicidal war on nature and the collapse of biodiversity, our global response is too little, too late. Technology is moving ahead without guard rails to protect us from its unforeseen consequences.

So we have two contrasting futures: one of breakdown and perpetual crisis, and another in which there is a breakthrough to a greener, safer future. The latter approach would herald a new era for multilateralism, in which countries work together to solve global problems.

In order to achieve these aims, the Secretary-General of the United Nations recommends a Summit of the Future, which would “forge a new global consensus on what our future should look like, and how we can secure it”. The need for international co-operation beyond borders makes a lot of sense, especially these days, because the role of the modern corporation in shaping AI’s impact is in conflict with the common values needed to survive.

The principle is one of working together, recognizing that we are bound to each other and that no community or country, however powerful, can solve its challenges alone. Any national government is, of course, guided by its own set of localised values and realities.

But geopolitics, I would argue, always underlies any such ambition. The immaturity of the ‘geopolitics of AI’ field leaves the picture incomplete and unclear, so it requires the introduction of agreed international common laws.

Let Ireland hold such a Summit.

This summit could coordinate efforts to bring about inclusive and sustainable policies that enable countries to offer basic services and social protection to their citizens, with universal laws that define the several capabilities of AI, i.e. identify the ones that are more susceptible to misuse than others.

(This is incredibly important for understanding the current environment in which any product is built or research conducted, and it will be critical to forging a path towards safe and beneficial AI.)

The challenges are great, and the lessons of the past cannot be simply superimposed onto the present.

For example:

The designers of AI technologies should satisfy legal requirements for safety, accuracy and efficacy for well-defined use cases or indications. In the context of health care, this means that humans should remain in control of health-care systems and medical decisions; privacy and confidentiality should be protected, and patients must give valid informed consent through appropriate legal frameworks for data protection.

Another example is the collection of data, which is the backbone of AI.

Transparency requires that sufficient information be published or documented before the design or deployment of an AI technology. Such information must be easily accessible and facilitate meaningful public consultation and debate on how the technology is designed and how it should or should not be used.

It is the responsibility of stakeholders to ensure that AI technologies are used under appropriate conditions and by appropriately trained people. Effective mechanisms should be available for questioning, and for redress for individuals and groups that are adversely affected by decisions based on algorithms.

We need laws to ensure that AI systems are designed to minimize their environmental consequences and increase energy efficiency.

If we want to eliminate the black-box approach through mandatory explainability for AI, then agreeing or not agreeing should not be an option.

While AI can be extraordinarily useful, it is already out of control, with self-learning algorithms that no one can understand or bring to account.

These profit-seeking, skewed algorithms owned by corporations are causing racial and gender-based discrimination. Following billions of dollars in investment, a major corporate rebrand and a pivot to focus on the metaverse, Meta and Zuckerberg still have little to show for it.

I firmly believe that governments must engage in meaningful dialogue with other countries on the common international laws that are now needed to subject developers to a rigorous evaluation process, and to ensure that entities using the technology act responsibly and are held accountable.

Having said that, governments must keep their roles limited and not assume absolute powers.

Multiple actors are jostling to lead the regulation of AI.

The question business leaders should be focused on at this moment, however, is not how or even when AI will be regulated, but by whom.

Governments have historically had trouble attracting the kind of technical expertise required even to define the kinds of new harms LLMs and other AI applications may cause.

Perhaps a licensing framework is needed to strike a balance between unlocking the potential of AI and addressing potential risks.

Or

AI ‘Nutrition Labels’ that would explain exactly what went into training an AI, and which would help us understand what a generative AI produces and why.
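One way to picture such a label is as a small, machine-readable record shipped alongside a model. A hypothetical sketch in Python (the field names here are invented for illustration, not any existing standard):

```python
import json

# A hypothetical "AI nutrition label": a machine-readable summary of what
# went into training a model. All field names are illustrative only.
def make_nutrition_label(model_name, training_sources, cutoff_date,
                         known_biases, intended_uses):
    return {
        "model": model_name,
        "training_data_sources": training_sources,  # where the data came from
        "data_cutoff": cutoff_date,                 # how current the model is
        "known_biases": known_biases,               # documented limitations
        "intended_uses": intended_uses,             # what it should be used for
    }

label = make_nutrition_label(
    "example-chatbot-1",
    ["licensed news archives", "public web crawl"],
    "2023-01",
    ["under-represents non-English text"],
    ["drafting", "summarisation"],
)
print(json.dumps(label, indent=2))
```

A regulator or user could then read such a label before deciding whether a given use of the model is appropriate.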

Or

Take Meta’s open-source approach, which contrasts sharply with the more cautious, secretive inclinations of OpenAI and Google. With open-source models like this and Stable Diffusion already out there, it may be impossible to get the genie back into the bottle.

The metaverse is not well understood or appreciated by the media and the public. It is much, much bigger than one company, and weaving the two together only complicates the matter.

Governments should never again face a choice between serving their people or servicing their debt.

Still, the most promising way not to provoke the sorcerer would be to avoid making too big a mess in the first place.

All human comments appreciated. All like clicks and abuse chucked in the bin

Contact: bobdillon33@gmail.com


THE BEADY EYE SAYS: SO YOU ARE NOW 30. BY THE TIME YOU ARE 70, HERE IS WHAT A DAY IN YOUR LIFE WILL LOOK LIKE.

10 Friday Jan 2020

Posted by bobdillon33@gmail.com in 2020: The year we need to change., Algorithms., Artificial Intelligence., Communication., Dehumanization., Digital age., DIGITAL DICTATORSHIP., Digital Friendship., Evolution, Fourth Industrial Revolution., HUMAN INTELLIGENCE, Humanity., Life., Modern day life., Our Common Values., Reality., Robot citizenship., Social Media, Sustaniability, Technology, The common good., The essence of our humanity., The Future, The Obvious., The state of the World., The world to day., Unanswered Questions., WHAT IS TRUTH, What Needs to change in the World, Where's the Global Outrage.

≈ Comments Off on THE BEADY EYE SAYS: SO YOU ARE NOW 30. BY THE TIME YOU ARE 70, HERE IS WHAT A DAY IN YOUR LIFE WILL LOOK LIKE.

Tags

0.05% Aid Commission, Age of Uncertainty, AI, AI systems., Algorithms., Artificial Intelligence., Artificial life.

(Twenty-minute read)

The Dead Sea will be almost completely dried up, nearly half of the Amazon rainforest will have been deforested, wildfires will spread like, umm, wildfire, and the polar ice caps will be only 60 per cent of the size they are now.

Wars will involve not only land and sea but space. Superhurricanes will become a regular occurrence.

Should you be worried? Of course not: AI and algorithms are here to guide you.

AI-related advancements have grown from strength to strength in the last decade.

Right now, people are coming up with new algorithms by applying evolutionary techniques, via genetic programming, to vast amounts of big data in order to find optimisations and improve your life in different fields.
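The evolutionary idea behind such techniques can be shown with a toy sketch: keep a population of candidate solutions, score them, keep the fittest, and mutate them. This is a deliberately simplified illustration (it evolves numbers toward a target rather than evolving programs, as real genetic programming does):

```python
import random

random.seed(0)  # deterministic for the example

# Toy evolutionary search: evolve a list of 10 numbers so that their sum
# approaches TARGET. The select-mutate-repeat loop is the core idea.
TARGET = 100

def fitness(candidate):
    return -abs(sum(candidate) - TARGET)  # closer to the target = fitter

def mutate(candidate):
    child = candidate[:]
    i = random.randrange(len(child))
    child[i] += random.uniform(-1, 1)  # small random tweak to one gene
    return child

population = [[random.uniform(0, 10) for _ in range(10)] for _ in range(20)]
for generation in range(200):
    population.sort(key=fitness, reverse=True)
    parents = population[:5]  # keep the fittest five
    population = parents + [mutate(random.choice(parents)) for _ in range(15)]

best = max(population, key=fitness)
print(round(sum(best), 2))  # a sum close to 100
```

Genetic programming applies the same loop to trees of code rather than lists of numbers, which is how “learner algorithms” can end up writing new algorithms.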

The amount of data we have available to us now means that we can no longer think in discrete terms. This is what big data forces us to do.

It forces us to take a step back, an abstract step back to find a way to cope with the tidal wave of data flooding our systems. With big data, we are looking for patterns that match the data and algorithms are enabling us to find patterns via clustering, classification, machine learning and any other number of new techniques.

To find the patterns you or I cannot see, they create the code we need and give birth to learner algorithms that can in turn be used to create new algorithms.
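Clustering, mentioned above, is one of the simplest of these pattern-finders. A minimal k-means sketch in pure Python (toy one-dimensional data and a hand-picked number of clusters, purely for illustration):

```python
# Minimal k-means on 1-D data: repeatedly assign each point to the nearest
# centre, then move each centre to the mean of the points assigned to it.
def kmeans_1d(points, centres, iterations=10):
    for _ in range(iterations):
        groups = {c: [] for c in centres}
        for p in points:
            nearest = min(centres, key=lambda c: abs(p - c))
            groups[nearest].append(p)
        centres = [sum(g) / len(g) for g in groups.values() if g]
    return sorted(centres)

# Two obvious groups: values near 2 and values near 10.
data = [1.0, 2.0, 3.0, 9.0, 10.0, 11.0]
print(kmeans_1d(data, centres=[0.0, 5.0]))  # → [2.0, 10.0]
```

The algorithm finds the two group centres without ever being told where they are, which is the sense in which such methods “find the patterns you or I cannot see”.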

So do you remember a time, initially, when it was possible to pass on all knowledge through the form of dialogue from generation to generation, parent to child, teacher to student?  Indeed, the character of Socrates in Plato’s “Phaedrus” worried that this technological shift to writing and books was a much poorer medium than dialogue and would diminish our ability to develop true wisdom and knowledge.

Needless to say, I don’t think Socrates would have been a fan of social media or TV.

Machine learning algorithms have become like a hammer in the hands of data scientists: everything looks like a nail to be hit.

In due course, the wrong application or overkill of machine learning will cause disenchantment among people when it does not deliver value.

It will be a self-inflicted  ‘AI Winter’.

So here is what your day at 70 might look like.

Welcome to the world of permanent change—a world defined not by heavy industrial machines that are modified infrequently, but by software that is always in flux.

Algorithms are everywhere. They decide what results you see in an internet search, and what adverts appear next to them. They choose which friends you hear from on social networks. They fix prices for air tickets and home loans. They may decide if you’re a valid target for the intelligence services. They may even decide if you have the right to vote.

7.30 am 

Personalised Health Algorithm report.

Sleep pattern good. Anxiety normal, deficient in vitamin C. Sperm count normal.

Results of body scan sent to the health network.

7.35 am

House Management Algorithm Report.

Temperature 65°F. House secure. Windows/doors closed, catflap open. Heating off. Green energy usage 2.3 kWh per minute. (Advertisement to change provider.) Shower running; water flow and temperature adjusted; shower head height adjusted. House natural light adjusted. Confirmation that smartphone and iPad are fully charged. Robotic housemaid programmed.

8 am.

Personalised Shopping/Provisions Algorithm report.

Refrigerators will be seamlessly integrated with online supermarkets, so a new tub of peanut butter will be on its way to your door by drone delivery before you even finish the last one.

8.45 am. Appointments Algorithm.

Virtual reality appointment with a local doctor.

Voicemails, emails and the calendar checked.

A device in your head might eliminate the need for a computer screen by projecting images (from a Skype meeting, a video game, or whatever) directly into your field of vision from within.

9 am.

Personalised Financial Algorithm.

Balance of credit cards and bank accounts including citizen credit /loyalty points. Value of shares/ pension fund updated.

10 am. Still in your Dressing gown.

11 am. The self-drive car starts. Seats automatically shift and rearrange themselves for maximum comfort. The personalised news and weather algorithm gives a report. The car books a parking spot and places an order for coffee. Over coffee, you rent out a robot in Dublin and have it do the legwork for your forthcoming visit, scouting hotels.

12 pm.

Hologram of your boss in your living room.

1 pm.

Virtual work meeting to discuss the solitary nature of remote work.

Face-to-face meeting arranged.

 

2 pm. Home. Lunch delivered.

3 pm. Sporting activity with a virtual coach.

5 pm. Home

7.30 pm.

Discuss and view the Dublin robot walkabout, containing its video and audio report.

Dinner delivered. Six guests. The home management algorithm rearranges the furniture.

8.30 pm

Virtual helmets on for some after-dinner entertainment.

10 pm 

Ask Alixia to shut the house down, but not before you answer Alixia’s question to score points and a chance to win: cash, a holiday, dinner for two, a discount on Amazon or eBay, or a spot of online gambling.

                                                       ———

The fourth industrial revolution is not simply an opportunity. It matters what kind of opportunity it is, for whom, and under what terms.

We need to start thinking about algorithms.

The core issue here is, of course, who will own the basic infrastructure of our future, which is going to affect all sectors of society.

Algorithms are not just for mathematicians or academics. They are all around us, and you don’t need to know how to code to use them or understand them.

We need to understand them better in order to understand, and control, our own futures. To achieve this we need to grasp how these algorithms work and how to tailor them to suit our needs. Otherwise, we will be unable to fully unlock the potential of this abstract transition, because machine learning automates automation itself.

The new digital economy has obscured our view of algorithms, even though understanding them is becoming akin to learning to read: they are increasingly part of our everyday lives, from recommending our films to filtering our news and finding our partners.

Building a solid foundation for AI governance now means using AI responsibly and considering the broader implications of this transformational technology’s use.

The world population will be over 9 billion, with the majority of people living in cities.

So here are a few questions at 30 you might want to consider.

How does the software we use influence what we express and imagine?

Shall we continue to accept the decisions made for us by algorithms if we don’t know how they operate?

What does it mean to be a citizen of a software society?

These and many other important questions are waiting to be analyzed.

If we reduce each complex system to a one-page description of its algorithm, will we capture enough of its behaviour?

Or will the nuances of particular decisions made by software in every particular case be lost?

You don’t need a therapist; you need an algorithm.

We may never really grasp the alienness of algorithms. But that doesn’t mean we can’t learn to live with them.

Unfortunately, their decisions can run counter to our ideas of fairness. Algorithms don’t see humans the same way other humans do.

What are we doing about confronting any of this? Nothing much.

So it’s no wonder that people start to worry about what’s left for human beings to do.

All human comments appreciated. All like clicks and abuse chucked in the bin.



THE BEADY EYE ASKS: WHO IS GOING TO BE RESPONSIBLE WHEN ARTIFICIAL INTELLIGENCE GOES WRONG.

02 Thursday Mar 2017

Posted by bobdillon33@gmail.com in Artificial Intelligence.

≈ Comments Off on THE BEADY EYE ASKS: WHO IS GOING TO BE RESPONSIBLE WHEN ARTIFICIAL INTELLIGENCE GOES WRONG.

Tags

AI systems., Artificial Intelligence., Computer technology, Current world problems, Machine learning., quantum computing, Robots., Smart machine, The Future of Mankind

 

(Twelve-minute read for all programmers and code writers.)

I think most people are worrying about the wrong things when they worry about robots and AI.

However, with AI and robotics positioned to impact all areas of society, we would be remiss not to set things in motion now to prepare for a very different world in the future.

The danger is not AI itself but rather what people do with it. The repercussions of AI technology are going to be profound, and, limited by biological evolution, we will be unable to keep up.

So we are all making a very basic mistake when it comes to ARTIFICIAL INTELLIGENCE.

Like every advance in technology, AI has the potential to do amazing things; on the other hand, it also has the potential to do dangerous things, and there is little that can be done to stop or rectify it once it’s unleashed. For example, its use in weaponization.

(Recently I read that it is now almost possible to physically create a computer made of DNA molecules, a computer that can be programmed to compute anything any other device can process.

Electronic computers are one form of universal Turing machine (UTM), but no quantum UTM has yet been built; if one were, it would outperform all standard computers on significant practical problems. A DNA computer’s ‘magical’ property comes from its processors being made of DNA rather than silicon chips: any electronic computer has only a fixed number of chips.

So what?

As DNA molecules are very small, a desktop computer could potentially utilize more processors than all the electronic computers in the world combined, and therefore outperform the world’s current fastest supercomputer while consuming a tiny fraction of its energy.

It will definitely bring about moral and philosophical issues that we should be concerned about right now.)

Back to today:

It’s no longer a question of what or when artificial intelligence will change our lives, but of how, and of who is going to be held responsible.

We are at a crossroads. We need to make decisions. We must re-invent our future.

It is the role of AI in future, truly hybrid societies, or socio-cognitive-technical systems, that will be the real game changer.

The real potential of AI includes not only the development of intelligent machines and learning robots, but also how these systems influence our social and even biological habits, leading to new forms of organization, perception and interaction.

In other words, AI will extend and therefore change our minds.

Robots are things we build, and so we can pick their goals and behaviours.  Both buyers and builders ought to pick those goals sensibly, but people who will use and buy AI should know what the risks really are.

Understanding human behaviour may be the greatest benefit of artificial intelligence if it helps us find ways to reduce conflict and live sustainably.

However, knowing fully well what an individual person is likely to do in a particular situation is obviously a very, very great power.  Bad applications of this power include the deliberate addiction of customers to a product or service, skewing vote outcomes through disenfranchising some classes of voters by convincing them their votes don’t matter, and even just old-fashioned stalking.

Machines might learn to predict our every move or purchase, or governments might try to blame robots for their own unethical policy decisions.

It’s pretty easy to guess when someone will be somewhere these days.

Robots, Artificial Intelligence programs, machine learning, you name it, all seem to be responsible for themselves.

However, our control of machines and devices is increasingly delegated, not direct. That fact needs to be at least sufficiently transparent that we can handle the cases when components of the systems our lives depend on go wrong.

In fact, robots belong to us. People, governments and companies build, own and program robots. Whoever owns and operates a robot should be responsible for what it does. AI systems must do what we want them to do.

In humans consciousness and ethics are associated with our morality, but that is because of our evolutionary and cultural history.  In artefacts, moral obligation is not tied by either logical or mechanical necessity to awareness or feelings.  This is one of the reasons we shouldn’t make AI responsible: we can’t punish it in a meaningful way, because good AI systems are designed to be modular, so the “pain” of punishment could always be excised, unlike in nature.

We must get over our over-identification with AI systems and start demanding that any technology not designed for the betterment of humanity and the world we live in be verified AI-safe, and that companies make the AI they are inserting into their products visible.

We need a world organisation that is totally transparent and accountable to VET all technology, to minimise social disruption and maximise social utility, and to ensure that:

  • Robots should not be designed as weapons, except for national security reasons.
  • Robots should be designed and operated to comply with existing law, including privacy.
  • Robots are products: as with other products, they should be designed to be safe and secure.
  • Robots are manufactured artefacts: the illusion of emotions and intent should not be used to exploit vulnerable users.
  • It should be possible to find out who is responsible for any robot.
  • Robots should not be human-like, because they will necessarily be owned.
  • Robots do not need to have a gender. We should consider how our technology reflects our expectations of gender. Who are the users, and who gets used?
  • We should not create a legal status for robots that would dub them “electronic persons,” implying that machines have legal rights and obligations to fulfil. That would mean robots having to take responsibility for the decisions they make, especially if they have autonomy.
  • We should insist on a kill switch for all robots that would shut down all functions if necessary.
  • We should have restrictions on robots to ensure they obey all commands, unless those commands would force them to physically harm humans or themselves through action or inaction.
  • We should not use robots to reason about what it means to be human; calling them “human” dehumanizes real people. Worse, it gives people an excuse to blame robots for their actions, when really anything a robot does is entirely our own responsibility.
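The kill-switch point above can be pictured as a thin gate that every robot command has to pass through; once the owner engages it, nothing further executes. A hypothetical sketch (class and method names are invented for illustration, not a real API):

```python
# Hypothetical kill-switch gate: every command passes through a switch the
# owner can engage at any time. Names are illustrative only.
class KillSwitch:
    def __init__(self):
        self.engaged = False

    def engage(self):
        self.engaged = True  # shut down all functions

class Robot:
    def __init__(self, switch):
        self.switch = switch
        self.log = []

    def execute(self, command):
        if self.switch.engaged:
            self.log.append(("refused", command))
            return False
        self.log.append(("done", command))
        return True

switch = KillSwitch()
robot = Robot(switch)
robot.execute("fetch coffee")  # runs normally
switch.engage()                # the owner hits the kill switch
robot.execute("fetch coffee")  # refused: the robot is shut down
print(robot.log)
```

Keeping the switch outside the robot’s own control is the design point: the robot cannot excise it, and the owner can always reach it.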

There are also ethical issues with AI, but they are all the same issues we have with other artifacts we build and value or rely on, such as fine art or sewage plants.

  • Yesterday, the European Parliament’s legal affairs committee voted to pass a report urging the drafting of a set of regulations to govern the use and creation of robots and AI.
  • Legal liability may need to be proportionate to a robot’s level of autonomy and “education,” with the owners of robots with longer training periods held more responsible for those robots’ actions.
  • A big part of the responsibility also rests on the designers of these sophisticated machines, with the report suggesting more careful monitoring and transparency. This can be done by providing access to source code, by registering machines, and by forming an ethics committee to which creators might be required to present their designs before building them.
  • We should have a league of programmers dedicated to opposing the misuse of AI technology to exploit people’s natural emotional empathy.

As AI gets better, these issues have gotten more serious.

So to wrap up this blog :

First, there are many reasons not to worry. However, it is not enough for experts to understand the role of AI in society; it is also imperative to communicate this understanding to non-experts.

Secondly, we shouldn’t ever be seen as selling our own data, just leasing it for a particular purpose.

This is the model software companies already use for their products; we should just apply the same legal reasoning to us humans. Then, if we have any reason to suspect our data has been used in a way we didn’t approve, we should be able to prosecute. That is, the applications of our data should be subject to regulations that protect ordinary citizens from the intrusions of governments, corporations and even friends.

These problems are so hard, they might actually be impossible to solve.

But building and using AI is one way we might figure out some answers. If we have tools to help us think, they might make us smarter. And if we have tools that help us understand how we think, that might help us find ways to be happier.

The idea that robots, being authored by us, will ever be anything but owned is completely bonkers. It is the duty of all of us, and of AI researchers, to ensure that the future impact is beneficial: not making robots into others, but accepting them as part of ourselves, as artefacts of our culture rather than as members of our in-group.

Unfortunately, it’s easier to get famous and sell robots if you go around pretending that your robot really needs to be loved, or otherwise really is human, or superhuman!

Just because a robot was shaped like a human and they’d watched Star Wars, passers-by thought it deserved more ethical consideration than they gave homeless people, who were actually people.

Because we build and own robots, we shouldn’t ever want them to be persons.

I can hear you saying that our society faces many hard problems far more pressing than the advance of artificial intelligence. But AI is here now, and even without it, our hyperconnected socio-technical culture already creates radically new dynamics and challenges for both human society and our environment.

AI and computer science, particularly machine learning but also HCI, are increasingly able to help research in the social sciences. Fields that are benefiting include political science, economics, psychology, anthropology and business/marketing. All true, but automation causes economic inequality.

Blaming robots is insane, and taxing the robots themselves is insane.

This is insane because no robot comes spontaneously into being. Robots are all constructed, and the ones that have an impact on the economy are constructed by the rich, which is creating a fundamental shift in the power and availability of artificial intelligence and in its impact on everyday lives. It creates a moral hazard to dump responsibility into a pit that you cannot sue or punish.

Some people really expected AI to replace humans. These people don’t have enough direct, personal experience of AI to really understand whether or not it was human in the first place.

There is no going back on this, but that isn’t to say society is doomed.

The word “robot” is derived from the Czech robota, meaning forced labour.

Let’s keep it that way: I am all for technological self-reproduction – slaves.

Unless we can recalibrate our tendency to exploit each other, the question may not be whether the human race can survive the machine age, but whether it deserves to.



