Tags
Artificial Intelligence, Capitalism and Greed, Climate Change, The Future of Mankind, Visions of the Future.
(Fourteen-minute read)
The biggest problem of our world today is not artificial intelligence but natural stupidity!
When it comes to climate change, profit-seeking algorithms, and the military race to send autonomous drone killers into the battlefield – welcome to the perplexing world of collective stupidity!
The Trump campaign and Brexit – where we all woke up the next day astounded that “this could happen” – are both prime examples of campaigns that leaned heavily on the emotions of anxiety, fear, tribalism and collective stupidity.
Since then, there has been much unpacking of “what happened” and talk that only “stupid” people could have voted that way.
But is this true?
Yes, profound lapses in logic can plague even the smartest mind.
There are intelligent people who are stupid. Why the paradox? Because stupidity is not a lack of IQ.
Unconscious emotions drive our decisions. Intuitive feelings gave us an evolutionary advantage in caveman days – a survival shortcut for dealing with information overload – and they can still play a useful role as we stand on the precipice of a critical moment with AI.
All over the world, we are in the midst of a great shift. The data revolution has given way to the analytics movement. Press our emotional buttons and our judgement is derailed – hence the temptation to choose the first solution that comes to mind, even if it is obviously flawed.
It seems that nothing encourages stupidity more than group culture.
An uncritical dependence on set rules often leads to absurd decisions; the-way-we-do-things-here is often not the most intelligent way.
And the more intelligent someone is, the more disastrous the results of their stupidity.
————–
With generative AI technologies, data-driven insights are reshaping outcomes without the need to write code – becoming truly intrusive – enabling decision-makers, analysts, data scientists and developers to collaborate and develop analytical insights in real time.
SO, WHAT CAN WE DO TO PROTECT OURSELVES FROM DOING STUPID THINGS?
Knowledge of our foolish nature can help us escape its grasp.
We can step outside the bubble of Google-algorithm knowledge to question where we are and where we are going, rather than reverting to group-thinking that relies on “everyone knows it’s true.”
Stupidity is all around us. As long as there have been humans, there has been human stupidity.

—————
Over the past decade, we’ve seen the volume of data available to decision-makers grow exponentially.
In this intelligence era, it’s no longer about how much data one company can generate, it’s about how they use it. Corporate leaders, academics, policymakers, and countless others are looking for ways to harness generative AI technology, which has the potential to transform the way we learn, work, and more.
Generative AI is evolving quickly, but to truly get the most benefit from this groundbreaking technology, you need to manage a wide array of risks.
Why?
Because generative AI is so powerful and easy to use, it’s poised to change what is real and what is not.
Unlike earlier disruptions, the reality of the generative AI race is already looking out of control.
This could be the first “disruptive” new tech in a long time built and controlled largely by giants in the tech world which could entrench, rather than shake up, the status quo.
Right now, only a handful of companies — including Google, Meta, Amazon and Microsoft (through their $10 billion investment in OpenAI) — are responsible for the world’s leading large language models.
So what can policymakers do about AI?
Is there a way to prevent the hottest new technology from simply cementing the power of the tech giants?
Virtual worlds should not become walled gardens.
It is abundantly clear that leaving it to the market to decide how these powerful technologies are used, and by whom, is a very risky proposition.
———
For decades, many of the great scientific and philosophical minds had conceived of creating collective intelligence in the form of a globally connected space to pool our knowledge.
Social media and smartphones are digitalizing citizens, and emergent behaviour is the result.
This is a phenomenon that occurs in complex adaptive systems. In such systems, simple components interact in such a way that the whole becomes greater than the sum of its parts.
Our collective intelligence has now become what can only be referred to as our collective stupidity.
————-
The Dark Side — Collective Stupidity.
Collective stupidity can be perplexing, and it is often far from harmless.
How is it possible that a group of smart individuals can sometimes make decisions so perplexing, it feels like the intelligence just evaporated?
How does collective stupidity happen?
We are better off not underestimating the effects of this phenomenon.
A system based on generating clicks and interactions has created an environment for the outlandish and bizarre to flourish, with expertise falling by the wayside.
Broad, anonymous social networks breed collective stupidity.
In 2023, an estimated 4.9 billion people used social media across the world, and this number is expected to jump to approximately 5.85 billion users by 2027.
The driving force: the increasing global adoption of 5G technology.
These staggering numbers aren’t just statistics, either. They highlight the expansive influence and potential of social media platforms. Right now, 1.9 billion daily users access Facebook’s platform, Twitter gained 319 new users per minute in 2020, and 500 hours of video are uploaded to YouTube every minute. Millions of businesses around the world rely on Facebook to connect with people.
Threads, Meta’s new social network, had 100 million sign-ups in its first five days.
With this much content being generated, how can experts possibly stand out from the crowd?
By emulating the human ability to forget some of the data, psychological AIs will transform algorithmic accuracy.
Machine learning, on the other hand, typically takes a different path: It sees reasoning as a categorization task with a fixed set of predetermined labels. It views the world as a fixed space of possibilities, enumerating and weighing them all.
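To make that “fixed space of possibilities” concrete, here is a minimal sketch (my illustration, not from the article): a toy classifier with a softmax over a predetermined label set. The labels and scores are invented for the example; the point is that the model must spread all of its probability mass across the fixed labels, with no way to answer “none of these.”

```python
import math

LABELS = ["cat", "dog", "bird"]  # the fixed, predetermined set of labels

def softmax(scores):
    """Turn raw scores into probabilities that sum to 1."""
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def classify(scores):
    """Pick the highest-probability label from the fixed set."""
    probs = softmax(scores)
    best = max(range(len(LABELS)), key=lambda i: probs[i])
    return LABELS[best], probs

# Even an ambiguous input (near-equal scores) is forced into one label:
label, probs = classify([0.1, 0.0, 0.05])
print(label)                 # -> cat
print(round(sum(probs), 6))  # -> 1.0 (all mass stays inside the fixed set)
```

However uncertain the model is, the probabilities always sum to one over the enumerated labels – which is exactly the “enumerating and weighing them all” view described above.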
Social media networks are not very sociable these days. Feeds are algorithmic, which means you see whatever the apps want to show you.
All this has eroded public confidence.
——–
We all have intelligence and expertise to offer, even if the internet leaves us feeling isolated at times.
With so much misguided thought and active disinformation online, it has become difficult for people with insight worth sharing to do so. Behind the anonymity of the web, anyone can claim to be an expert. When everybody is an expert, nobody is.
With online communities, the relationship between experts and their audience becomes a two-way street.
Many of the issues we throw billions of dollars at and attempt to solve with technology could be easily achieved if we were able to better utilize our collective intelligence.
Technology is the means, not the end; its potential is massive, but not as great as our own.
So we wildly overestimate our access to our own minds.
In essence, the same emergent behaviour that typically helps the group survive sometimes leads to collective stupidity and death.
The Internet gave us the ability to connect with people on a global scale.
But its click-baiting algorithms and lack of regulation also brought with them chaos. As social media came to dominate the landscape, it made using the internet for the purpose of collective intelligence increasingly difficult.
You see, with stupidity, or stupid people for that matter, protesting or reasoning doesn’t really work, mainly because of their strong prejudice. They simply disbelieve any facts or reasoning we provide. In most cases they simply deny the arguments, and if they can’t, they dismiss them as trivial exceptions.
People are often made stupid under certain circumstances. Maybe they allow this to happen to themselves. It is a group phenomenon.
The nature of stupidity has its roots deep in the subconscious. It is largely driven by the fundamental mechanics of our experience: following the herd. Herd-following is arguably the most prominent driver, and mostly it makes sense. If information is lacking, doing what others are doing is probably the best bet. But this doesn’t work all the time.
In fact, herd behaviour is among the pre-eminent causes of stupidity.
It is not that intellect suddenly fails; rather, people are deprived of inner independence and give up autonomous positions under the herd’s overwhelming impact. We always feel that we are dealing with slogans, signs and buzzwords, not with the real person – as if they are under the spell of someone or something.
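The mechanics of herd-following can be sketched with a toy information-cascade simulation (my illustration, with invented parameters, not from the article): each agent receives a noisy private signal about which option is better, but also sees what everyone before them chose. Once the crowd leans far enough one way, agents rationally discard their own signal and copy the herd – even when the herd started out wrong.

```python
import random

def simulate_cascade(true_option, signal_accuracy, n_agents, rng):
    """Each agent follows its private signal unless the crowd already
    leads by 2 or more, in which case it copies the crowd."""
    choices = []
    for _ in range(n_agents):
        # Private signal: correct with probability `signal_accuracy`.
        signal = true_option if rng.random() < signal_accuracy else 1 - true_option
        lead = choices.count(1) - choices.count(0)
        if lead >= 2:
            choice = 1          # herd pressure overrides the signal
        elif lead <= -2:
            choice = 0
        else:
            choice = signal     # still relying on private information
        choices.append(choice)
    return choices

rng = random.Random(0)
choices = simulate_cascade(true_option=1, signal_accuracy=0.7, n_agents=30, rng=rng)
# With this seed, the first two agents happen to get wrong signals,
# and every later agent copies the crowd onto the wrong option:
print(choices[-10:])  # -> [0, 0, 0, 0, 0, 0, 0, 0, 0, 0]
```

Note that no individual is irrational here: with mostly-accurate signals, copying a unanimous crowd is a sensible bet. The stupidity is collective – an unlucky early run locks everyone in, which is the “intellect does not suddenly fail” point made above.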
As this happens, we are also creating (unknowingly) various risks to our socio-economic structure, civilization in general, and to some extent, for the human species.
Species-level risks are not evident yet; however, the other two – socio-economic and civilizational risks – are too significant to be ignored.
So far, several significant building blocks have been developed or are in progress. When we stitch them together, AI’s capability will increase many times over, which should be an even greater concern for us.
This takes the already tiny amount of time we have to change our ways and save the planet, and practically cuts it in half.
We have less than 27 years to get our collective act together and reshape how our entire civilisation operates. And I’m not sure if we can do that… The more concerning part is about the risks that we have not thought of yet. We may not be able to avoid all of them, but we can understand them to address them.
Our over-enthusiasm for new technologies has somehow clouded our quality expectations – so much so that we have almost stopped demanding the right quality of solutions. We are so fond of this newness that we ignore flaws in new technologies.
The problem with these low-quality solutions is that subpar technology’s flaws do not surface until it is too late!
In many cases, the damage is already done and may be irreversible.
Misalignment between our goals and the machine’s goals could be dangerous. It is easier to correct a team of humans; doing that with a rampant machine could be a very tricky and arduous task.
Achieving a level of alignment with human-level common sense is quite tricky for a computerized system. Without having any balanced approach like a scorecard, this may not be achievable.
Technology is an answer to the “how” of the strategy, but without having the right “why” and “what” in place, it can do more damage than good. When AI systems do not know why, there will always be a lurking risk of discrimination, bias, or an illogical outcome.
Weapon systems equipped with AI are the most vulnerable to the “right AI in the wrong hands” problem and therefore carry the greatest risks. The Russia/Ukraine war is now the laboratory of drone warfare. The possibility of AI systems being used by some group or country to overpower others is a significant risk.
Overall, the risk of the right AI in the wrong hands is one of the critical challenges and warrants substantial attention.
Extending AI and automation beyond logical limits could potentially alter our perception of what humans can do.
We still value human interaction, communication skills, emotional intelligence, and several other qualities in humans. What happens when an AI app takes over? What happened to AI doing mundane tasks and leaving time for us to do what we like and love?
The most important thing in artificial intelligence isn’t the fancy algorithms.
Let’s assume the worst case and we have a general purpose AI – that can do everything a human can.
What would happen?
We would be left waiting for a smartphone app to tell us what to do next and how we might be feeling now!
The enormous power carried by the grey matter in our heads may become blunt and eventually useless if we never exercise it, turning it into just so much slush. The old saying “use it or lose it” is especially applicable here. Half knowledge is more dangerous than ignorance!
Trust me, a lot can happen in 24 hours. The lesson here is that in times like this, first-principles thinking is your best bet.
Our problem is that on one side we have intelligent people, who are full of doubts, and on the other we have stupid people, full of confidence. Stupidity is not an intellectual failing, it’s a moral failing. And it happens because we believe only in feelings and not in facts or truthfulness.
When we see and hear all this, we wonder: is there any antidote? Is there any way to stop this from happening?
The ultimate test of a moral society is the kind of world that it leaves to its children.
So the question now is, “How are we going to fight this AI pandemic?”
We will finally recognize that more computing power makes machines faster, not smarter.
If a problem is too difficult for a machine, it is we who will have to adapt to its limited abilities.
There is already a frustrating struggle for humans and machines to understand one another in natural language. Soon, we will live in a world where, regardless of your programming abilities, the main limitations are simply curiosity and imagination.
The Garland Test, inspired by dialog from the movie, is passed when a person feels that a machine has consciousness, even though they know it is a machine.
Will computers pass the Garland Test in 2023? I doubt it. But what I can predict is that claims like this will be made, resulting in yet more cycles of hype, confusion, and distraction from the many problems that even present-day AI is giving rise to.
This will force us to reconsider how our behaviours today might influence digital versions of ourselves set to outlive us.
Faced with this prospect of virtual immortality, 2023 will be the year we broaden our definition of what it means to live forever – a moral question that will fundamentally change how we live our day-to-day lives, but also what it means to be immortally stupid.
“We tend to think we are the be-all and end-all – but we’re not. We’re both the victims and benefactors, and the sooner we can realize that the natural world goes its way, not our way, the better. I hope as a consequence that the needs and wonder and importance of the natural world are seen.” – Sir David Attenborough.
All human comments appreciated. All like clicks and abuse chucked in the bin.
Contact: bobdillon33@gmail.com