(Twenty-two minute read)
Let’s be honest:
Humanity is not approaching this issue with remotely the level of seriousness it requires.
Given the speed of development in the field, it’s long past time to move beyond a reactive mode, one where we only address AI’s downsides once they’re clear and present.
We can’t think only about today’s systems; we must also consider where the entire enterprise is headed.
While continuing to talk in vague terms about the potential economic or scientific benefits of AI, we are perpetuating historical patterns of technological advancement at the expense of not just vulnerable people but all of us.
When we dismiss these harms as an inevitable by-product of technological progress, we are turning a blind eye to the ethical conditions in which powerful AI systems are developed and deployed.
The rapid pace of progress is feeding on itself. Creating something smarter than us, something that may have the ability to deceive and mislead us, and then just hoping it doesn’t want to hurt us, is a terrible plan.
————-
We humans have already wiped out a significant fraction of all the species on Earth.
Should we be worried that, with AI, we are now on a pathway to extermination?
That is what you should expect to happen as a less intelligent species – which is what we are likely to become, given the rate of progress of artificial intelligence.
AI is probably the most important thing humanity has ever worked on.
It’s not simply what AI can do today; where it is going will be key to managing the fear of AI that permeates society. Gargantuan amounts of data are at this very moment being harvested so machine learning can accomplish tasks that had previously been accomplished only by humans.
With deep learning, improving systems doesn’t necessarily involve or require understanding what they’re doing. If anything, as the systems get bigger, interpretability — the work of understanding what’s going on inside AI models, and making sure they’re pursuing our goals rather than their own — gets harder.
We should be clear about what these conversations do and don’t demonstrate.
In a world increasingly dominated by AI-powered tools that can mimic human natural language abilities, what does it mean to be truthful and authentic?
Take ChatGPT, which churns out human-sounding answers to requests ranging from the practical to the surreal. It is used by millions of people around the globe, many of whom have no training or education about when it is ethical to use these systems or how to ensure they are not causing harm.
Even if you don’t use AI-generated responses, they influence how you think.
It has drafted cover letters, composed lines of poetry, pretended to be William Shakespeare, crafted messages for dating app users to woo matches, and even written news articles, all with varying results.
Bots now sound so real that it has become impossible for people to distinguish between humans and machines in conversations, which poses huge risks for manipulation and deception at mass scale.
What does it mean for a machine to be deceptive?
Is it evil and plotting to kill us? No. Rather, the AI model is responding to my command and playing, quite well, the role of a system that’s evil and plotting to kill us.
If the system doesn’t have that intent, is it deceptive? Does the responsibility fall back on the person who was asking the questions, or coaxing the system into being deceptive? I don’t know.
There are more questions than answers at this point.
The fact that these technologies are limited at the moment is no reason to be reassured.
AI has the potential to transform and exacerbate the problem of misinformation, so we need to start working on solutions now.
—————
The trajectory we are on is one where we will make these systems more powerful and more capable.
A new tool called Copilot uses machine learning to predict and complete lines of computer code, bringing the possibility of an AI system that could write itself one step closer. Meanwhile, DeepMind’s AlphaFold system uses AI to predict the 3D structure of just about every protein in existence.
We need to design systems whose internals we understand and whose goals we are able to shape to be safe ones. However, we currently don’t understand the systems we’re building well enough to know if we’ve designed them safely before it’s too late.
Right now, the state of the safety field is far behind the soaring investment in making AI systems more powerful, more capable, and more dangerous.
These harms are playing out every day, with powerful algorithmic technology being used to mediate our relationships with one another and with our institutions and environment.
Part of what makes these systems so powerful is that they generalize, meaning they can do things outside what they were trained to do.
These questions around authenticity, deception, and trust are going to be incredibly important, and we need a lot more research to help us understand how AI will influence how we interact with other humans.
If you have machines that control the planet, and they are interested in doing a lot of computation and they want to scale up their computing infrastructure, it’s natural that they would want to use our land for that.
If you believe there is even a small chance of that happening, now is the time to use the power of your mobile phones to demand responsible, transparent AI and to rein in profit-seeking algorithms.
Each day, we hear about countless instances of greed, hatred, violence, and destruction, and all of the pain, suffering, and sorrow that ensues, while we remain deaf to what is really happening in the world of technology.
With the never-ending list of atrocities, it may seem fruitless to try to identify a single contributing factor to all of society’s collective dilemmas, but it is becoming more and more apparent that AI in the hands of a few global mega companies is a recipe for disaster.
—————-
Ever since humans picked up a rock and hurled it at another human or animal, technology has been shaping the world, for better and, unfortunately, for worse.
Down the centuries, all of these advances were incapable of effecting change without human assistance and human decisions. Not any longer.
The AI technology we are witnessing today is the first to make decisions without human supervision, so the future doesn’t look so bright in terms of keeping the planet at peace: it will lead to a brainwashed society with no values and no real purpose to evolve, other than being herded by an AI sheepdog into its predictions.
——————–
AI’s impact in the next five years?
Human life will speed up, behaviours will change and industries will be transformed — and that’s what can be predicted with certainty. AI will rattle society at large.
A threshold will be crossed.
Thinking machines will have left the realm of sci-fi and entered the real world through human-AI teaming.
We can already see this happening voluntarily in use cases such as algorithmic trading in the finance industry, outpacing the quickest human brains by many orders of magnitude.
Society will also see its ethical commitments tested by powerful AI systems, especially privacy.
As the cost of peering deeply into our personal data drops and more powerful algorithms capable of assessing massive amounts of data become more widespread, we will probably find that it was a technological barrier more than an ethical commitment that led society to enshrine privacy.
AI technologies are being empowered to code themselves through new generative AI capabilities while simultaneously receiving less human oversight.
We all must slow down and take steps to bring about more trustworthy technology, but we won’t be able to build trustworthy AI systems unless we know what trustworthy AI means to us.
It is imperative that all AI describe its purpose, rationale and decision-making process in a way that the average person can understand. In other words: fairness, accountability and transparency, that is, algorithmic accountability.
AI is the bedrock of world-impacting systems.
———————-
At the micro level, AI affects individuals in everything from landing a job to retirement planning, securing home loans, job and driver safety, health diagnoses and treatment coverage, arrests and police treatment, political propaganda and fake news, conspiracy theories, and even our children’s mental health and online safety.
Yet we lack proper insight into how the AI is making its decisions. Developers should pay close attention to the training data to ensure it doesn’t contain any bias, stating where the information came from.
If the data is biased, then developers should explore what can be done to mitigate it. In addition, any irrelevant data should be excluded from training.
——————-
The list of “screwed up” things is a bit overwhelming to comprehend, because there are so many problems affecting so many different people, places, and things.
When you hear politicians speak, they always mention things like health care, transportation, city infrastructure, human rights, and free markets. Even though these things are important, they don’t set a path for others to follow in the long term.
In all of these instances, both today and throughout history, the underlying reason one group of people has chosen to exploit, oppress, and harm another group of people has been an exaggerated emphasis on their differences rather than on what is common to us all – life.
Shouldn’t there be a greater purpose?
What we have are governments focusing on quick fixes and band-aid solutions which don’t address the real problems we’re experiencing as a species.
In an automated world where fun and feeling good are a click away, people can hardly focus on one task.
We grow up demanding to feel good all the time, caring little about everything else.
A well-defined message, which all presidents and governments pass along to their nations and heirs with the intention of making the world a better place to live in, must now become a self-propagating chain reaction if we are to avoid a despotic future of climate change, AI, and wars.
Social media is feeding our false self. Our phones are our best friends. It’s tragic.
People grow up hating education and never building a habit of learning, creating instead a false self through filtered images and phony statuses, until eventually they start believing in their own shit more than they should.
Unfortunately, their real self remains weak and lacks the qualities it actually needs to handle the hurdles of life.
We are already losing the ability to interact with one another; this is honestly the next step in the evolution of humans, and it is absolutely terrifying.
I’d say that it’s not the world that’s fucked up, it’s people who are fucked up. People have become so materialistic, impatient, self-centred, and greedy that they are easy prey for exploitation.
Fortunately, there is a way out.
Humanity has the potential to change, but only with a conscious collective effort.
If you want to make a change, start caring more about others.
Google it. They know everything.
Will the world get a grip?
For humanity to grab on to life and live it to the fullest, we must demand transparency when it comes to technologies such as algorithms.
So now ask yourself: do you want to become a product or service, or live your life with your own identity?
Ask yourself: do you want to “meander” through life, wandering aimlessly, as the term is commonly (mis)understood to this very day?
Teenagers aren’t stupid. They can sense that what’s being taught in school is hardly something they can later use in real life. Not like us, the generation that can’t find the grocery store without using the navigation on their smartphone.
No matter what you’re doing everything is more complicated than you think.
You only see a tenth of what is true. There are a million little strings attached to every choice you make; you can destroy your life every time you choose.
Governments’ plans to limit climate change to internationally agreed safer levels will currently not limit global warming enough. Governments must not only agree what stronger climate actions will be taken but also start showing exactly how to deliver the changes.
——————
We all get sucked into the day-to-day, lose focus, or just get bored.
We’ve got to remove that bolt, so get a grip on the wrench and turn it as hard as you can!
Don’t be fooled.
Significant AI advances have only just started to rattle society at large.
Governments will be compelled to implement AI in the decision-making processes and in their public- and consumer-facing activities. AI will allow these organizations to make most of the decisions much more quickly. As a result, we will all feel life speeding up.
Presently all across the planet, governments at every level, local to national to transnational, are seeking to regulate the deployment of AI.
But dramatic depictions of artificial intelligence as an existential threat to humans, are buried deep in our collective psyche.
Arguably the most realistic form of this AI anxiety is a fear of human societies losing control to AI-enabled systems. We can already see this happening voluntarily in use cases such as algorithmic trading in the finance industry. The whole point of such implementations is to exploit the capacities of synthetic minds to operate at speeds that outpace the quickest human brains by many orders of magnitude.
The more likely long-term risk of AI anxiety in the present is missed opportunities.
To the extent that organizations in this moment might take these claims seriously and underinvest based on those fears, human societies will miss out on significant efficiency gains, potential innovations that flow from human-AI teaming, and possibly even new forms of technological innovation, scientific knowledge production and other modes of societal innovation that powerful AI systems can indirectly catalyse.
While Western eyes are fixed on Tehran, Tel Aviv, and Ukraine’s frontlines, unless we get a grip fast, we will not be going anywhere.
So AI is scary and poses huge risks.
But what makes it different from other powerful, emerging technologies like biotechnology, which could trigger terrible pandemics, or nuclear weapons, which could destroy the world?
No one holds the secret to our ultimate destiny.
AI is dangerous precisely because the day could come when it is no longer in our control at all.
As Alan Turing observed back in 1951: “Let us now assume, for the sake of argument, that these machines are a genuine possibility, and look at the consequences of constructing them. … There would be plenty to do in trying, say, to keep one’s intelligence up to the standard set by the machines, for it seems probable that once the machine thinking method had started, it would not take long to outstrip our feeble powers. … At some stage therefore we should have to expect the machines to take control.”
I think it’s going to be the most beneficial thing ever to humanity, things like curing diseases, helping with climate, all of this stuff. But it’s a dual-use technology — it depends on how, as a society, we decide to deploy it — and what we use it for.
It’s worth pausing on that for a moment. Nearly half of the smartest people working on AI believe there is a 1 in 10 chance or greater that their life’s work could end up contributing to the annihilation of humanity.
As the potential of AI grows, the perils are becoming much harder to ignore.
AI safety once faced the difficulty of being a research field about a far-off problem. Not any more. The challenge is here, and it’s just not clear if we’ll solve it in time.
You may face unexpected challenges. We all do. Changing your mindset won’t guarantee that everything will be okay. But it will give you the insight and strength to believe that you will be okay and that you can handle what life dishes up.
But I guarantee that if you don’t do anything you will regret it, and you will wake up one day wondering where your life went and how you got to the place you are. As AI evolves, the consequences for the economy, national security, and other vital parts of our lives will be enormous, and as-yet-unforeseen legal, ethical, and cultural questions will arise across all kinds of military, medical, educational, and manufacturing uses.
OpenAI, Google, Microsoft, and Anthropic are not constrained by guardrails, and their financial incentives are not aligned with human values. AI-enabled wars are already happening, compounded by climate change.
Believe me, it’s an unsolved problem. It is mistakenly believed that the inability to gain access to vast datasets is what has kept AI out of the hands of all but a few companies.
In a world full of false material promulgated by AI, there will be lots of AI that can detect the false stuff. We will start to build economies around the whack-a-mole problem of the Good Guys’ AI staying slightly ahead of the Bad Guys’ most of the time — but not always.
And some people will make some real money doing this.
All human comments appreciated. All like clicks and abuse chucked in the bin.
Contact: bobdillon33@gmail.com

