Tags
Algorithms, Artificial Intelligence, Capitalism vs. the Climate, Climate Change, Technology, The Future of Mankind, Visions of the Future.
(Four-minute read)
This year, the world got a rude awakening to the insane power of AI when OpenAI unleashed ChatGPT, now powered by GPT-4. This AI text generator and chatbot replicates human-written content so well that even AI-detection software struggles to tell the difference between the two.
This is not an alien invasion of intelligent machines; it’s the result of our own efforts to make our infrastructure and our way of life more intelligent.
It’s part of human endeavour. We merge with our machines. Ultimately, they will extend who we are.
Our mobile phones, for example, make us more intelligent and better able to communicate with each other. They are really part of us already. A phone might not be literally connected to you, but nobody leaves home without one.
It’s like half your brain.
Thinking of AI as a futuristic tool that will lead to immeasurable good or harm is a distraction from the ways we are using it now.
How do we ensure that the AI we build, which might very well be significantly smarter than any person who has ever lived, is aligned with the interests of its creators and of the human race?
What if, at some point in the near future, computer scientists build an AI that passes a threshold of superintelligence and can build other superintelligent AIs?
An unaligned superintelligent AI could be quite a problem.
For example, we’ve been predicting for decades that AI will replace radiologists, but machine learning for radiology is still a complement for doctors rather than a replacement. Let’s hope this is a sign of AI’s relationship to the rest of humanity—that it will serve willingly as the ship’s first mate rather than play the part of the fateful iceberg.
No laws can prevent China, Russia, a terrorist network, or a rogue psychopath from developing the most manipulative and dishonest AI you could possibly imagine.
We can’t trust some speculative future technology to rescue us.
Climate change is already killing people, and many more people are going to die even in a best-case scenario, but we get to decide now just how bad it gets.
Action taken decades from now is much less valuable than action taken soon.
The first role AI can play in climate action is distilling raw data into useful information – taking big datasets, which would take too much time for a human to process, and pulling information out in real time to guide policy or private-sector action.
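As a toy illustration of that distillation step, here is a minimal Python sketch (synthetic data, an invented threshold, no real climate pipeline) that reduces a year of hourly sensor readings to the few anomalous events a human would actually need to review:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "raw data": a year of hourly readings from one sensor.
# A real pipeline would ingest satellite or grid-sensor feeds instead.
readings = rng.normal(loc=15.0, scale=2.0, size=24 * 365)
readings[4000] = 40.0  # inject one extreme event

# Distil: keep only the points a policymaker should actually see.
z_scores = (readings - readings.mean()) / readings.std()
alerts = np.flatnonzero(np.abs(z_scores) > 5)

print(f"{alerts.size} anomalous hour(s) flagged at indices {alerts}")
```

The point is not this particular statistic; it is that software can scan a volume of data no analyst could read in real time and surface only what demands a decision.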
Everyone wants a silver bullet to solve climate change; unfortunately there isn’t one. But there are lots of ways AI can help fight climate change. While there is no single big thing that AI will do, there are many medium-sized things.

Most movies about AI have an “us versus them” mentality, but that’s really not the case.
Even if one were to stand on the side of curious scepticism (which feels natural), we ought to be fairly terrified by this nonzero chance of humanity inventing itself into extinction.
AI is, for now, pure software blooming inside computers. Someday soon, however, AI might read everything, literally everything, swallowing it all into a black hole from which not even God knows what will be recycled.
Just shovel ever-larger amounts of human-created text into its maw, and wait for wondrous new skills to manifest. With enough data, this approach could perhaps even yield a more fluid intelligence, or a humanlike artificial mind akin to those that haunt nearly all of our mythologies of the future.
On the syllabus at the moment: a decent fraction of all the surviving text that we have ever produced.
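To make that recipe concrete, here is a minimal sketch in Python: a toy bigram model, absurdly far from GPT-4, but the same "learn from text, then sample" idea in miniature (the corpus and the sampling loop are both invented for illustration):

```python
import random
from collections import Counter, defaultdict

# A one-line stand-in for "a decent fraction of all surviving text".
corpus = "the cat sat on the mat and the dog sat on the rug".split()

# "Shovel text into its maw": count which word follows which.
transitions = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    transitions[current][nxt] += 1

# Generate: repeatedly sample a likely continuation of the last word.
random.seed(0)
word, output = "the", ["the"]
for _ in range(8):
    choices = transitions.get(word)
    if not choices:  # dead end: word only appears at the corpus edge
        break
    word = random.choices(list(choices), weights=choices.values())[0]
    output.append(word)

print(" ".join(output))
```

Scale the corpus up by a dozen orders of magnitude and the statistics by a few more, and the wondrous new skills are what you are waiting for.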
Codifying this philosophy into a set of wise laws and regulations to ensure the good behaviour of our superintelligent AI (laws making it illegal, for example, to develop AI systems that manipulate domestic or foreign actors) is pie in the sky.
In the next decade, autocrats and terrorist networks could cheaply build diabolical AI capable of accomplishing some of the goals outlined in the Yudkowsky story. The key issue is not "human-competitive" intelligence (as his open letter puts it); it's what happens after AI gets to smarter-than-human intelligence.
Key thresholds here may not be obvious.
We definitely can't calculate in advance when that will happen, and it currently seems imaginable that a research lab would cross critical lines without noticing.
AT THE MOMENT, ALL WE HAVE IS A COPING MECHANISM.
Like non-proliferation laws for nuclear weaponry that are hard to enforce.
Nuclear weapons require raw material that is scarce and needs expensive refinement. Software is easier, and this technology is improving by the month.

We have years to debate how education ought to change in response to these tools, but something interesting and important is undoubtedly happening.
If we figured out how people are going to share in the wealth that AI unlocks, then I think we could end up in a world where people don’t have to work to eat, and are instead taking on projects because they are meaningful to them.
But where do AI companies get this truly astonishing amount of high-quality data from?
Well, to put it bluntly, they steal it.
But as it stands, the AI boom might be approaching a flashpoint where these models can't avoid consuming their own output, leading to a gradual decline in their effectiveness. This will only accelerate as AI-generated content pervades the internet over the coming years, making it harder and harder to source genuine human-made content.
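A back-of-the-envelope way to see why: the sketch below (a toy statistical caricature, not a real language model) fits a distribution to some data, samples from the fit, refits on those samples, and repeats. Generation after generation, the spread of the data tends to shrink, which is the collapse in miniature:

```python
import numpy as np

rng = np.random.default_rng(0)

# "Genuine human-made content": a small sample from a rich distribution.
data = rng.normal(loc=0.0, scale=1.0, size=20)

for generation in range(1, 31):
    mu, sigma = data.mean(), data.std()
    # The next "model" is trained only on the previous model's output.
    data = rng.normal(mu, sigma, size=20)
    if generation % 5 == 0:
        print(f"generation {generation:2d}: spread = {data.std():.3f}")
```

Each refit loses a little of the original variety, and there is no way to get it back without fresh human-made data.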
AI is viewed as a strategic technology to lead us into the future.
So, what should be done?
- Many people lack a full understanding of AI and therefore are more likely to view it as a nebulous cloud instead of a powerful driving force that can create a lot of value for society;
- Instead of writing off AI as too complicated for the average person to understand, we should seek to make AI accessible to everyone in society. It shouldn’t be just the scientists and engineers who understand it; through adequate education, communication and collaboration, people will understand the potential value that AI can create for the community.
- We should democratize AI, meaning that the technology should belong to and benefit all of society; and we should be realistic about where we are in AI’s development.
- Most of the achievements we have made are, in fact, based on having a huge amount of (labelled) data, rather than on AI’s ability to be intelligent on its own. Learning in a more natural way, including unsupervised or transfer learning, is still nascent and we are a long way from reaching AI supremacy.
From this point of view, society has only just started its long journey with AI and we are all pretty much starting from the same page. To achieve the next breakthroughs in AI, we need the global community to participate and engage in open collaboration and dialogue.
If this does not happen, and happen sooner rather than later, it will be AI that will be calling the shots.
All human comments appreciated. All like clicks and abuse chucked in the bin.
Contact: bobdillon33@gmail.com
https://time.com/6266923/ai-eliezer-yudkowsky-open-letter-not-enough/