
(Ten-minute read) 


We cannot come back from Extinction.

BECAUSE the world we now have is a tragic, begging world.


Because humanity is entering a new age, in which we are faced not only with existential risks from our natural environment but also with those of our own creation.

Two hundred years ago we didn’t understand the basic causes of a pandemic.

Because the world is now so interconnected (with a vastly larger population than at any time in our history), pandemics have more opportunities not only to spread new diseases but also to originate from new sources.

Because we are reaching “The Precipice”: a time when we have gained the ability to pose an existential risk to ourselves that is substantially greater than the natural background risks we faced before.

Because we need to understand our vulnerability and determine what steps must be taken to end this pandemic. 

Because not so long ago more than one “bomb” was created: one, in the hands of a few decision-makers, that could destroy civilization; the other, the Internet, not in anyone’s hands at all.

The current bombs are created by our continuing abuse of the planet we live on. They are not in the hands of deranged dictators or impeached presidents but in the hands of a system of exploitation called capitalism, now increasingly driven by profit-seeking algorithms.

Because we think everyone’s interests matter equally, approaches to improving the world should be evaluated mainly in terms of their potential for long-term impact.

Because there are vast differences in the effectiveness of working on different global problems, we need better global prioritization.

Because as technology improves and the world economy grows or shrinks, it is getting easier to cause destruction on an ever-larger scale.

(New transformative technologies may promise a radically better future but also pose catastrophic risks.)

Because Covid-19 is showing us that there are many available avenues for improving the world in terms of social value.

Because now there is much uncertainty around the value of specific options.

Because not enough cause prioritization is done, from the perspective of total social welfare.

The reasons are endless.

However, to build the new field of ‘global priorities’ research, to work out which global problems are most pressing, and to make progress on foundational questions about how best to address them, we first need to understand that just as the fate of gorillas currently depends on the actions of humans, the fate of humanity may come to depend more on the actions of machines than on our own.

Rapid progress in machine learning has raised the prospect that algorithms will one day be able to do most or all of the mental tasks currently performed by humans. This could ultimately lead to machines that are much better at these tasks than humans.

(New transformative technologies may promise a radically better future but they also pose catastrophic risks. How one might design a highly intelligent machine to pursue realistic human goals safely is very poorly understood.)

Comparing global problems involves lots of uncertainty and difficult judgment calls.

If AI research continues to advance without enough work going into the problem of controlling such machines, catastrophic accidents become much more likely. Yet work on mitigating many of these risks remains remarkably neglected.

Prioritization research is still in its infancy. With almost no commonly prioritized interventions, it remains impossible to assume any shared set of world values.

Why is this? 

Once again the reasons are numerous, but they can be encapsulated in one word: inequality, in all its forms.

A case has to be made before we can develop a deep understanding of an area that requires prioritization – climate change, for example.

Covid-19 is shining a light on the need to develop world health and biological research, which has received very little attention until now.

While billions are spent making AI more powerful, there are fewer than 100 people in the world working on how to make AI safe. 

AI could lead to extremely positive developments, presenting solutions to now-intractable global problems, but it also poses severe risks.

Humanity’s superior intelligence is pretty much the sole reason that it is the dominant species on the planet. If machines surpass humans in intelligence, we could easily lose that dominant position.

This is why it is necessary that we create a ranked list of the problems we think are most pressing for people: problems like promoting civilizational resilience, mitigating great-power conflict, and laying the foundations for the governance of outer space.

If you have read this post, you could not be blamed for thinking that the idea of creating a new list of global problems is worthless. BUT ALL THE TALK AND AGREEMENTS IN THE WORLD ARE WORTHLESS UNLESS THEY ARE FUNDED AND MADE PROACTIVE.

(See previous posts on creating such a fund – a 0.005% World Aid Fund – before the current third revolution of Artificial Intelligence invents a machine with intelligence that far surpasses our own.)

All human comments appreciated. All like clicks and abuse chucked in the bin.