(Five-minute read)
Yes is the answer.
Right now, the state of the safety field is far behind the soaring investment in making AI systems more powerful, more capable, and more dangerous.
Using artificial intelligence (AI) technology to replace human decision-making will inevitably create new risks whose consequences are unforeseeable.
The more you put in, the more you get out.
That’s what drives the breathless energy that pervades so much of AI right now.
The consequences of these capabilities and systems, both intended and unintended, are significant, and growth in sensing technology will have far-reaching implications for our social norms and systems.
Data gathering is not inherently negative; what matters is how transparent companies are about the information they collect and the choices they make about how that data is used.
The growing ubiquity of algorithms in society raises fundamental questions about the governance of data, the transparency of algorithms, legal and ethical frameworks for automated algorithmic decision-making, and the societal impacts of algorithmic automation itself. We are now rushing to regulate, in ignorance of their impact, with law and regulation that cannot deal with them adequately.
However, AI technology can provide sufficient transparency in explaining how AI decisions are made.
Transparency ex post can often be achieved through retrospective analysis of the technology’s operations, and will be sufficient if the main goal is to compensate victims of incorrect decisions.
Ex ante transparency is more challenging, and can limit the use of some AI technologies such as neural networks. It should only be demanded by regulation where the AI presents risks to fundamental rights, or where society needs reassuring that the technology can safely be used.
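To make the ex post idea concrete: a deployed model can be audited after the fact by probing which inputs actually drove its decisions. The sketch below is only an illustration, not part of any regulation or standard; the dataset, the model, and the use of scikit-learn’s permutation_importance are assumptions chosen for brevity.

```python
# Illustrative sketch of "ex post" transparency: audit a trained model retrospectively.
# Assumes scikit-learn is installed; the dataset and model are arbitrary examples.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Retrospective analysis: which input features most influenced the model's decisions?
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
for name, importance in sorted(zip(X.columns, result.importances_mean),
                               key=lambda item: item[1], reverse=True)[:5]:
    print(f"{name}: {importance:.3f}")
```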
One thing we’re definitely not doing:
Understanding them better. As we develop more powerful systems, that gap in understanding will go from an academic puzzle to a huge, existential question. If anything, as the systems get bigger, interpretability (the work of understanding what’s going on inside AI models, and making sure they’re pursuing our goals rather than their own) gets harder.
We’re now at the point where powerful AI systems can be genuinely scary to interact with.
AI poses some wider concerns, including data monopolies and challenges to democracy, public participation and the public interest. Given the speed of development in the field, it’s long past time to move beyond a reactive mode, one where we only address AI’s downsides once they’re clear and present.
There is enormous opportunity for positive social impact from the rise of algorithms and machine learning. But this requires a licence to operate from the public, based on trustworthiness.
The very concept of fairness as an ethical value has not yet been sufficiently explored. Any regulations should ensure that systems adhering to them are safe beyond a reasonable doubt. However, there is currently no specific regulation on AI and algorithmic decision-making in place.
Decisions concerning AI at a societal level should not be in the hands of “unelected tech leaders”.
We can’t only think about today’s systems, but where the entire enterprise is headed.
Most AI systems today are black box models: systems that are viewed only in terms of their inputs and outputs. Scientists do not attempt to decipher the “black box,” or the opaque processes that the system undertakes, as long as they receive the outputs they are looking for.
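As a toy illustration of that idea (entirely hypothetical, not any real product), think of a black box model as a function we are only allowed to call, never to open:

```python
# Toy illustration of a "black box" model: we can only observe inputs and outputs.
# The internals below stand in for millions of learned parameters we cannot interpret.

def black_box_model(loan_application: dict) -> str:
    """Pretend this is an opaque, learned decision system (hypothetical)."""
    score = (0.7 * loan_application["income"] / 1000
             - 0.3 * loan_application["existing_debt"] / 1000
             + 0.1 * loan_application["years_employed"])
    return "approve" if score > 20 else "reject"

# All we can do from outside is probe it with inputs and record the outputs.
applicant = {"income": 42_000, "existing_debt": 9_000, "years_employed": 3}
print(black_box_model(applicant))  # -> "approve" or "reject", with no explanation why
```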
With quantum self-learning systems, it would be possible to build brains that could reproduce themselves on an assembly line and that would be conscious of their own existence.
———————–
This particular mad science might kill us all.
Here’s why.
This approach to AI, called deep learning, started significantly outperforming other approaches to computer vision, language, translation, prediction, generation, and countless other problems.
The shift was about as subtle as the asteroid that wiped out the dinosaurs, as neural network-based AI systems smashed every other competing technique on everything from computer vision to translation to chess.
No one has yet discovered the limits of this principle, even though major tech companies now regularly do eye-popping multimillion-dollar training runs for their systems.
It’s not simply what they can do, but where they’re going.
With deep learning, improving systems doesn’t necessarily involve or require understanding what they’re doing. Often, a small tweak will improve performance substantially, but the engineers designing the systems don’t know why.
Intelligent agency is an extremely powerful force, and creating agents much more intelligent than us is playing with fire, especially given that, if their objectives are problematic, such agents would plausibly have instrumental incentives to seek power over humans. After all, we can’t even pinpoint the exact reasons for our own preferences, emotions, and desires at any given moment.
Current language models remain limited.
They lack “common sense” in many domains, still make basic mistakes about the world a child wouldn’t make, and will assert false things unhesitatingly. But the fact that they’re limited at the moment is no reason to be reassured.
As hard as that will likely prove, getting AI systems to behave themselves outwardly may be much easier than getting them to actually pursue our goals and not lie to us about their capabilities and intentions.
What makes it different from other powerful, emerging technologies like biotechnology, which could trigger terrible pandemics, or nuclear weapons, which could destroy the world?
The difference is that these tools, as destructive as they can be, are largely within our control.
If they cause catastrophe, it will be because we deliberately chose to use them, or failed to prevent their misuse by malign or careless human beings.
But AI is dangerous precisely because the day could come when it is no longer in our control at all. The result will be highly capable, non-human agents actively working to gain and maintain power over their environment: agents in an adversarial relationship with humans who don’t want them to succeed.
As Alan Turing warned decades ago: “Let us now assume, for the sake of argument, that these machines are a genuine possibility, and look at the consequences of constructing them. … There would be plenty to do in trying, say, to keep one’s intelligence up to the standard set by the machines, for it seems probable that once the machine thinking method had started, it would not take long to outstrip our feeble powers. … At some stage therefore we should have to expect the machines to take control.”
So a powerful AI system that is trying to do something, while having goals that aren’t precisely the goals we intended it to have, may do that something in a manner that is unfathomably destructive. This is not because it hates humans and wants us to die, but because it didn’t care and was willing to, say, poison the entire atmosphere, or unleash a plague, if that happened to be the best way to do the things it was trying to do.
But while divides remain over what to expect from AI — and even many leading experts are highly uncertain — there’s a growing consensus that things could go really, really badly.
It’s worth pausing on that for a moment.
Nearly half of the smartest people working on AI believe there is a 1 in 10 chance or greater that their life’s work could end up contributing to the annihilation of humanity.
It’s not legal for a tech company to build a nuclear weapon on its own. But private companies are building systems that they themselves acknowledge will likely become much more dangerous than nuclear weapons.
For me, the moment of realization — that this is something different, this is unlike emerging technologies we’ve seen before — came from talking with GPT-3, telling it to answer the questions as an extremely intelligent and thoughtful person, and watching its responses immediately improve in quality.
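Anyone who wants to repeat that experiment can do so in a few lines. The sketch below assumes the official openai Python client (v1 or later) and an API key in the environment; the model name is illustrative, and the “thoughtful person” instruction is just the framing described above, not a special API feature.

```python
# Minimal sketch of the "answer as a thoughtful person" framing.
# Assumes: `pip install openai` (v1+) and OPENAI_API_KEY set; model name is illustrative.
from openai import OpenAI

client = OpenAI()

question = "What should societies do about risks from increasingly capable AI?"

# Plain prompt, no framing.
plain = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": question}],
)

# Same question, framed with the persona instruction.
framed = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {"role": "system",
         "content": "Answer as an extremely intelligent and thoughtful person."},
        {"role": "user", "content": question},
    ],
)

print(plain.choices[0].message.content)
print(framed.choices[0].message.content)  # often noticeably more careful and structured
```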

The challenges are here, and it’s just not clear if we’ll solve them in time.
One only has to look at the photo above: a “wake-up call”.
Speed is really important here.
“I don’t think ever in the history of human endeavour has there been as fundamental potential technological change as is presented by artificial intelligence,” Biden said at a news conference earlier this month. “It is staggering. It is staggering.” He does a lot of that.
If we act too slowly, we will already be behind by the time we take action, and any actions will be leapfrogged by the technology.
“My administration is committed to safeguarding Americans’ rights and safety while protecting privacy, to addressing bias and misinformation, to making sure AI systems are safe before they are released,” Biden said.
This is hogwash.
If governments don’t step in, who will fill their place? AI, of course.
Even if these narrower issues are solved, governments in all political contexts run the risk of unlawfully exploiting AI surveillance technology to obtain certain political objectives.
All countries with a population of at least 250,000 are using some form of AI surveillance system to monitor their citizens. “Some autocratic governments – for example, China, Russia, Saudi Arabia – are exploiting AI technology for mass surveillance purposes.”
One way of looking at the issue is not simply to focus on the surveillance technology, but on “the export of authoritarianism.”
One way to try to ensure continued political survival is to look to technology to enact repressive policies and to suppress the population from expressing anything that would challenge the state.
AI will be the key to military superiority, and investing in AI is a way to ensure and maintain dominance and power in the future.
There are plenty of problems with surveillance, but it may also be a fact of life going forward—and something people will need to get used to. Within a world where your data is everywhere, devices listen to your words, cameras monitor your face and GPS systems know your whereabouts, ubiquitous organizational tracking may be inevitable.
But like so many things, it’s not the what, it’s the how.
If tracking is used as a gotcha strategy, in which the goal is to catch people misbehaving or to punish them, then relationships with employees and the culture will pay a steep price.
Ultimately, we need to do what’s right, not just what’s possible, by using our values as a guide to how we use these technologies.
All human comments appreciated. All like clicks and abuse chucked in the bin.
Contact : bobdillon33@gmail.com