Tags
Age of Uncertainty, AI, AI regulations, AI systems, Algorithms, Technology, The Future of Mankind, Visions of the future
(Three-minute read)
Artificial intelligence already suffers from three key issues – privacy, bias and discrimination – which, if left unchecked, can start infringing on, and ultimately take control of, people’s lives.
As digital technology became integral to the capitalist market dystopia of the first decades of the 21st century, it not only refashioned our ways of communicating but of working and consuming, indeed ways of living.
Then along came the Covid-19 pandemic, which revealed not only the lack of investment, planning and preparation behind the scandalously slow responses of states around the world, but also grotesque class and racial inequalities as it coursed its way through the population, while the owners of high-tech corporations were enriched by tens of billions.
It’s already too late to get ahead of this generative AI freight train.
The growing use of AI has already transformed the way the global economy works.
Against this backdrop, AI can be used to profile people like you and me in such detail that it may well become more than uncomfortable! And this is no exaggeration.
This is just the tip of the iceberg!
So what, if anything, can be done to ensure responsible and ethical practices in the field?
Concern over AI development has accelerated in recent months following the launch of OpenAI’s ChatGPT last year, which sparked the release of similar chatbots by other companies, including Google, Snap and TikTok. With the growing realization that vast numbers of people can be fooled by the content these chatbots gleefully spit out, the clock is now ticking not just towards the collapse of the values that enshrine human life but towards a threat to the very existence of the human race.
“This is not the future we want.”
Now there is no option but to put in place international laws, not merely voluntary regulations, before AI infringes human rights. However, as we are witnessing with climate change, achieving any global cooperation is a bit of a problem.
From the climate crisis to our suicidal war on nature and the collapse of biodiversity, our global response is too little, too late. Technology is moving ahead without guard rails to protect us from its unforeseen consequences.
So we face two contrasting futures: one of breakdown and perpetual crisis, and another of breakthrough to a greener, safer future. The latter would herald a new era for multilateralism, in which countries work together to solve global problems.
In order to achieve these aims, the Secretary-General of the United Nations recommends a Summit of the Future, which would “forge a new global consensus on what our future should look like, and how we can secure it”. The need for international co-operation beyond borders makes a lot of sense, especially these days, because the role of the modern corporation in shaping the impact of AI is in conflict with the common values needed to survive.
It rests on the principle of working together: recognizing that we are bound to each other and that no community or country, however powerful, can solve its challenges alone. Any national government is, of course, guided by its own set of localised values and realities.
But geopolitics, I would argue, always underlies any such ambition. The immaturity of the ‘Geopolitics of AI’ field leaves the picture incomplete and unclear, which is why agreed international common laws are required.
Let Ireland hold such a Summit.
This summit could coordinate efforts to bring about inclusive and sustainable policies that enable countries to offer basic services and social protection to their citizens, with universal laws that define the several capabilities of AI – i.e. identify the ones that are more susceptible to misuse than others.
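One way to make “identify the capabilities more susceptible to misuse” concrete is a risk-tier classification, loosely in the spirit of risk-based regulatory proposals such as the EU’s AI Act. A minimal sketch follows; the tier names and example use cases are illustrative assumptions, not any jurisdiction’s actual classification:

```python
# Illustrative risk-tier mapping for AI use cases. The tiers and
# examples are assumptions for the sake of the sketch, not taken
# from any actual law or regulation.
RISK_TIERS = {
    "unacceptable": ["social scoring of citizens", "real-time mass surveillance"],
    "high": ["medical diagnosis", "credit scoring", "hiring decisions"],
    "limited": ["chatbots", "content recommendation"],
    "minimal": ["spam filtering", "spell checking"],
}

def tier_of(use_case: str) -> str:
    """Return the risk tier a given use case falls under."""
    for tier, uses in RISK_TIERS.items():
        if use_case in uses:
            return tier
    return "unclassified"

print(tier_of("credit scoring"))   # high
print(tier_of("spell checking"))   # minimal
```

The point of the structure is that obligations (evaluation, licensing, outright bans) can then be attached per tier rather than negotiated case by case.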
(Understanding the current environment in which any product is built or research conducted is incredibly important, and it will be critical to forging a path towards safe and beneficial AI.)
The challenges are great, and the lessons of the past cannot be simply superimposed onto the present.
For example:
The designers of AI technologies should satisfy legal requirements for safety, accuracy and efficacy for well-defined use cases or indications. In the context of health care, this means that humans should remain in control of health-care systems and medical decisions; privacy and confidentiality should be protected, and patients must give valid informed consent through appropriate legal frameworks for data protection.
Another example: the collection of data, which is the backbone of AI.
Transparency requires that sufficient information be published or documented before the design or deployment of an AI technology. Such information must be easily accessible and facilitate meaningful public consultation and debate on how the technology is designed and how it should or should not be used.
It is the responsibility of stakeholders to ensure that AI technologies are used under appropriate conditions and by appropriately trained people. Effective mechanisms should be available for questioning, and for redress for, individuals and groups adversely affected by decisions based on algorithms.
Laws are needed to ensure that AI systems are designed to minimize their environmental consequences and increase energy efficiency.
If we want to eliminate the black-box approach, explainability for AI must be mandatory – agreeing or not should not be an option.
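What explainability means in practice is that every output can be traced back to the inputs that produced it. A toy sketch (the model, features and weights here are entirely made up for illustration) of a scoring system where each feature’s contribution to the decision can be read off directly, rather than hidden in a black box:

```python
# Toy explainable model: a linear score whose per-feature
# contributions are directly inspectable. Feature names and
# weights are invented for illustration only.
weights = {"income": 0.5, "debt": -0.8, "years_employed": 0.3}

def score(applicant: dict) -> float:
    """Overall score: weighted sum of the applicant's features."""
    return sum(weights[f] * applicant[f] for f in weights)

def explain(applicant: dict) -> dict:
    """Per-feature contribution to the score, largest impact first."""
    contribs = {f: weights[f] * applicant[f] for f in weights}
    return dict(sorted(contribs.items(), key=lambda kv: -abs(kv[1])))

applicant = {"income": 4.0, "debt": 2.0, "years_employed": 5.0}
print(score(applicant))    # 4.0*0.5 - 2.0*0.8 + 5.0*0.3 = 1.9
print(explain(applicant))  # income and debt dominate the decision
```

A regulator, or an affected person seeking redress, can then ask which factor drove the decision – the very question a self-learning black box cannot answer.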
While AI can be extraordinarily useful, it is already out of control, with self-learning algorithms that no one can understand or bring to account.
These profit-seeking, skewed algorithms owned by corporations are causing racial and gender-based discrimination.
I firmly believe that the Government must engage in meaningful dialogue with other countries on the common international laws now needed to subject developers to a rigorous evaluation process, and to ensure that entities using the technology act responsibly and are held accountable.
Having said that, governments must keep their roles limited and not assume absolute powers.
Multiple actors are jostling to lead the regulation of AI.
The question business leaders should be focused on at this moment, however, is not how or even when AI will be regulated, but by whom.
Governments have historically had trouble attracting the kind of technical expertise required even to define the kinds of new harms LLMs and other AI applications may cause.
Perhaps a licensing framework is needed to strike a balance between unlocking the potential of AI and addressing potential risks.
Or
AI ‘Nutrition Labels’ that would explain exactly what went into training an AI, and which would help us understand what a generative AI produces and why.
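As a sketch of what such a label might contain, here is a machine-readable “nutrition label” rendered into human-readable text. All field names and values are hypothetical – there is no agreed standard for such labels yet:

```python
import json

# Hypothetical "AI nutrition label": a structured summary of what
# went into training a model. Field names are illustrative only,
# not part of any existing standard.
nutrition_label = {
    "model_name": "example-chatbot-1",
    "training_data_sources": ["licensed news archives", "public web crawl"],
    "data_cutoff_date": "2023-01-01",
    "known_limitations": ["may produce factual errors", "English-centric"],
    "intended_uses": ["drafting text", "summarisation"],
    "prohibited_uses": ["medical diagnosis", "legal advice"],
    "energy_estimate_kwh": None,  # unknown / not disclosed
}

def render_label(label: dict) -> str:
    """Render the label as human-readable text, one field per line."""
    lines = []
    for key, value in label.items():
        pretty = key.replace("_", " ").capitalize()
        lines.append(f"{pretty}: {value}")
    return "\n".join(lines)

# The same label can be published as JSON for machine consumption
# and rendered as plain text for consumers.
print(json.dumps(nutrition_label, indent=2))
print(render_label(nutrition_label))
```

Because the label is structured data rather than marketing prose, regulators could require specific fields and automatically flag models whose labels leave them blank.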
Or
Take Meta’s open-source approach, which contrasts sharply with the more cautious, secretive inclinations of OpenAI and Google. With open-source models like this and Stable Diffusion already out there, it may be impossible to get the genie back into the bottle.
The metaverse is not well understood or appreciated by the media and the public. It is much, much bigger than one company, and conflating the two only complicates the matter.
Governments should never again face a choice between serving their people or servicing their debt.
Still, the most promising way not to provoke the sorcerer would be to avoid making too big a mess in the first place.
All human comments appreciated. All like clicks and abuse chucked in the bin.
Contact: bobdillon33@gmail.com
AI systems must do what we want them to do.