(Twenty-minute read)
This post contains its share of contradictions, as I am delving into an area with so many unknowns that are still developing as we read.
You could say that there are many more pressing problems in the world than technological development, which will always be far beyond our ability to respond to it in any democratic manner.
If we are to place our trust in artificial intelligence, it is going to require a high degree of transparency.
As citizens, we must know how and in which context our data is used, and we must feel confident that data storage is carried out in a safe and secure manner.
We should also have insight into the basis on which artificial intelligence acts, so that we may better understand the implications and dilemmas we will have to relate to in the future. Here, it is crucial that we handle the ethical dilemmas jointly – and contribute to the creation of a common framework for a world not owned by Apple, Microsoft and the like.
But how do we create a wide interest in contributing?
How do we ensure that it is not just the technologically initiated who create the framework on behalf of society as a whole?
The next century begins on January 1, 2101.
It might seem miles away, and most if not all of us will have departed this world long before it arrives. However, if we want liberal democracy to survive, or for that matter the earth itself, we need to put aside our smartphones and start defending our common values.
To do this it is important to remember the past and to keep it in mind so that as individuals and as a society we can grow and flourish.
As Emerson said:
“Society is a joint-stock company, in which the members agree, for the better securing of his bread to each shareholder, to surrender the liberty and culture of the eater.”
The current age, with its AI technology, is far from achieving this; rather, with machine learning, data mining and algorithms, it is just beginning to undermine our own social foundations.
The problem is the opacity of algorithmic power: it is not easy to determine when algorithmic governance stops serving the common good and instead becomes the servant of powers that are creating a parallel form of governing alongside the more familiar tools of legislation and policy-setting.
In the coming years, vast fields of human life will be governed by digital code that is both invisible and unintelligible to human beings, with significant political power placed beyond individual resistance and legal challenge.
Soon it will not be easy to determine when algorithmic governance stops serving the common good and instead becomes the servant of greed and inequality.
Once we all have digital ID numbers, it will become impossible to challenge one’s designation.
We are starting to see algorithms used not only to assist in the election of idiots like Donald Trump, but we are also allowing social media platforms to rip apart the institutions that are supposed to stabilise our volatile political world.
Why is this happening? Because our current democratic world is not working.
It seems unwilling to deal with the problems facing the earth while its citizens are being gerrymandered by technology into populist short-term thinking.
As we watch the decline of mainstream parties, the role of money in politics that once shaped government is no longer effective. For the last few decades we have seen countries driven by growth at all costs, with parties and governments responsive primarily to elites or narrow groups of voters rather than to broad cross-sections of the population.
If we stopped and properly analyzed the past, we would realize that our economy was strongest not when untethered free-market capitalism was free to reign, but when our government had pushed for massive social reforms which “artificially” (as some would say) supported the lower and middle classes.
It was this, not the free market, which allowed capitalism for profit to reign supreme in the past, and if we ignore that we can never hope to move forwards, for we will forever be stuck solving the problems of the past, not to mention those of the future.
The result is that citizens feel disregarded and disempowered, with little or no respect for politicians whose capacity to inspire has markedly deteriorated and whose sphere of influence continues to shrink under the pressure of social media.
I say: by ignoring the past we pass up valuable opportunities to learn more about what should be done to solve problems now.
This is the basis for historic achievements such as human rights and the rule of law; however, we are on the threshold of being unable to reconcile these rights with the revolution promised by the Fourth Industrial Revolution.
Due to the lack of access to data and the absence of any worldwide regulation of what can be done with it, there is a high probability that data collected on one pretext will be used for an entirely different purpose.
Take Denmark, which is now distributing benefits using algorithms that are undermining its democracy. The country does not fully appreciate the risks involved in enhancing the welfare state through AI applications.
Liberalism is premised on the belief that the coercive powers of public authorities should be used in the service of individual freedom, and that they should be constrained by laws controlling their scope, limits, and discretion.
The best way to predict the future is to invent it.
Therefore, new systemic set-ups are required that can support the agility needed in a digital age.
The Fourth Industrial Revolution does not stop just because we are not ready to support it.
We must instead get ready. Get ready for a time of driverless cars and artificial intelligence that complements us as human beings, and augmented reality that connects the digital world with our physical one.
But actual legislation is difficult to imagine at the present time because we simply cannot regulate something of which we do not know the extent… The fear is that we will do something wrong because the market is so volatile and immature.
So for the moment instead of legislation, we should be putting in place policy frameworks and certifications as a means of regulating the area:
Accountability is a basic aspect when working on new technology of which we do not yet know the extent, the consequences or the full potential.
Accountability for technological development implies that we discuss solutions and opportunities, and engage in the conflicts and disagreements that will naturally follow in the aftermath – even if we do not know the destination of our train.
Others emphasize that accountability consists of people keeping control of the technology: technology acts on the data fed to it. In other words, people are very much responsible for the data being of the right quality, to avoid so-called bias (distortions) in the data and, thus, in the recommendations that artificial intelligence may make about what potential may be released and what challenges we should be aware of.
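As an illustration only, and not something from any system mentioned in this post, here is a minimal Python sketch of how a skew in historical records reappears as a skew in automated recommendations. The groups, outcomes and counts are all hypothetical, and the “model” is deliberately trivial.

```python
# A minimal sketch of bias in data becoming bias in recommendations.
# The "model" simply recommends whichever outcome it saw most often for
# each group in its training data, so any skew in the historical records
# is reproduced as-is.

from collections import Counter, defaultdict

# Hypothetical historical decisions: (group, outcome). The skew is built in:
# group "A" was approved far more often than group "B".
history = ([("A", "approve")] * 80 + [("A", "reject")] * 20 +
           [("B", "approve")] * 30 + [("B", "reject")] * 70)

def train(records):
    """Count outcomes per group - a stand-in for 'learning from the past'."""
    counts = defaultdict(Counter)
    for group, outcome in records:
        counts[group][outcome] += 1
    return counts

def recommend(model, group):
    """Recommend the most frequent historical outcome for this group."""
    return model[group].most_common(1)[0][0]

model = train(history)
print(recommend(model, "A"))  # -> approve
print(recommend(model, "B"))  # -> reject: the historical skew, automated
```

Nothing in the code "decides" to discriminate; the distortion lives entirely in the data that people chose to feed it, which is the point about responsibility made above.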
Thus, the goal has not been to identify a final result or a single truth that everyone may rally around.
Because the truth is that there are many attitudes toward artificial intelligence.
From how the area should be anchored politically, to how to ensure that everyone enjoys the benefits of technological development and what barriers may exist to that development.
From how the savings arising from increased automation and increased use of artificial intelligence are used to create value for citizens.
From how to decide quickly on specific projects and ensure rapid implementation.
Although EU legislation may be relevant, technology is a cross-border issue, so international guidelines are equally important, as many global companies are based in the US and China.
Finally, we have the problem of engagement.
None of us, just like our forefathers and all who came before them, has any idea what the world is going to be like in the future, but the addictive technologies that have captured the attention and mind space of the youngest generation will shape its foundations.
The long-term effects of children growing up with screen time are not well understood, but the early signs are not encouraging: poor attention spans, anxiety, depression and a lack of in-person social connections are among the correlations already seen, along with the small number of teens who become addicts and non-functioning adults.
All in all, digital life is now threatening our psychological, economic and political well-being. People’s cognitive capabilities will be challenged in multiple ways, including their capacity for analytical thinking, memory, creativity, reflection, and mental resilience.
The digital divide will become worse, and many will be unable to pay for all the conveniences. Convenience will be chosen over freedom. Perhaps.
The more the culture equates knowledge with data and social life with social media, the less time is spent on the path of wisdom, a path that always requires a good quotient of self-awareness.
We’ve reached a phase in which men (always men) believe that technology can solve all of our social problems. Increasingly, social media is eroding people’s real communication skills and working knowledge. Major industries – energy, religion, the environment and so on – are rotting from a lack of new leadership.
Some of these technologies are already operating without a person’s knowledge or consent. People cannot opt out, advocate for themselves, or fix errors about themselves in proprietary algorithms.
So the platforms will necessarily compromise humanity, democracy and other essential values. The larger the companies grow, the more desperate and extractive they will have to become to grow still further. Facebook and Twitter have become heavily ingrained in the democratic process; their digital footprint is not limited to a readership or viewing area.
We will see a reduction of engagement with and caring for the environment as a result of increased interaction with online and digital devices.
The society-wide effects of ‘continuous partial attention’ and of the tracking, analysis and corrupt use of data trails are only beginning to be realized. Without tenacity, self-control and some modicum of intelligence about the agenda of social media, the interruption generation will miss out on the greatness that could be theirs.
Digital life will take people’s privacy and influence their opinions. People will be fed news and targeted information that they will believe, since they will not have access to the information needed to make up their own minds.
Out of convenience, people will accept limitations of privacy and narrowed information resources. Countries or political entities will be the influencers of certain groups of people. People will become more divided, more paranoid as they eventually understand that they have no privacy and need to be careful of what they say, even in their own homes.
Understanding well-being in terms of human flourishing – which includes, among other things, the exercise of autonomous agency and the quality of human relationships – it seems clear to me that the ongoing structuring of our lives by digital technologies will only continue to harm human well-being.
This is a psychological claim, as well as a moral one. Unless we are able to regulate our digital environments politically and personally, it is likely that our mental and moral health will be harmed by the agency-undermining, disempowering, individuality-threatening and exploitative effects of the late-capitalistic system marked by the attention-extracting global digital communication firms.
You see it everywhere. People with their heads down, more comfortable engaging with a miniature world-in-a-box than with the people around them.
At the same time, increasingly sophisticated technology for emotion and response manipulation is being developed. This includes devices such as Alexa and other virtual assistants designed to be seen as friends and confidants. Alexa is an Amazon interface – owned and controlled by a giant retailer: she’s designed, ultimately, to encourage you to shop, not to enhance your sense of well-being.
It remains to be seen whether any of the promises made by digital technology companies will benefit mankind beyond profit for profit’s sake. The ethics of software development and the idea that technology should be designed to enhance people’s well-being are both principles that should be stressed as part of any education in software design.
Proponents of an elusive work-life balance may argue that you can always switch off digital technology, but the reality is that it is not being switched off – not because it cannot be, but because there is now a socio-cultural expectation to be always available and responding in real time.
What we are seeing now becoming reality are the risks and uncertainties that we have allowed to emerge at the fringes of innovation.
The technological path we are on, and how we evaluate the techno-social engineering of humans, must be challenged NOW, not in the future.
Technology will be needed if we are to develop beyond being a one-planet species.
Conditions of modern life could be driving changes in the makeup of our genes. Our bodies and our brains may not be the same as those of our descendants.
Technology may well put an end to the brutal logic of natural selection with evolution becoming purely cultural.
This gives us good grounds for thinking that evolution (whether biological, memetic or technological) will continue to lead in desirable directions.
There is no genetic or evolutionary reason that we could not still be around to watch the sun die. Unlike ageing, extinction does not appear to be genetically programmed into any species.
Meanwhile there is gradual progress in neuroscience and artificial intelligence, and eventually, it will become possible to isolate individual cognitive modules and connect them up to modules from other uploaded minds…
Modules that conform to a common standard would be better able to communicate and cooperate with other modules and would, therefore, be economically more productive, creating pressure for standardization…
I think the next decade will be one of retrenchment and adjustment, while society sorts out how to deal with our perhaps over-optimistic construction of the digital experience.
The addictive nature of social media means the dis-benefits could be profound.
There is a reason the iPhone was initially called a ‘crack-phone’.
There might be no niche left for the mental architecture of humankind.

All human comments appreciated. All abuse and like clicks chucked in the bin.