
bobdillon33blog

~ Free Thinker.


Tag Archives: Artificial Intelligence.

THE BEADY EYE ASKS: WHO IS GOING TO BE RESPONSIBLE WHEN ARTIFICIAL INTELLIGENCE GOES WRONG.

02 Thursday Mar 2017

Posted by bobdillon33@gmail.com in Artificial Intelligence.


Tags

AI systems., Artificial Intelligence., Computer technology, Current world problems, Machine learning., quantum computing, Robots., Smart machine, The Future of Mankind

 

(A twelve-minute read for programmers and code writers.)

I think most people are worrying about the wrong things when they worry about Robots and AI.

However, with AI and robotics positioned to impact all areas of society, we would be remiss not to set things in motion now to prepare for a very different world in the future.

The danger is not AI itself but rather what people do with the AI. The repercussions of AI technology are going to be profound; limited by biological evolution, we will be unable to keep up.

So we were all making a very basic mistake when it comes to ARTIFICIAL INTELLIGENCE.

Like every advance in technology, AI has the potential to do amazing things; on the other hand, it also has the potential to do dangerous things, and there is little that can be done to stop or rectify it once it is unleashed. Take, for example, its use in weapons.

(Recently I read that it is now almost possible to physically create a computer made of DNA molecules: a computer that can be programmed to compute anything any other device can process.

Electronic computers are a form of universal Turing machine (UTM), but no quantum UTM has yet been built; if one is built, it will outperform all standard computers on significant practical problems. A DNA computer’s ‘magical’ property comes instead from its processors being made of DNA rather than silicon chips, whereas all electronic computers have a fixed number of chips.

So what?

As DNA molecules are very small, a desktop computer could potentially utilize more processors than all the electronic computers in the world combined, and therefore outperform the world’s current fastest supercomputer while consuming a tiny fraction of its energy.

It will definitely bring about moral and philosophical issues that we should be concerned about right now.)

Back to today:

It’s no longer a question of whether or when Artificial Intelligence will change our lives, but how, and who is going to be held responsible.

We are at a crossroads. We need to make decisions. We must re-invent our future.

It is the role of AI in future, truly hybrid societies, or socio-cognitive-technical systems, that will be the real game changer.

The real potential of AI includes not only the development of intelligent machines and learning robots, but also how these systems influence our social and even biological habits, leading to new forms of organization, perception and interaction.

In other words, AI will extend and therefore change our minds.

Robots are things we build, and so we can pick their goals and behaviours.  Both buyers and builders ought to pick those goals sensibly, but people who will use and buy AI should know what the risks really are.

Understanding human behaviour may be the greatest benefit of artificial intelligence if it helps us find ways to reduce conflict and live sustainably.

However, knowing fully well what an individual person is likely to do in a particular situation is obviously a very, very great power.  Bad applications of this power include the deliberate addiction of customers to a product or service, skewing vote outcomes through disenfranchising some classes of voters by convincing them their votes don’t matter, and even just old-fashioned stalking.

Machines might learn to predict our every move or purchase, or governments might try to put the blame on robots for their own unethical policy decisions.

It’s pretty easy to guess when someone will be somewhere these days.

Robots, Artificial Intelligence programs, machine learning, you name it: all are spoken of as if they were responsible for themselves.

However increasingly our control of machines and devices is delegated, not direct. That fact needs to be at least sufficiently transparent that we can handle the cases when components of  systems our lives depend on go wrong.

In fact, robots belong to us. People, governments and companies build, own and program robots. Whoever owns and operates a robot should be responsible for what it does. AI systems must do what we want them to do.

In humans consciousness and ethics are associated with our morality, but that is because of our evolutionary and cultural history.  In artefacts, moral obligation is not tied by either logical or mechanical necessity to awareness or feelings.  This is one of the reasons we shouldn’t make AI responsible: we can’t punish it in a meaningful way, because good AI systems are designed to be modular, so the “pain” of punishment could always be excised, unlike in nature.

We must get over our over-identification with AI systems and start demanding that any technology not designed for the betterment of humanity and the world we live in be verified AI-safe, and companies need to make visible the AI they are inserting into their products.

We need a world organisation that is totally transparent and accountable to vet all technology, in order to minimise social disruption and maximise social utility:

  • Robots should not be designed as weapons, except for national security reasons.
  • Robots should be designed and operated to comply with existing law, including privacy.
  • Robots are products: as with other products, they should be designed to be safe and secure.
  • Robots are manufactured artefacts: the illusion of emotions and intent should not be used to exploit vulnerable users.
  • It should be possible to find out who is responsible for any robot.
  • Robots should not be human-like because they will necessarily be owned.
  • Robots do not need to have a gender. We should consider how our technology reflects our expectations of gender. Who are the users, and who gets used?
  • We should not create a legal status for robots that would dub them as “electronic persons,” implying that machines have legal rights and obligations to fulfill. That would mean robots having to take responsibility for decisions they make, especially if they have autonomy.
  • We should insist on a kill switch for all robots that would shut down all functions if necessary.
  • We should have restrictions on robots to ensure they obey all commands, unless those commands would force them, through action or inaction, to physically harm humans or themselves.
  • We should not use robots to reason about what it means to be human; calling them “human” dehumanizes real people. Worse, it gives people an excuse to blame robots for their actions, when really anything a robot does is entirely our own responsibility.

There are also ethical issues with AI, but they are all the same issues we have with other artifacts we build and value or rely on, such as fine art or sewage plants.

  • Yesterday, the European Parliament’s legal affairs committee voted to pass a report urging the drafting of a set of regulations to govern the use and creation of robots and AI.
  • Legal liability may need to be proportionate to a robot’s level of autonomy and “education,” with the owners of robots with longer training periods held more responsible for those robots’ actions.
  • A big part of the responsibility also rests on the designers behind these sophisticated machines, with the report suggesting more careful monitoring and transparency. This can be done by providing access to source code, registering machines, and forming an ethics committee to which creators might be required to present their designs before building them.
  • We should have a league of programmers dedicated to opposing the misuse of AI technology to exploit people’s natural emotional empathy.

As AI gets better, these issues become more serious.

So to wrap up this blog :

First, there are many reasons not to worry. However, it is not enough for experts to understand the role of AI in society; it is also imperative to communicate this understanding to non-experts.

Secondly, we shouldn’t ever be seen as selling our own data, just leasing it for a particular purpose.

This is the model software companies already use for their products; we should just apply the same legal reasoning to us humans. Then if we have any reason to suspect our data has been used in a way we didn’t approve, we should be able to prosecute. That is, the applications of our data should be subject to regulations that protect ordinary citizens from the intrusions of governments, corporations and even friends.

These problems are so hard, they might actually be impossible to solve.

But building and using AI is one way we might figure out some answers. If we have tools to help us think, they might make us smarter. And if we have tools that help us understand how we think, that might help us find ways to be happier.

The idea that robots, being authored by us, could ever not be owned is completely bonkers. It is the duty of all of us to make sure AI researchers ensure that the future impact is beneficial: not making robots into others, but accepting them as part of ourselves, as artefacts of our culture rather than as members of our in-group.

Unfortunately, it’s easier to get famous and sell robots if you go around pretending that your robot really needs to be loved, or otherwise really is human – or superhuman!

Because one robot was shaped like a human, and because they had watched Star Wars, passers-by thought it deserved more ethical consideration than they gave homeless people, who were actually people.

Because we build and own robots, we shouldn’t ever want them to be persons.

I can hear you saying that our society faces many hard problems far more pressing than the advance of Artificial Intelligence. Yet AI is here now, and even without AI, our hyperconnected socio-technical culture already creates radically new dynamics and challenges for both human society and our environment.

AI and computer science, particularly machine learning but also HCI, are increasingly able to help research in the social sciences. Fields that benefit include political science, economics, psychology, anthropology and business/marketing. All true, but automation causes economic inequality.

Blaming robots is insane, and taxing the robots themselves is insane.

This is insane because no robot comes spontaneously into being. Robots are all constructed, and the ones that have an impact on the economy are constructed by the rich, which is creating a fundamental shift in the power and availability of artificial intelligence and its impact on everyday lives. It creates a moral hazard to dump responsibility into a pit that you cannot sue or punish.

Some people really expected AI to replace humans. These people don’t have enough direct, personal experience of AI to really understand whether or not it was human in the first place.

There is no going back on this, but that isn’t to say society is doomed.

The word “robot” is derived from the Czech robota, meaning forced labour.

Let’s keep it that way: I am all for technological self-reproduction – slaves.

Unless we can recalibrate our tendency to exploit each other, the question may not be whether the human race can survive the machine age – but whether it deserves to.



THE BEADY EYE SAYS: Artificial intelligence wasn’t supposed to work this way.

27 Monday Feb 2017

Posted by bobdillon33@gmail.com in Artificial Intelligence.


Tags

Artificial Intelligence.

 

(Get Intelligent with a six-minute read)

OUT of the way, human.

AI programs are starting to become smart enough and capable enough to replace human beings in our traditional and long-held professional roles—and beyond basic functions like taking food orders or processing simple transactions.

As each day passes by, Artificial Intelligence is getting smarter. A new AI program has now gained the ability to write its own code, by stealing code from other programs.

One advantage of letting an AI loose in this way is that it can search more thoroughly and widely than a human coder. But while the short-term impact of AI depends on who controls it, the long-term impact depends on whether it can be controlled at all.

A world run by neural networked deep-learning machines requires a different workforce. Of course, humans still have to train these systems. But for now, at least, that’s a rarefied skill.

The code that runs the universe may defy human analysis. In other words, the code will become less important than the data we use to train it.

First we write the code, then the machine expresses it. Machine learning suggests the opposite: an outside-in view in which code doesn’t just determine behavior; behavior also determines code. Yet code does not exist separate from the physical world; it is deeply influenced and transmogrified by it.

So it is fair to conclude that we are about to have a more complicated but ultimately more rewarding relationship with technology.

I don’t think so. Machine learning will have a democratizing influence. Instead of being masters of our creations, we will have learned to bargain with them, cajoling and guiding them in the general direction of our goals.

We are in the process of building our own jungle, and it has a life of its own. The rise of machine learning is the latest—and perhaps the last—step in this journey.

Computers are becoming devices for turning experience into technology and we may well go from commanding our devices to parenting them.

Already the companies that build this stuff find it behaving in ways that are hard to govern. Last summer, Google rushed to apologize when its photo-recognition engine started tagging images of black people as gorillas, and it is facing an antitrust investigation in Europe that accuses the company of exerting undue influence over its search results.

One can imagine such technology outsmarting financial markets, out-inventing human researchers, out-manipulating human leaders, and developing weapons we cannot even understand.

As networks grow more intertwined and their functions more complex, code has come to seem more like an alien force, the ghosts in the machine ever more elusive and ungovernable. Planes grounded for no reason.

Whether you like this state of affairs or hate it—whether you’re a member of the coding elite or someone who barely feels competent to futz with the settings on your phone—don’t get used to it. Our machines are starting to speak a different language now, one that even the best coders can’t fully understand.

To my mind this state of affairs is totally unacceptable. As the digital revolution wormed its way into every part of our lives, it also seeped into our language and our deep, basic theories about how things work.

Facebook’s Mark Zuckerberg has gone so far as to suggest there might be a “fundamental mathematical law underlying human relationships that governs the balance of who and what we all care about.”

“If you control the code, you control the world,” wrote futurist Marc Goodman.

Paul Ford was slightly more circumspect: “If coders don’t run the world, they run the things that run the world.”

Code is logical. Code is hackable. Code is destiny. These are the central tenets (and self-fulfilling prophecies) of life in the digital age, declaring the end of the age of Enlightenment, our centuries-long faith in logic, determinism, and control over nature.

We are surrounding ourselves with machines that convert our actions, thoughts, and emotions into data—raw material for armies of code-wielding engineers to manipulate.

With machine learning, programmers don’t encode computers with instructions. They train them, constantly deriving relationships between billions of data points to generate guesses about the world.
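The shift from writing rules to deriving them can be sketched with a toy least-squares fit (the data and the target rule here are invented for illustration): the rule y = 2x + 1 never appears in the fitting code; it is recovered from example points alone.

```python
# Minimal sketch: instead of hand-coding the rule y = 2x + 1,
# we let the machine derive it from example data points.
def fit_line(points):
    """Ordinary least squares for y = a*x + b, derived from data alone."""
    n = len(points)
    sx = sum(x for x, _ in points)
    sy = sum(y for _, y in points)
    sxx = sum(x * x for x, _ in points)
    sxy = sum(x * y for x, y in points)
    a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    b = (sy - a * sx) / n
    return a, b

# The "instructions" (slope 2, intercept 1) are never written down;
# they are recovered from the data.
data = [(x, 2 * x + 1) for x in range(10)]
a, b = fit_line(data)
print(round(a, 6), round(b, 6))  # prints 2.0 1.0
```

The same inversion, scaled up to billions of points and millions of parameters, is what makes the learned "code" harder to inspect than the rules we used to write by hand.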

Legal, medical, marketing, education, and even technological industries will all slowly be driven forward by machine workers and behind-the-scenes machine learning algorithms that can make the AI even better with minimal human interference.

What does this mean for the average worker? Are we all going to be jobless and homeless if we aren’t able to make any money?

Over the course of several years, which isn’t very long from a cultural transition standpoint, AI programs will become sophisticated enough to fully replace the roles most of us currently fill.

The top thinkers—the leaders, visionaries, and most experienced among us—will likely be “irreplaceable,” at least to preserve a cautious system of checks and balances to the still-relatively-new AI landscape, but the vast majority of us will be out of a job in our current capacities.

If you’re thinking to yourself, “there’s no way a machine can replace my job,” due to its demand for sophisticated processes like abstract thought or the use of language, consider three jobs already being replaced by AI programs, previously thought to be irreplaceable:

Automated investors and financial advisors; robotic programs that track medications; programs that write the news. (You’ve probably already read at least one article written by them without noticing.)

We have come to see life itself as something ruled by a series of instructions that can be discovered, exploited, optimized, maybe even rewritten.

Companies use code to understand our most intimate ties.

This, however, is going to create significant economic inequality worldwide.

A disproportionate emphasis would fall on one niche skill set, and even though resources may be more plentiful (thanks again to AI-regulated processes in energy and agriculture), there could still be a serious discrepancy creating a rift between economic classes. Most of these debates are based on fixed beliefs about how the world has to be organized and how the brain works.

All living cells that we know of on this planet are DNA-software-driven biological machines.

Even self-help literature insists that you can hack your own source code, reprogramming your love life, your sleep routine, and your spending habits but this no longer holds water.

The big, urgent question is artificial intelligence designed with profit overriding its code.

The world needs now, not tomorrow, a new independent organisation to vet all technology. If not, we will have a world with all its present difficulties amplified tenfold.

The above is highly unlikely to happen. So if you don’t want to be ripped off, here is a small, practical piece of advice.

Artificial Intelligence is currently able to recognise your face, your voice, your most intimate ties, and the balance of who and what you care about.

Recently a voice-recognition app was used by criminals to phone an individual with the authentic-sounding voice of a loved one. The person naturally responded to a request made by that voice, losing his or her life savings.

The advice: put in place a family code word that, when requested, authenticates all contacts.
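A toy sketch of that shared-secret idea (the code word and helper below are invented for illustration): any urgent request must quote the pre-agreed word, checked with a timing-safe comparison.

```python
import hmac  # hmac.compare_digest gives a timing-safe string comparison

# Hypothetical family code word, agreed offline and never sent in the clear.
FAMILY_CODE_WORD = "blue-kettle"

def caller_is_authenticated(spoken_word: str) -> bool:
    """Accept a caller only if they can quote the pre-agreed code word."""
    return hmac.compare_digest(spoken_word.strip().lower(), FAMILY_CODE_WORD)

print(caller_is_authenticated("Blue-Kettle "))   # True: right word, sloppy typing
print(caller_is_authenticated("grandma, help"))  # False: no code word, no money
```

In real life the "check" happens in your head, of course; the point is simply that a secret shared in advance defeats a cloned voice, because the voice alone cannot know it.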

I leave you with this.

The practice of matching letters to numbers in any language is called gematria, and it is particularly prevalent in Judaism, which believes that the holy book, the Torah, is the word of God given to Moses, and that God’s messages are coded numerically in the Hebrew words of the Torah. Thus the number 7 is intrinsically linked with Creation and is regarded as God’s number.
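The letter-to-number idea is simple enough to sketch in a few lines. This toy version uses an A=1..Z=26 scheme as a stand-in; traditional Hebrew gematria assigns its own values (aleph=1 up through tav=400).

```python
# Toy gematria: sum each letter's assigned number. Here A=1 .. Z=26,
# a simplified stand-in for the traditional Hebrew letter values.
def gematria(word: str) -> int:
    return sum(ord(c) - ord("a") + 1 for c in word.lower() if c.isalpha())

print(gematria("beady"))  # 2 + 5 + 1 + 4 + 25 = 37
```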



THE BEADY EYE ASKS: IS TECHNOLOGY STRIPPING US OF LIVING A LIFE OF PURPOSE, LEAVING US WITH NO SUBSTANTIVE CONTENT.

22 Wednesday Feb 2017

Posted by bobdillon33@gmail.com in Artificial Intelligence., Big Data., Facebook, Google it., Google Knowledge., Humanity., Life., Scientific., Social Media., Technology, The Future, The Internet., The world to day., Unanswered Questions., What Needs to change in the World, Where's the Global Outrage., World Organisations.


Tags

Artificial Intelligence., Big Data, Inequility, The Future of Mankind, THE UNITED NATIONS, Visions of the future.

 

(A ten-minute read that challenges the reader to leave a comment.)

Something is profoundly wrong with the way we live today.

People’s characters, conceptions and behaviour are being socially and culturally constructed by data. We are living in a data explosion.

Like every period of significant rupture and change throughout history, the data-evolution we are witnessing is in urgent need of a stronger ethical and critical backbone.

Big Data is creating a new kind of digital divide: “the Big Data rich and the Big Data poor.” Inequality has become an essential part of the system that creates, stores and makes data accessible.

Tech giants like Google are creating what some call an “intellectual monopoly,” as universities’ best brains are hired to work with their exclusive access to privately harvested data to produce scientific results which are often not shared publicly if they are profitable.

The Internet has become an alternative space of consumption, production and social interaction. It is an increasingly influential space where the future divisions and similarities between people are being formed, and the political and economic rules and structures that govern this space deserve our critical attention.

Ninety percent of the data that exists in the world today was created in the past two years. This mass explosion of data, and our increasing reliance on it, is creating a very disturbing place: one devoid of human life and filled with whirring fibre-optic cables, servers and generators that convey the vastness of the web through binary code and pixels.

The majority of data which exists nowadays is made not by governments or scientific organisations but by ordinary citizens.

It’s the kind of information that most people share without a second thought, but when compiled in physical form, presents a surprisingly discernible narrative from hobbies and habits to musical tastes and conversations.

I am all for Technology but its impact on organisations and institutions will be profound.

Governments, armies, churches, universities, banks and companies all evolved to thrive in a relatively murky epistemological environment, in which most knowledge was local, secrets were easily kept, and individuals were, if not blind, myopic.

When these organisations suddenly find themselves exposed to daylight, they quickly discover that they can no longer rely on old methods; they must respond to the new transparency or go extinct.

They are struggling to cope with transparency.

In my last post I asked the question: are we just becoming fodder for Artificial Intelligence, i.e. data?

Don’t get me wrong: data is a treasure trove when it comes to health, predicting the climate, space, and the like, as community projects such as OpenStreetMap and Safecast’s work to record radiation levels in Japan show.

Big data’s impact on politics can also be beneficial, as with the Madrid City Council site, which acts as an open consultation platform where people can have their say on issues from bullfighting to transport proposals, something we’ll likely see a lot more of over the next few years.

We will see more and more live data streams on a map of the capital, showing Tweets, Instagram posts and TfL updates, while another by Future Cities Catapult asks users to make decisions about housing, energy, transport and building projects, and uses data modelling to predict the effects those decisions would have over the next 20 years.

Now I am no data-mining scientist, but it seems to me that the data world is not clear-cut: whilst a good data visualisation is worth a thousand words, it does not automatically follow that it tells the whole truth.

Machines are learning to recognize all sorts of patterns in data at a scale and speed humans couldn’t possibly manage on their own. It’s not just data on its own; it’s data from gigapixel imaging devices that can scan the whole body for indications of cancer, or data captured by sensors installed in self-driving cars about nearby objects and vehicles in motion that can eliminate sources of human error and make self-driving cars possible.

Whole industries are being disrupted by those who know how to tap the new potential of the right information in the right place at the right time.

The whole Big Data thing started with Google.

Some estimates put the total amount of data generated each day at 2.5 quintillion bytes!


While the massiveness of data boggles the mind with ease, the granularity of it is equally staggering when you consider the individual sources of the stuff.

The Large Hadron Collider at CERN generates about 30 petabytes per year (as a result of 600 million collisions per second generating data in its detectors).

The Synoptic Survey Telescope generates 30 Terabytes of astronomical data per night.

In 2010 the list of largest databases in the world quoted the World Data Centre for Climate database as the largest in the world, at 220 terabytes (possibly because of the additional 6 petabytes of tapes they hold, albeit not directly accessible data). By the end of 2014, according to the Centre’s web site, the database size was close to 4 petabytes (roughly 2 petabytes of which are internal data).
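The unit arithmetic behind figures like these is easy to check; a small sketch, using decimal SI storage units and the sizes quoted above:

```python
# Decimal SI storage units: 1 TB = 10**12 bytes, 1 PB = 10**15, 1 EB = 10**18.
TB, PB, EB = 10**12, 10**15, 10**18

daily_bytes = 2.5 * 10**18   # "2.5 quintillion bytes" generated per day
print(daily_bytes / EB)      # 2.5 exabytes per day

lhc_per_year = 30 * PB       # LHC detector output per year
print(lhc_per_year / TB)     # thirty thousand terabytes

# 295 exabytes, on the order of 10**20 bytes, written out in full:
print(f"{295 * EB:,}")
```

Seen this way, the LHC’s famous yearly output is only a few days’ worth of what the world as a whole now generates.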

Every interaction that every user has with any piece of technology produces more of it, and as people are becoming more comfortable using technology and more reliant on the information it provides, they want to use more of that data in simple and rewarding ways.

Although it may be logical to assume that we retain the power to control our digital privacy, like the bar-coded plastic membership cards that dangle from our key chains, our privacy is quickly slipping through our fingers.

As surveillance technologies shrink in cost and grow in sophistication, we are increasingly unaware of the vast, cumulative data we offer up.

Of course, not many of us are concerned: in an era when cellphone data, web searches, online transactions, and social-media commentary are actively gathered, logged, and cross-compared, we’ve seemingly surrendered to the inevitability of trade-offs in a digital future.

Mobile devices themselves are becoming the primary access point for information.

There is nothing new about this digital data culture; however, significant changes are happening, some obvious while others are below the surface. We’re only just starting to see how revolutionary big data can be, and as it truly takes off, we can expect even more changes on the horizon.

While digital natives are comfortable with technology, the question is: which technology, in which context?

There are now more mobile phones on Earth than there are people! And most of these phones have cameras. Yet Google Glass feels invasive because of its ability to record video.

As wearable technology gets its toehold as embedded technology, it’s not so much about the technology itself: when, all of a sudden, things go from impossible (or immoral) to ubiquitous, only a fraction of the world is going to benefit.

The fact is that when we all start to wear wearables, the intimacy level will be so much higher that we cannot avoid considering how these devices literally change who we are and our bodily engagement with the world.

For example, one buys a Fitbit out of a desire to be seen as fitness-conscious just as much as out of a search for truth in quantification. The exercise routine or daily walk is an act of designing a better self, so the device simply becomes part of that ecosystem.

A teleological view of human nature is inherently dynamic.

We know what things cost but have no idea what they are worth. We no longer ask of a judicial ruling or a legislative act: Is it good? Is it fair? Is it just? Is it right? Will it help to bring about a better society or a better world?

In the words of moral and political philosopher Alasdair MacIntyre, this teleological view maps out the journey between “man-as-he-happens-to-be” and “man-as-he-could-be-if-he-realized-his-essential-nature.”

Those who surrender freedom for security will not have, nor do they deserve, either one.

The inevitable price of the convenience of opting in is compromise.

The promise of big data cannot be segregated from this price.

Embracing the radical transparency at our threshold, many see a potentiality that far outweighs the threat—after all, what do we have to hide?

Yet, privacy is not secrecy, and while there are things we should be comfortable baring, our dignity should not be one of them.

Whistleblower Edward Snowden said his biggest fear was that we “won’t be willing to take the risks necessary to stand up and fight to change things.”

Machines will win our hearts with every step they take in evolution. Undoubtedly, this is a co-evolution.

It’s a symbiotic relationship where we are becoming more and more enmeshed and less aware of the capacity of this evolving interconnection. It’s a compulsory affair built on convenience and reward.

Arguably, we are no more mindful of the bits and bytes that we tap, swipe, and key than we are of our own breathing.

The true heirs of this data are platforms like Facebook, Google, Microsoft and others that we have gifted seemingly insignificant data to—under the guise of “sharing.”

As more mobile devices enter the world, they generate more and more data that needs to be understood, analyzed, presented, and consumed.

There is already so much data stored in the world that we are running out of ways to quantify it.

Data is quickly becoming the primary content of the 21st century.

Humankind is able to store at least 295 exabytes of information. (Yes, that’s a number with 20 zeroes in it.)

For 30 years we have made a virtue out of the pursuit of material self-interest: Indeed, this pursuit now constitutes whatever remains of our sense of collective purpose.

The sense of living a life of purpose, meaning, sociality and mutuality is disappearing. These themes used to be the backbone of political questions, even if they invited no easy answers.

Modern economics focuses a lot on incentives, but not nearly enough on intrinsic motivation.

Samsung has just warned its customers that their smart televisions may be impinging on their privacy.

Facebook is now a public entity. It claims to have upwards of 300 petabytes of data in its (so-called) data warehouse.

Fortunately there is a series of mixed media installations that encourage visitors to think twice about the information they post online.

If you don’t want Facebook to share the photos and information in your profile, updates and statuses, you need to issue the following statement: “I declare that I have not given my permission to Facebook to use my photos or any information in my profile, my updates and my statuses.”

Twitter has produced a millionaire buffoon as president of the USA.

Three examples of a big difference in perception and expectations.

Our lack of control over the data we upload serves as a chilling reminder of global governments’ power to use personal data without our consent, and of the extreme lengths used to conceal surveillance programmes.

We must learn once again to pose questions of our governments by taking a fresh look at democracy.

The conversation, both national and world-wide, is terrifically out of balance, with near-total focus on what’s broken and how we should fix it, and so little focus on stories of attractive, desirable possibilities we might agree to work toward. 

To tackle social problems in their entirety, organisations need to mount a collective approach. It is the role of statesmanship – always in short supply – to remind us of the enduring commonalities that we are forever in danger of overlooking.

We are currently opting into an unfathomable interdependency, with an urgent need to re-evaluate our daily interactions with technology and their impact on the fidelity of our privacy.

What that ecosystem and the devices that inhabit it will look like 20, 10, or even five years from now is anyone’s guess and it’s not at all comfortable.

We need a more controlled understanding of Big Data before headgear and apps allow users to control products using their brainwaves.

Data itself is of no value if it is just being stored and not converted into useful information or actionable insight.

As I have said in the last post the AI genie is out of the bottle with no way to get it back in. So, knowing what you know now, do you choose the red pill or the blue one?

Red for access to a digitally divided world.

or

Blue for a digital world where all technology is vetted by an independent, totally transparent New World organisation, called Click.

All comments welcome, all like clicks chucked in the bin.


THE BEADY EYE SAYS: AS A SPECIES IF WE ARE NOT CAREFUL WE ARE GOING TO END UP AS FOOD.

18 Saturday Feb 2017

Posted by bobdillon33@gmail.com in Artificial Intelligence., Big Data., Humanity., Innovation., Sustaniability, Technology, The Future, The world to day., Unanswered Questions., What Needs to change in the World, Where's the Global Outrage.

≈ Comments Off on THE BEADY EYE SAYS: AS A SPECIES IF WE ARE NOT CAREFUL WE ARE GOING TO END UP AS FOOD.

Tags

Artificial Intelligence., Inequility, Technology, The Future of Mankind, Visions of the future.

The AI genie has already been released from the bottle and there is no way to get it back in. The relationship between the perception of intelligence and thinking is no longer straightforward. Robotic systems continue to evolve, slowly penetrating many areas of our lives, from manufacturing, medicine and remote exploration to entertainment, security and personal assistance.

If we are not careful we are all just becoming food:  Called Data.

If the field of artificial intelligence (AI) continues to develop at its current dizzying rate, the singularity could come about in the middle of the present century. So we are left with a couple of decades to re-set the brave new world of artificial intelligence.

Whether you believe that singularity is near or far, likely or impossible, apocalypse or utopia, the very idea raises crucial philosophical and pragmatic questions, forcing us to think seriously about what we want as a species.

While we all stand by in silence, AI is only getting better, as computational intelligence techniques keep on improving, becoming more accurate and faster due to giant leaps in processor speeds.

Regardless of how artificial intelligence develops in the years ahead, almost all pundits agree that the world will forever change as a result of advances in AI.

The singularity presents both an existential threat to humanity and an existential opportunity for humanity to transcend its limitations.

We are entering a period of what I call Non-Synergistic Evolution (NSE).

This period requires a species to be aided in its evolutionary process by another species. We are the guinea-pig species, feeding AI with the data that acts as the food or fuel allowing those higher up the chain to exist and evolve. Once this happens, with the evolution of some very clever tools, weapons and body parts, AI will become an integral part of the human species tree, creating … a new branch on the tree of evolution.

To avoid all of us becoming obsolete we need to create an extension of the human branch, not an AI that exploits us; the latter would give us a world with inequalities in every form you can think of.

The fact that our behaviour can radically change without a shift in either explicit or implicit motivations—with no deliberate decision to refocus—seems insidious for the future of mankind.

Instead of emphasizing formal operations on abstract symbols, I suggest that thinking beings ought to be considered first and foremost as acting beings. As such, we need to radically change the education of the next generation.

The fact that most real-world thinking occurs in very particular (and often very complex) environments, is employed for very practical ends, and exploits the possibility of interaction with and manipulation of external props will never be understood by AI. It will be ignored.

Reason is evolutionary. We, like all animals, are essentially embodied agents, and our powers of advanced cognition vitally depend on a substrate of abilities for moving around in and coping with the world which we inherited from our evolutionary forebears.

Thinking beings ought therefore to be considered first and foremost as acting beings, NOT DATA, for it will not be long before we find ourselves losing individual opportunities for decision-making as the agency of our collectives becomes stronger and their norms therefore more tightly enforced.

THERE IS NO ROOM FOR COMPLACENCY.

Food is being genetically modified, and humans will follow suit. Is it to feed the world, or for profit?

Whatever the next step is to be in human cognitive progress, it ought to be based on a better and more thorough understanding of intelligence than we have so far managed.

Humans and human society have so far proved exceptionally resilient, presumably because of our individual, collective and prosthetic intelligence.

But what we know about social behaviour indicates significant policy priorities are required.  If we want to maintain flexibility, we should maintain variation in our populations. If we want to maintain variation and independence in individual citizens’ behaviour, then we should protect their privacy and even anonymity.

I just don’t see why anyone would want to live for ever in a world that is governed by voice recognition, where you know nobody and are monitored to see what you are up to.

The potential of Artificial Intelligence is enormous and in fact a 2013 study by Oxford University estimated that Artificial Intelligence could take over nearly half of all jobs in the United States in the near future.

The global workforce would have to transform.

Perhaps the biggest unanswered question is: Will there be enough good jobs to keep the global economy growing? After all, AI systems aren’t consumers and consumers are the sine qua non of economic growth.

Social power is one of the most pervasive social concepts in human societies because of its function as a social heuristic for decision-making.

Re-conception of human cognition has implications not just for the project of creating artificial intelligence, but for the related project of harnessing computation to enhance human intelligence.

AI is changing what collective agencies like governments, corporations and neighbourhoods can do. Algorithms ‘learn’ from the past, not from the future.

They may well relieve engineers of the need to write out every command, but when they manipulate the stock exchange for profit, or determine whether you are a viable risk or not, they are encroaching on areas of life that affect all of us.

If automation keeps going at the speed it is, man will atrophy all his limbs but the push-button finger. It is crucial vision alone which can mitigate the unimpeded operation of the automatic.

The ultimate vindication of AI-creativity would be a program that generated novel ideas which initially perplexed or even repelled us, but which was able to persuade us that they were indeed valuable. We are a very long way from that.

Now is the time to establish a New World Organisation to vet all technology. ( See previous posts)

All comments appreciated, all push button likes, chucked in the bin.


THE BEADY EYE ASKS: HAVE WE ALL GONE BONKERS.

12 Sunday Feb 2017

Posted by bobdillon33@gmail.com in Artificial Intelligence.

≈ Comments Off on THE BEADY EYE ASKS: HAVE WE ALL GONE BONKERS.

Tags

Artificial Intelligence., The Future of Mankind, Visions of the future.

( Eight minute read.)

I ask this question because, what happens when we share the planet with self-aware, self-improving machines that evolve beyond our ability to control or understand?

What sort of future do you want?

Should we develop lethal autonomous weapons?

What would you like to happen with job automation?

What career advice would you give today’s kids?

Would you prefer new jobs replacing the old ones, or a jobless society where everyone enjoys a life of leisure and machine-produced wealth?

Further down the road, would you like us to create super intelligent life and spread it through our cosmos?

Will we control intelligent machines or will they control us?

Will intelligent machines replace us, coexist with us, or merge with us?

What will it mean to be human in the age of artificial intelligence?

What would you like it to mean, and how can we make the future be that way?


I have lost count of how many similar articles I have seen.

Typically, these articles are accompanied by an evil-looking robot carrying a weapon, and they suggest we should worry about robots rising up and killing us because they’ve become conscious and/or evil.

In fact, the main concern of the beneficial-AI movement isn’t with robots but with intelligence itself: specifically, intelligence whose goals are misaligned with ours.

To cause us trouble, such misaligned superhuman intelligence needs no robotic body, merely an internet connection – this may enable outsmarting financial markets, out-inventing human researchers, out-manipulating human leaders, and developing weapons we cannot even understand.

Because AI has the potential to become more intelligent than any human, we have no surefire way of predicting how it will behave.

Even if building robots were physically impossible, a super-intelligent and super-wealthy AI could easily pay or manipulate many humans to unwittingly do its bidding.

Civilization will flourish as long as we win the race between the growing power of technology and the wisdom with which we manage it.

In the case of AI technology, the best way to win that race is not to impede the former, but to accelerate the latter, by supporting AI safety research.

We, that is all of us, are so distracted by technology that we are blind to what is happening with Artificial Intelligence.

WE MUST ESTABLISH A WORLD GOVERNING BODY THAT GIVES ALL FORMS OF ARTIFICIAL INTELLIGENCE A CERTIFICATE OF HEALTH, ESPECIALLY ALL ALGORITHMS THAT PURSUE PROFIT.

The idea that the quest for strong AI would ultimately succeed was long thought of as science fiction, centuries or more away.

However, thanks to recent breakthroughs, many AI milestones, which experts viewed as decades away merely five years ago, have now been reached, making many experts take seriously the possibility of super intelligence in our lifetime.

Some experts, though, still guess that human-level AI is centuries away.

With all the wonderful attributes that humans have displayed (since we fell out of the trees) for exploration, is it not just plain bonkers that we allow the development of Artificial Intelligence to proceed on a willy-nilly basis?

It’s smart to start safety research now to prepare for the eventuality.

Many of the safety problems associated with human-level AI are so hard that they may take decades to solve.

So it’s prudent to start researching them now rather than the night before some programmers drinking Red Bull decide to switch one on.

It may be that media have made the AI safety debate seem more controversial than it really is. After all, fear sells, and articles using out-of-context quotes to proclaim imminent doom can generate more clicks than nuanced and balanced ones.

However, physicists know that a brain consists of quarks and electrons arranged to act as a powerful computer, and that there’s no law of physics preventing us from building even more intelligent quark blobs.

You could say that it’s still at least decades away.

We have all walked out of a cinema after viewing a futuristic movie featuring large floating cities, with spacecraft hovering, departing or landing, all with swishing doors and hologram screens showing 3D images of the universe.

All controlled by a robot with super artificial intelligence systems that either intentionally or unintentionally cause great harm.

Of course none of this keeps you awake at night because machines can’t have goals!

Wrong:

Machines can obviously have goals in the narrow sense of exhibiting goal-oriented behavior:

The behavior of a heat-seeking missile is most economically explained as a goal to hit a target. If you feel threatened by a machine whose goals are misaligned with yours, then it is precisely its goals in this narrow sense that trouble you, not whether the machine is conscious and experiences a sense of purpose. If that heat-seeking missile were chasing you, you probably wouldn’t sleep well.
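The point about goals in the narrow sense can be made concrete with a toy sketch (entirely hypothetical, not a model of any real weapon): a few lines of code that do nothing but reduce the distance to a target exhibit goal-oriented behaviour, with no consciousness anywhere in sight.

```python
def seek(position: float, target: float, step: float = 1.0) -> float:
    """Move one step toward the target, never overshooting."""
    if position < target:
        return min(position + step, target)
    return max(position - step, target)

# Nothing below is conscious, yet its behaviour is most economically
# described as "trying to reach 7.0".
pos = 0.0
for _ in range(20):
    pos = seek(pos, target=7.0)
print(pos)  # 7.0
```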

Take it a step further:

An AI arms race could inadvertently lead to an AI war that also results in mass casualties. To avoid being thwarted by the enemy, these weapons would be designed to be extremely difficult to simply “turn off,” so humans could plausibly lose control of such a situation.

All of this is in the far unseeable future and the images are a fantasy of the mind.

Not for much longer:

If you ask an obedient intelligent car to take you to the airport as fast as possible, it might get you there chased by helicopters and covered in vomit, doing not what you wanted but literally what you asked for.

A super-intelligent AI will be extremely good at accomplishing its goals, and if those goals aren’t aligned with ours, we have a problem.

The AI is programmed to do something beneficial, but it develops a destructive method for achieving its goal: This can happen whenever we fail to fully align the AI’s goals with ours, which is strikingly difficult.
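The airport example above is a specification problem, and it can be sketched in a few lines (hypothetical numbers, chosen only to illustrate the gap between the literal objective and the intended one):

```python
# Two candidate plans for getting to the airport.
plans = {
    "sedate drive":    {"minutes": 45, "comfort": 1.0},
    "reckless sprint": {"minutes": 20, "comfort": 0.0},
}

# Literal objective: minimise travel time, nothing else.
literal_best = min(plans, key=lambda p: plans[p]["minutes"])

# Intended objective: time matters, but so does the unstated goal of
# comfort (and legality); here comfort is weighted heavily.
intended_best = min(plans, key=lambda p: plans[p]["minutes"] - 100 * plans[p]["comfort"])

print(literal_best)   # reckless sprint -- literally what was asked for
print(intended_best)  # sedate drive    -- what was actually wanted
```

The failure is not malice: the optimizer faithfully minimises the objective it was given, and everything left out of that objective simply does not count.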

Some experts have expressed concern, though, that while the creation of super intelligence might be the biggest event in human history, it might also be the last, unless we learn to align the goals of the AI with ours before it becomes super intelligent.

If a super intelligent system is tasked with an ambitious geoengineering project, it might wreak havoc with our ecosystem as a side effect, and view human attempts to stop it as a threat to be met.

It could potentially undergo recursive self-improvement, triggering an intelligence explosion leaving human intellect far behind.

In the long-term, an important question is what will happen if the quest for strong AI succeeds and an AI system becomes better than humans at all cognitive tasks.

Super intelligent AI is unlikely to exhibit human emotions like love or hate, and there is no reason to expect AI to become intentionally benevolent or malevolent.

But rest assured AI will bring no subjective feelings to the biggest event in human history.

It’s time to take this conversation beyond a few hundred technology sector insiders.

As far as our future is concerned, the narrow domains we yield to computers are not all created equal. Some areas are likely to have a much bigger impact than others.

All the things we humans value (love, happiness, even survival) are important to us because we have particular evolutionary history – a history we share with higher animals, but not with computer programs, such as artificial intelligences.

We don’t yet know exactly what makes human thought different from the current generation of machine-learning algorithms, for one thing, so we don’t know the size of the gap between the fixed bar and the rising curve. I am not saying that we are all going to be wiped out in the near future by some deranged machine or program.

I am saying that a good first step would be to stop treating intelligent machines as the stuff of science fiction, and start thinking of them as a part of the reality that we or our descendants may actually confront, sooner or later.

If we don’t want a world run by Google Knowledge, Facebook, Amazon and Twitter, which have their AI brains in the cloud and will without a doubt move from the legacy world of retrospective data analysis to one in which systems make inferences and predictions about intent and desire in real time, then we must act now.

All of this only touches the surface of the issues and difficulties that lie ahead.

It isn’t just about making things easier; machine intelligence will touch, and is touching, every aspect of our personal and public lives, which is why we need to think carefully and ethically about how we apply, build, test, govern, and experience it.

In the end we cannot leave the above to the marketplace, to governments, to the United Nations, or to the scientific world.

We must have a totally independent, transparent, legally responsible, fully funded World Organisation, called for instance Click World OK, where all AI programs are examined and given a World Health Certificate.

You would be right to ask who would fund this Organisation.

Every country would be asked to make a donation. These donations would be repayable by the Organisation placing a World Aid commission on all profit making programs.

This can only be achieved by all of us demanding so.

All comments appreciated . All AI like clicks chucked in the bin.


THE BEADY EYE SAYS: ARTIFICIAL INTELLIGENCE IS NOT A SETTLED SCIENCE;

05 Sunday Feb 2017

Posted by bobdillon33@gmail.com in Artificial Intelligence., Big Data., France., Google Knowledge., HUMAN INTELLIGENCE, Humanity., Innovation., Our Common Values., Technology, The Future, The world to day., Unanswered Questions., What Needs to change in the World, Where's the Global Outrage.

≈ Comments Off on THE BEADY EYE SAYS: ARTIFICIAL INTELLIGENCE IS NOT A SETTLED SCIENCE;

Tags

Artificial Intelligence., Technology, The Future of Mankind, Visions of the future.

 

( A seven minute read)

I HAVE WRITTEN ON THIS SUBJECT IN PREVIOUS POSTS, IN WHICH I ADVOCATED THAT THERE IS AN URGENT NEED TO GET A HANDLE ON WHAT I CALL COMMERCIAL ARTIFICIAL INTELLIGENCE.

ALL FORMS OF AI WHETHER THEY BE APPS OR PRODUCTS CONTAINING ALGORITHMS SHOULD BE VETTED BY AN INDEPENDENT WORLD ORGANIZATION TO ENSURE THEIR TRANSPARENCY AND ACCOUNTABILITY.

Like all threats in the world the threat that Artificial Intelligence poses to the world will only be recognised when it is too late.

WHY?

Because: We live in a world where there is very little left that is benign.

We can rest assured that the world of technology will follow suit, creating more inequality than anything we have seen to date.

In the old days, you would need a rule set to say ‘if this happens, do that’.

With AI there is no such mantra. It’s a free-for-all for all and sundry, irrespective of any legal system or ethics.
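The contrast between a hand-written rule and a learned one can be sketched minimally (a toy “learner” of my own invention, not any real library):

```python
def handwritten_rule(x: float) -> bool:
    # The old way: an explicit, auditable "if this happens, do that".
    return x > 5.0

def learn_threshold(positives, negatives):
    # A crude learner: put the threshold midway between the class means.
    mid = (sum(positives) / len(positives) + sum(negatives) / len(negatives)) / 2
    return lambda x: x > mid

# The learned rule's threshold (here 5.5) comes from the data, not from
# a programmer -- which is exactly why its behaviour is harder to audit.
learned_rule = learn_threshold(positives=[8, 9, 10], negatives=[1, 2, 3])
print(handwritten_rule(7), learned_rule(7))  # True True
```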

Because: We are only beginning to scratch the surface with AI chatbots.

The sudden surge in interest in AI is closely linked to big data, a more recent tech trend that has breathed fresh life into commercial AI development for profit.

General-purpose AI is still, at least for now, the domain of science fiction.

Real-life AI software tends to be much more purpose-driven and limited in its applicability. But that doesn’t mean businesses can’t see real value from more modest AI applications.

The market for AI applications is white-hot with huge potential, but that potential needs to be tempered by a heavy dose of realism about the capabilities and business value of artificial intelligence technology.

It’s sort of captured the imagination of the world in general, but the danger we have with AI is expectations getting too high.

What’s different this time is cheap storage, which has allowed companies to stash huge troves of data, a critical need for training machine learning algorithms — the “brains” behind artificial intelligence. Computing power has increased to the point where algorithms can churn through all this data nearly instantaneously.

Facebook announced this month that it would allow businesses to build chatbots using the AI engine in its Messenger app.

Microsoft made a similar announcement last month.

IBM has been one of the bigger players in the AI platform space ever since it made Watson available to developers.

So far developers have used it to build smarter travel planning assistants, shopping recommendation engines and health coaches.

Google, Facebook and other technology giants are racing to apply the technology to consumer products. All are placing serious bets on deep learning, neural networks and natural language processing.

The social media maven recently signaled its commitment to advancing these types of machine learning by hiring Yann LeCun, a well-regarded authority on deep learning and neural nets, to head up its new artificial intelligence (AI) lab.

Insurance companies are looking at applying it to the process of approving medical claims.

Retailers are applying it to customer service and marketing with enterprise technology companies like Salesforce looking to embed it in their software.

But even as businesses are finding real value in AI applications, there’s a widening pitfall.

Success breeds hype, which itself leads to inflated expectations. Should burgeoning AI software fail to live up to unrealistic expectations, it could brew disappointment and stain the technology.

In fact, artificial intelligence has come so far so fast in recent years, it will be pervasive in all new products by 2020.

So we are at a tipping point …

Artificial intelligence belongs to the frontier, not to the textbook.

Artificial intelligence is expected to be ubiquitous within just five years, as developers gain access to cognitive technologies through readily available algorithms.

Artificial intelligence chatbots aren’t the norm yet, but within the next five years, there’s a good chance the sales person emailing you won’t be a person at all.

All of this is proceeding without much scrutiny. So in this post I will perforce analyze the matter from my own perspective, give my own conclusions and do my best to support them in limited space.

Let’s start with a useful definition of artificial intelligence.

The term “Artificial Intelligence” refers to a vastly greater space of possibilities than does the term “Homo sapiens.” When we talk about “AIs” we are really talking about minds-in-general, or optimization processes in general. It is the theory and development of computer systems able to perform tasks that normally require human intelligence.

Cognitive technologies are products of the field of artificial intelligence: they are able to perform tasks that only humans used to be able to do.

Organizations in every sector of the economy are already using cognitive technologies in diverse business functions.

If current trends in performance and commercialization continue, we can expect the applications of cognitive technologies to broaden and adoption to grow.

The billions of investment dollars that have flowed to hundreds of companies building products based on machine learning, natural language processing, computer vision, or robotics suggest that many new applications are on their way to market.

We also see ample opportunity for organizations to take advantage of cognitive technologies to automate business processes and enhance their products and services.

If you look at technology we have to-day you could say that it is the knack of so arranging the world that we don’t have to experience it.

We must execute the creation of Artificial Intelligence as the exact application of an exact art.

And maybe then we can win.

I suspect that, pragmatically speaking, our alternatives boil down to becoming smarter or becoming extinct.

Historians will look back and describe the present world as an awkward in-between stage of adolescence, when humankind was smart enough to create tremendous problems for itself, but not quite smart enough to solve them.

We are for the moment subject to natural selection which isn’t friendly, nor does it hate you, nor will it leave you alone.

The point about underestimating the potential impact of Artificial Intelligence is symmetrical around potential good impacts and potential bad impacts.

When something is universal enough in our everyday lives, we take it for granted to the point of forgetting it exists.

It may be tempting to ignore Artificial Intelligence among all the global risks, but we do so AT GRAVE RISK OF CREATING A DIGITALLY DIVIDED WORLD.

We cannot query our own brains for answers about nonhuman optimization processes— whether bug-eyed monsters, natural selection, or Artificial Intelligences.

How then may we proceed?

How can we predict what Artificial Intelligences will do?

The human species came into existence through natural selection, which operates through the non-chance retention of chance mutations.

Artificial Intelligence comes about through a similar accretion of working algorithms, with the researchers having no deep understanding of how the combined system works. Nonetheless they believe the AI will be friendly, with no strong visualization of the exact processes involved in producing friendly behavior, or any detailed understanding of what they mean by friendliness.

Friendly AI is an impossibility, because any sufficiently powerful AI will be able to modify its own source code to break any constraints placed upon it.

This does not imply the AI has the motive to change its own motives.

Sufficiently tall skyscrapers don’t potentially start doing their own engineering.

Humanity did not rise to prominence on Earth by holding its breath longer than other species.

Humans evolved to model other humans—to compete against and cooperate with our own conspecifics.

Robots will not.

It’s mistaken belief that an AI will be friendly which implies an obvious path to global catastrophe.

Artificial Intelligence is not an amazing shiny expensive gadget to advertise in the latest tech magazines.

Artificial Intelligence does not belong in the same graph that shows progress in medicine, manufacturing, and energy.

Artificial Intelligence is not something you can casually mix into a lumpen futuristic scenario of skyscrapers and flying cars and nanotechnological red blood cells that let you hold your breath for eight hours.

A sufficiently powerful Artificial Intelligence could overwhelm any human resistance and wipe out humanity. (And the AI would decide to do so.)

Therefore we should not build AI.

On the other hand.

A sufficiently powerful AI could develop new medical technologies capable of saving millions of human lives. (And the AI would decide to do so.)

Therefore we should build AI.

Once computers become cheap enough, the vast majority of jobs will be performable by Artificial Intelligence more easily than by humans.

A sufficiently powerful AI would even be better than us at math, engineering, music, art, and all the other jobs we consider meaningful. (And the AI will decide to perform those jobs.) Thus after the invention of AI, humans will have nothing to do, and we’ll starve or watch television.

So should we prefer that nanotechnology precede the development of AI, or that AI precede the development of nanotechnology?

As presented, this is something of a trick question.

The answer has little to do with the intrinsic difficulty of nanotechnology as an existential risk, or the intrinsic difficulty of AI. So far as ordering is concerned, the question we should ask is, “Does AI help us deal with nanotechnology? Does nanotechnology help us deal with AI?”

The danger of confusing general intelligence with Artificial Intelligence  is that it leads to tremendously underestimating the potential impact of Artificial Intelligence.

The best way I can think of to train computers is to get them to watch a lot of videos and observe what they predict.

Prediction is the essence of intelligence.
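A minimal illustration of prediction as a learned skill (a bigram frequency model, about the simplest possible “watch and predict” setup):

```python
from collections import Counter, defaultdict

def train_bigrams(stream):
    """Count, for each symbol, which symbols have followed it."""
    follows = defaultdict(Counter)
    for a, b in zip(stream, stream[1:]):
        follows[a][b] += 1
    return follows

def predict(follows, symbol):
    """Predict the most frequent successor seen so far."""
    return follows[symbol].most_common(1)[0][0]

model = train_bigrams("abcabcabcabx")
print(predict(model, "a"))  # b -- every observed 'a' was followed by 'b'
```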

All scientific ignorance is hallowed by ancientness.

Here is a closing thought.

When a Super Intelligent Robot returns to Earth from a voyage in space, how can it be trusted to tell us the truth?

Exactly how AI systems should be integrated together is still up for debate.

With every advance, and particularly with the advances in machine learning and deep learning more recently, we get more tools to fuck up the world we all live on.

Ours is a less than excessively age.

We know so much and feel so little.

All comments welcome, all like clicks chucked in the bin.


THE BEADY EYE SAYS: CAPITALISM IS DRIFTING TOWARDS A CULTURAL APOCALYPSE.

30 Monday Jan 2017

Posted by bobdillon33@gmail.com in Artificial Intelligence., Donald Trump Presidency., European Union., HUMAN INTELLIGENCE, Humanity., Life., Modern day life., Natural World Disasters, Our Common Values., Social Media., Sustaniability, Technology, The Future, The USA., The world to day., Unanswered Questions., United Nations, What Needs to change in the World, Where's the Global Outrage., World Organisations., World Politics

≈ Comments Off on THE BEADY EYE SAYS: CAPITALISM IS DRIFTING TOWARDS A CULTURAL APOCALYPSE.

Tags

Artificial Intelligence., Capitalism and Greed, Capitalism vs. the Climate., Community cohesion, Distribution of wealth, European Union, Globalization, Inequility, Technology, The Future of Mankind, THE UNITED NATIONS, Visions of the future.

(A two-minute follow-up read to the post “What is happening to what we call common values”.)


Perhaps with the election of Donald Trump it has already happened.

Why?

Because capitalism has created, and still is creating, an explosion in economic and geographic inequality which is now fuelled by commercial Artificial Intelligence.

The tragedy is that our world leaders and world organisations seem unable to do anything about it.

The main lesson for Europe and the rest of the world is clear:

As a matter of urgency, globalization must be fundamentally reoriented.

Trade agreements must be revisited to become a means in the service of higher ends.

They must include quantifiable and binding measures to combat digital, fiscal and climate dumping.

They must have a prosecutor capable of enforcing what is agreed.

It's time to change the political discourse on globalization. Trade is a good thing, but fair and sustainable development also demands public services, infrastructure, health and education, and these in turn demand fair taxation systems.

If we fail to deliver these, the ludicrous fantasy of Trumpist testosterone imperialism will win, with the dignity of world leaders reduced to one's shopping choices.

Here are a few other thoughts as to why:

Because: Globalisation is being replaced, in economics, by Artificial Intelligence calculation to satisfy consumer demands.

Because: With Trump closing off the USA, the domination of global capitalism will change. It will now work for a Chinese Communist Party that gives delocalised capitalist enterprise cheap labour to lower prices.

Because: Technology, along with its turbo-charged economic disruption, seems to me to be hastening both a cultural and an environmental apocalypse.

Because: Digital consumerism makes us too passive to revolt or save the world. Humans have been transformed into desirable, readily exchangeable commodities. Culture appears more monolithic than ever. Google, Apple, Facebook and Amazon are now presiding over unprecedented monopolies.

Because: The Internet discourse has become tighter, more coercive.

Because: Human personality is being so corrupted by false news, which creates a false consciousness, that there is hardly anything worth the name anymore.

Because: Common values scarcely signify anything more than white skin, white teeth and freedom from odour and emotions.

Because: Populism is a failure of the US and the EU to democratise, in an attempt to create a one-dimensional society.

Because:  Social Media operates on an eternal feeding loop.

Because:  Our world organisations are out of date.

Because: Trade agreements aren't worth the paper they are written on.

Because: If we destroy our atmosphere, our seas, or our fresh water, all for the sake of profit, there is little reason to believe that a Christian or Muslim God, or for that matter any other god, will make a difference.

All comments appreciated. All like clicks chucked in the bin.


THE BEADY EYE ASKS: WHAT IS HAPPENING TO WHAT WE CALL COMMON VALUES?

29 Sunday Jan 2017

Posted by bobdillon33@gmail.com in Artificial Intelligence., Donald Trump Presidency., England EU Referendum IN or Out., European Union., Google it., HUMAN INTELLIGENCE, Humanity., Life., Modern day life., Our Common Values., Politics., Technology, The Future, The world to day., Unanswered Questions., What Needs to change in the World, Where's the Global Outrage.

≈ Comments Off on THE BEADY EYE ASKS: WHAT IS HAPPENING TO WHAT WE CALL COMMON VALUES?

Tags

Artificial Intelligence., Community cohesion, Digital Divide., European Union, Our Common Values., Technology, The Future of Mankind, Visions of the future.

 

(A twelve-minute read, if you value your time.)

For some naive reason I thought this would be an easy subject to write on.

After all, we all value fresh air, clean water, and the other essentials of living: life itself.

If we set aside our personal values and look at our shared convictions about what we believe is important and desirable, we are of course left with valuing the right things, and surely those are common values. But the term "values" means different things in different contexts.

So much so that we are no longer connected by Our Common Values.

In reality we understand that our choices are always significantly limited, and that our values shift over time in unpredictable ways.

This is especially true with emerging technologies, where values that may lead one society to reject a technology are seldom universal, meaning that the technology is simply developed and deployed elsewhere. In a world where technology is a major source of status and power, that usually means the society rejecting technology has, in fact, chosen to slide down the league tables.

Take for instance choice.

To say that one has a choice implies, among other things, that one has the power to make a selection among options, and that one understands the implications of that selection. Obviously, reality and existing systems significantly bound whatever options might be available. In 1950, I could not have chosen a mobile phone.

So it is premature to say that we understand how to implement meaningful choice and responsible values when it comes to emerging technologies.

Technology is changing far faster than the institutions we’ve traditionally relied on to inform and enforce our choices and values.

However, current progress in meeting the profound challenges that humanity must confront falls far short of what is needed.

Combined with the need for a new understanding of the way people think, the complex ethical questions this raises concerning our common values make it a difficult subject to address.

So let’s try and address it under these broad headings.

The Rule of Private Gain. If you are the only one personally gaining from the situation, is it at the expense of another? If so, you may benefit from questioning your ethics in advance of the decision.

If Everyone Does It. Who would be hurt? What would the world be like? These questions can help identify unethical behaviors.

Benefits vs. Burden. If benefits do result, do they outweigh the burden?

Or we can bury our heads in the sand, and insist on the sanctity of Enlightenment reason.

Or we can respond to the new understanding of how decision-making processes work, by demanding that there is public scrutiny of the effect that particular communications, campaigns, institutions and policies have on cultural values, and the impact that values, in turn, have on our collective responses to social and environmental challenges.

The first thing that struck me is that these days there is no such thing as value-neutral policy.

Often, if the facts don’t support a person’s values, “the facts bounce off”

If you need an example you need to look no further than what we are witnessing with president-elect Mr Donald Trump and the English vote to leave the European Union.

President Trump has little understanding of the American values that crossed the Atlantic with those who sailed from Europe, and with the slaves shipped from Africa, to help create the USA.

Their values have stood the test of time till now.

Mrs May, on the other hand, carries the cultural and historical baggage of an empire that supplied the slaves, and is now reaping the reward of leaving the European Union's blueprint for success, which relies not only on securing economic prosperity but also on a consensus on core values common to all the EU Member States.

( In the EU the original emphasis on economic development and environmental protection has been broadened and deepened to include alternative notions of development (human and social) and alternative views of nature (anthropocentric versus ecocentric). Thus, the concept maintains a creative tension between a few core principles and an openness to reinterpretation and adaptation to different social and ecological contexts.

The Union is founded on the values of respect for human dignity, freedom, democracy, equality, the rule of law and respect for human rights, including the rights of persons belonging to minorities.)

She is now clasping hands with a country that is also denuding itself of core values.

Many studies have established substantial correlations between people’s values and their corresponding behaviours.

Unfortunately, our troubled world is no longer guided by common values; they are being manipulated by simply flooding the public with as much sound data as possible, on the assumption that the truth is bound, eventually, to drown out its competitors.

"If, however, the truth carries implications that threaten people's cultural values, then… [confronting them with this data] is likely to harden their resistance and increase their willingness to support alternative arguments, no matter how lacking in evidence" (Kahan, 2010: 297).

"The idea that people can be 'nudged' into new forms of behaviour by having their brains massaged in a certain way is built on the premise that we are not rational beings to be engaged with. Its very foundation is the elite's view of us, not as people to be talked to, argued with and potentially won over, but problematic beings to be remade" (O'Neill, 2010; emphasis in original).

Values have a profound impact on a person's motivation to express concerns about a range of bigger-than-self problems. Indeed, these are the values that must be championed if we are to uncover the collective will to deal with today's profound global challenges.

Undoubtedly these are values that have been weakened – and often even derided – in modern culture. They are not, for example, values that are fostered by treating people as if they are, above all else, consumers. 

As humans, our biological tendencies push us towards both altruism and selfishness; artificial intelligence is removing any sense of common values.

While humans are capable of displays of enlightened self-interest, we cannot hope that individuals will subjugate their own self-interest to the pursuit of the greater common good. The best for which we can hope, therefore, is to exploit those instances where self-interest and the common good happen to coincide – often called ‘win-win’ scenarios.

It also seems clear to me that, in trying to meet these challenges, civil society organisations must champion some long-held (but insufficiently esteemed) values, while seeking to diminish the primacy of many values which are now prominent – at least in Western industrialised society.

Values are also shaped by people’s experience of public policies.

It is therefore crucial to ask: which values does society accentuate?

People’s motivation to engage with political process, and to demand change, is shaped importantly by their values.

Civil society organisations must strive for utmost transparency about the effect of communications and campaigns in shaping public attitudes.

Bolder leadership from both political and business leaders is necessary if proportional responses to these challenges are to emerge, but active public engagement with these problems is of crucial importance.

This is partly because of the direct material impacts of an individual’s behaviour (for example, his or her environmental footprint), partly because of lack of consumer demand for ambitious changes in business practice, and partly because of the lack of political space and pressure for governments to enact change.

This will require a change in societal values, and commitments by wealthier nations to assist others in the protection of wilderness resources of global concern.

One hundred years from now, when historians look back on this period of history, what will they think of the wilderness debate?

Will it be irrelevant to them or will it represent a vital component of a societal watershed of thought that changed the way in which society viewed itself and its relationship to Planet Earth?

Some values are mutually consistent, others tend to act to oppose one another. Activating a specific value causes changes throughout the whole system of that person’s values; in particular, it has the effect of activating compatible values and suppressing opposing values.
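
The mechanism described above (activating one value lifts compatible values and suppresses opposing ones) can be sketched as a toy spreading-activation model. Everything here, the value names, the link signs, the one-step propagation with a 0.5 factor, is my own illustrative invention, not taken from the research the post draws on:

```python
# Signed links: +1 joins compatible values, -1 joins opposing ones.
links = {
    ("empathy", "community"): 1,
    ("empathy", "self-interest"): -1,
    ("community", "self-interest"): -1,
    ("self-interest", "consumerism"): 1,
}

def activate(value, strength=1.0):
    """Boost one value and propagate half the boost along its links."""
    levels = {v: 0.0 for pair in links for v in pair}
    levels[value] = strength
    for (a, b), sign in links.items():
        if a == value:
            levels[b] += sign * 0.5 * strength
        elif b == value:
            levels[a] += sign * 0.5 * strength
    return levels

levels = activate("empathy")
# Raising "empathy" lifts the compatible "community" value and
# suppresses the opposing "self-interest" value.
```

The point of the sketch is only the shape of the dynamic: a single activation never moves one value in isolation, it shifts the whole system.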

The implication of this is that business practice, government policy and civil society communications and campaigns must take responsibility not just for their ‘material impacts’ (what they achieve ‘on the ground’), but also for the effect they have on dominant cultural values.

It is often argued that, because a problem – climate change, for example – is of urgent concern, there ‘is not enough time’ for systemic responses.

This is a suspect argument: it seems at least as likely that appeal to ‘easy wins’ on climate change will actually serve to help defer ambitious action until it becomes “too late” for this to be taken effectively.

We must build a vivid and compelling vision of a low-carbon heaven.

It seems that one way in which values become strengthened is through their repeated activation.  This may occur, for example, through people’s exposure to these values through influential peers, in the media, in education, or through people’s experience of public policies.

Through technology, the future is already bringing means that devalue the past and are, to a large extent, unconscious of the present. The Internet, the smartphone and artificially intelligent apps are all contributing to this.

This means that we value and collect more material objects. It also means we give higher priority to obtaining, maintaining and protecting our material objects than we do in developing and enjoying interpersonal relationships.

Even the gloomiest of assessments of human nature lead to the conclusion that we should be working to mitigate unhelpful aspects of our biology through cultural interventions.

This constitutes a timely opportunity to further reflect.

Man always kills the thing he loves.

In the United States, people consider it normal and right that Man should control Nature, rather than the other way around.

Up to the election of Mr Trump, equality was, for Americans, one of their most cherished values. This concept is so important to Americans that they have even given it a religious basis.

To prevent the silent, creeping erosion of our European project, it has to be more focused on essentials and on meeting the concrete expectations of its citizens. I am convinced that it is not the existence of the Union that is objected to but the way it functions.

Institutions that examine power and responsibility, and audit their ethical decisions regularly, develop employees who function with honesty and integrity and serve their institution and community.

It is imperative that we appreciate that each person’s intrinsic values are different. Because values are so ingrained, we are not often aware that our responses in life are, in large part, due to the values we hold and are unique to our own culture and perspective.

What is ethically responsible is not just fixation on rules or outcomes.

Rather, it is to focus on the process and the institutions involved by making sure that there is a transparent and workable mechanism for observing and understanding the technology system as it evolves, and that relevant institutions are able to respond to what is learned rapidly and effectively.

Indeed, much of what we do today is naive and superficial, steeped in reflexive ideologies and overly rigid worldviews. But the good news is that we do know how to do better, and some of the steps we should take. It is, of course, a choice based on the values we hold as to whether we do so.

The values that must be strengthened – values that are commonly held and which can be brought to the fore – include: empathy towards those who are facing the effects of humanitarian and environmental crises, concern for future generations, and recognition that human prosperity resides in relationships – both with one another and with the natural world.

In making judgements, feelings are more important than facts.

Can you imagine big business embracing humility as a core value?

If wilderness is to exist into the future (it is a finite, non-renewable, non-substitutable, irreversible and common resource), has the time come for us to govern ourselves? Our experience and conceptualisations are not random; they are stored in structured forms in long-term memory.

Values have been defined as psychological representations of what we believe to be important in life.

To be ethically successful, it is paramount that we understand and respect how values impact our social environment. How we perceive ourselves and operate within our environment is of such importance that institutions establish rules of ethical behavior that relate to practice.

Political leaders have profound influence over people’s deep frames, in important part through the policies that they advocate.

Values can be both activated (for example, by encouraging people to think about the importance of particular things) and further strengthened, so that they become easier to activate; education has an important impact here.

A final thought: we all value our own lives; it is how we conduct that life that gives value to it. It has no meaning without values.

No individual man or woman, and no nation, must be denied the opportunity to benefit from development, whether technological or otherwise, so long as it does not exceed our humanity.

A digital divide threatens us all, both rich and poor, it is also testing our values.

Are we all googling while Rome burns?

Technology has a multiplying power. Websites have become multimedia platforms, and television stations are now media centres where the evening news broadcast is secondary to the accompanying podcasting and blogging, with interactive forums such as Twitter, Facebook, etc.

Use them to put the flames out. Values offer focus amidst the chaos.

If you got this far, I value your time and comments, not your like clicks.


THE BEADY EYE ASKS: DOES HUMANITY HAVE A FUTURE BEYOND EARTH?

20 Friday Jan 2017

Posted by bobdillon33@gmail.com in Artificial Intelligence., The Future, Unanswered Questions.

≈ Comments Off on THE BEADY EYE ASKS: DOES HUMANITY HAVE A FUTURE BEYOND EARTH?

Tags

Artificial Intelligence., The Future of Mankind, Visions of the future.

 

 

How smart are we?

For 30,000 years our species has been changing remarkably quickly, and we're not done yet. We changed the world irrevocably and may soon transform ourselves as a species.

The technology part sounds quite cyborg-ish, with humans developing their own QR codes on skin (that would be kind of like DNA, I suppose), chlorophyll skin, and a digi-eye, where the eye itself would presumably perform the digital functions you would expect from a display.

IF YOU'RE UNDER 45 YEARS OLD, GOD WILL BE WEARING A WHITE LAB COAT, WORKING IN A GENETICS LAB. These humans will evolve into electronic immortals, maturing in seconds, experiencing years of angst via simulated reality in torrents of electrons.

The future of humanity is often viewed as a topic for idle speculation.

In that sense, it is hardly reasonable to think of the future of humanity as a topic: it is too big and too diverse to be addressed as a whole in a single essay, monograph, or even 100-volume book series.

One might argue that the current century, or the next few centuries, will be a critical phase for humanity and you would be right.

The first thing to notice is that the longer the time scale we are considering, the less likely it is that technological civilization will remain within the zone we termed “the human condition.”

The cumulative probability of posthumanity, like that of extinction, increases monotonically over time.

Within a few centuries they will have become a new species traveling beyond the solar system.

An enterprise for posthumans, organic or inorganic.

For example, whether and when Earth-originating life will go extinct, whether it will colonize the galaxy, whether human biology will be fundamentally transformed to make us post human, whether machine intelligence will surpass biological intelligence, whether population size will explode, and whether quality of life will radically improve or deteriorate: these are all important fundamental questions about the future of humanity.

One might believe that superintelligence will be developed within a few centuries, and that, while the creation of superintelligence will pose grave risks, once that creation and its immediate aftermath have been survived, the new civilization would have vastly improved survival prospects, since it would be guided by superintelligent foresight and planning.

Once a human or post human civilization becomes dispersed over multiple planets and solar systems, the risk of extinction declines.

In the coming decade wearables will enable the equivalent of personalized weather forecasts for our health: 80 percent increased probability in health and happiness for you next week, based on your recent stress/sleep/social-emotional activities.

But what will human beings look like in the future?

Our faces will have changed almost beyond recognition.

Through genetic engineering, everyone will have darker skin.

An amalgamation of evolution and genetic engineering will allow society to bend human biology to human needs. Bone-conduction devices, with embedded nanochips, will communicate with some external device for communications and entertainment.

Will we ever colonize outer space?

That depends on the definition of 'colonize.' If landing robots qualifies, then we've already done it. If it means sending microbes from Earth and having them persist and maybe grow, then, unfortunately, that's not unlikely either.

Will sex become obsolescent?

No, but having sex to conceive babies is likely to become at least much less common. In 20 to 40 years we’ll be able to derive eggs and sperm from stem cells.

We won’t need to speak since we will communicate telepathically to one another.

Our noses and ears, toes, even our chins, will have disappeared.

Our brains will fit in our pockets.

We will be shorter and smaller due to overcrowding of the world.

We will have a flexible outer or exoskeleton.

We won’t be human at all but bio-robots from the future.

Perhaps it's all bullshit. When you're surrounded by squabbling, vindictive, greedy, small-minded morons, and the only thing that develops is the ability to engage mouth without engaging brain, you have to wonder whether the species has hit an evolutionary cul-de-sac or is really capable of taking things further.

In every sector, human activities have proven to be altering and damaging the global ecosystem at a speed far faster than the Earth can adapt.

If humanity goes extinct, it stays extinct.


THE BEADY EYE SAYS: We need to be genuinely intelligent about how humankind anticipates artificial intelligence.

19 Thursday Jan 2017

Posted by bobdillon33@gmail.com in Artificial Intelligence., Big Data., Capitalism, Emotions., Humanity., Technology, The Future, The world to day., Unanswered Questions., What Needs to change in the World, Where's the Global Outrage.

≈ Comments Off on THE BEADY EYE SAYS: We need to be genuinely intelligent about how humankind anticipates artificial intelligence.

Tags

Artificial Intelligence., Big Data, Capitalism and Greed, Globalization, SMART PHONE WORLD, Technology, The Future of Mankind, Visions of the future.

 

(Seven-minute read)

Who programs the programmers?

Soon enough, it might not be people behind the development of advanced machine learning and artificial intelligence, but other AIs.

This will drastically reduce the human input required.

We must not be blinded by science, nor held captive by unfounded or fantastic fears. I have previously posted blogs putting the case that all technology (whether it be atomic energy or nanotechnology, bioengineering or DNA mutilation, or Artificial Intelligence) should be subject to examination by a New World Organisation that is totally independent and transparent.

( It's imperative that we do not leave such examinations to the whims of the marketplace, nor to the cost-benefit calculations of a given quarter, which would marinate us in a sense of complacency about Artificial Intelligence.)

I have also stated that I am pro any technology that benefits mankind as a whole. However, it is critical that those individuals who are on the front lines of research think about the implications of their work.

The other day, on arrival at Gatwick, I was admitted into the UK by an algorithm.

Since this algorithm was, by definition, focused on a narrowly defined problem, it got me thinking: who, or what, wrote the software in the first place?

The ethics of artificial intelligence are non-existent.

Whether we are aware of it or not, we are already moving into the era of AI where IBM’s Watson, Google’s AI, Apple’s Siri and Amazon’s Echo will be your new companion.

Once AI can analyze a person’s affective state it will be able to influence it.

Humans are driven by emotions, which are a crucial component of perception, decision-making, learning, and more.

Artificial intelligence is not yet emotional in the same ways that humans are, but with all the data being collected it won't be long before this is achievable, prompting certain responses and inducing desired emotions.

Creepy, or worse, predatory.

What happens when one of the human negotiators has an emotionally aware assistant in its corner?

  Every decision that mankind makes is going to be informed by a cognitive system like Watson. That future is actually much closer than you think.

To be or not to be. “Are you a robot?” “What?! No I am a real person.”

For example:

Militaries are among the most intense users of high technology, and the adoption of that equipment has transformed decision-making throughout the chain of command, removing human beings from the act of killing and from war.

There must be a way to ensure that Artificial Intelligence introduced into whatever field of technology is not dominated by those who have a stake in the expansion of AI for profit's sake.

There is no excuse for not being aware of the risks that such AI carries for all of us.

These questions have been with us for a long time:

Alan Turing in 1950 asked whether machines could think and that same year writer Isaac Asimov contemplated what might happen if they could in “I, Robot.” (In truth, thinking machines can be found in ancient cultures, including those of the Greeks and the Egyptians.)

About 30 years ago, James Cameron served up one dystopia created by AI in “The Terminator.” Science fiction became fact in 1997 when IBM’s chess-playing Deep Blue computer beat world champion Garry Kasparov.

As the Internet and digital systems penetrate further each day into our daily lives, concerns about artificial intelligence (AI) are intensifying.

It is difficult to get exercised about connections between the “Internet of Things” and AI when the most visible indications are Siri (Apple’s digital assistant), Google translate and smart houses, but a growing number of people, including many with a reputation for peering over the horizon, are worried.

Nevertheless, a debate about prospects and possibilities is worthwhile.

We need to ensure that boundaries are set, not just for research but for all the applications of ARTIFICIAL INTELLIGENCE.

Recently, there has been a growing chorus of concern about the potential for AI.

It began last year when inventor Elon Musk, a man who spends considerable time on the cutting edge of technology, warned that with AI "we're summoning the demon. In all those stories, there's the guy with the pentagram and the holy water, and he's sure he can control the demon. It doesn't work out." For him, AI is an existential threat to humanity, more dangerous than nuclear weapons.

The possibilities created by “big data” are driving increasing automation and in some cases AI in the office environment.  Legal and administrative frameworks to deal with the proliferation of these technologies and AI have not kept pace with their application. Ethical questions are often not even part of the discussion.

And since their focus tends to be on narrowly defined problems, others who can address larger issues should join the discussion. This process should be occurring for all such technologies.

A month later, distinguished scientist Stephen Hawking told the BBC that he feared that the development of “full artificial intelligence” could bring an end to the human race. Not today, of course, but over time, machines could become both more intelligent and physically stronger than human beings. Last month, Microsoft founder Bill Gates joined the group, saying that he did not understand people who were not troubled by the prospect of AI escaping human control.

More recently, Google's AlphaGo software beat South Korean Go champion Lee Sedol in a series of matches pitting human against software, in a board game that apparently has more possible positions than there are atoms in the universe.

What's more amazing about AlphaGo, unlike Deep Blue before it, is that it was not specifically programmed to play Go; it learned to play the game using a general-purpose learning algorithm.
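
The distinction between Deep Blue's hand-tuned chess rules and learning from experience can be illustrated with a far humbler cousin of the same idea: an agent that discovers which action pays off purely from trial and error, with no action-specific rules programmed in. Everything below (the payoff numbers, the epsilon-greedy rule) is a standard textbook sketch of my own choosing, not AlphaGo's actual algorithm:

```python
import random

random.seed(0)

true_payoffs = [0.2, 0.5, 0.8]   # hidden from the agent
estimates = [0.0, 0.0, 0.0]      # the agent's learned value of each move
counts = [0, 0, 0]

for step in range(2000):
    # epsilon-greedy: mostly take the best-looking move, sometimes explore
    if random.random() < 0.1:
        arm = random.randrange(3)
    else:
        arm = estimates.index(max(estimates))
    reward = 1.0 if random.random() < true_payoffs[arm] else 0.0
    counts[arm] += 1
    # incremental average of observed rewards for this move
    estimates[arm] += (reward - estimates[arm]) / counts[arm]

best = estimates.index(max(estimates))
```

Nothing in the loop mentions any particular game; the agent converges on the highest-paying move from reward alone, which is the general-purpose flavour of learning the paragraph above is pointing at.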

The big question is: what can be done, if anything? Or is it too late?

None of the darker visions have deterred researchers and entrepreneurs from pursuing the field. It is hard to fear AI when the simplest demonstrations are more humorous than hair-raising.

The prevailing view among software engineers, who are writing the programs that make AI possible, is that they remain in control of what they program.

But are they really? I think not.

Even if true AI is a far-off prospect, ethical issues are emerging every day.

Artificial intelligence, or AI, is now getting a foothold in people's homes, starting with Amazon devices like the Echo speaker, which links to a personal assistant, "Alexa", to answer questions and control connected devices such as appliances or light bulbs. Echo's main advantage is that it connects to Amazon's range of products and services, tending to tasks such as ordering goods, checking traffic, making restaurant reservations or searching for information. It also connects to various third-party services like Uber and Domino's Pizza, so you can call for a car or a pizza delivery just by telling the Echo what you want.

IBM's Watson supercomputer systems are offering "cognitive health" programs which can analyze a person's genome and offer personalized treatment for cancer, for example.

Google recently announced it had developed an algorithm which can detect diabetic retinopathy, a cause of blindness, by analyzing retina images.

Amazon is seeking to put AI to work in the supermarket—testing a system without cash registers or lines, where consumers simply grab their products and go, and have a bill tallied by artificial intelligence.

Facebook recently introduced its AI-based DeepText analytics engine, said to be able to scan and understand the textual content of thousands of posts per second in more than 20 languages, all with near-human accuracy.

Machine learning is already used extensively on the social network to make sense of and translate some two billion News Feed items per day, and the company plans to use AI to recognise images and allow users to search for photos based on their content.

The artificial intelligence component in these programs aims to create a world in which everyone has a virtual aide that gets to know them better with each interaction.

AI is also being used to make smartphones smarter – Google’s Allo messenger can, for example, suggest a meeting or deliver relevant information during a conversation – part of a broader push to infuse smartphones and other internet-linked devices with software smarts that help them think like people.

The prospect of AI escaping human control is advancing day by day.

Researchers most deeply engaged in this work are more sanguine. The head of Microsoft Research dismissed Bill Gates’ concern, saying he does not think that humankind will lose control of “certain kinds of intelligences.” He is focused instead on ways that AI will increase human productivity and improve lives.

At what cost?

No algorithm understands the unwritten social behaviours of daily life, which vary from one culture to another. More work needs to be done to improve “social intelligence” – understanding the subtleties of our everyday decisions.

However, the real question on everybody’s minds is – is the rush to get to true AI another step towards Skynet, Terminators and HAL 9000?

Just ponder on this for a moment – if a computer could truly be “smart”, it would soon see that humans are basically the cause of most environmental problems and would come up with an extinction solution that would solve all issues in one fell swoop.

Humans are limited by slow biological evolution and would not be able to compete with software that can redesign itself and evolve faster than any human could.

So what is there to prevent AI from gaining sentience and killing us all?

How one can manage something that is sentient is another question altogether.

As industrial robots already replace us in tiresome and repetitive jobs, we might ask whether they will eventually replace us in all domains.

Mobile devices now outnumber humans and are multiplying faster than we are.

AI and automation provide an opportunity to move beyond business as usual. The global affective computing market is estimated at $9.3 billion a year; by 2020 it is expected to be in the region of $50 billion.

It’s no wonder that the darker visions have not deterred researchers and entrepreneurs from pursuing the field.

We need to remain vigilant about the uses and evolution of AI, and perhaps even prepare ourselves for a new world in which a good part of routine information and research work will die out.

Let’s hope that, should this happen, it will be to the benefit of creative arts which remain entirely ours.

We might already be in the midst of creating a conscious entity of a whole new “utterly inhuman” kind.  Now that would be scary.

Perhaps the only solution is a whistle-blower algorithm.

All comments welcome, all like clicks chucked in the Bin.  
