(Six-minute read)

How many times have you heard someone say, “There’s an app for that”?

Every time you pick up your smartphone, you’re summoning algorithms.

They have become a core part of modern society.

They’re used in all kinds of processes, on and offline, from helping value your home to teaching your robot vacuum to steer clear of your dog’s poop. They’ve increasingly been entrusted with life-altering decisions, such as helping decide whom to arrest, whom to elect, who should be released from jail, and who is approved for a home loan.

Recent years have seen a spate of innovations in algorithmic processing, from the arrival of powerful language models like GPT-3 to the proliferation of facial recognition technology in commercial and consumer apps. At their heart, these systems work out what you’re interested in and then give you more of it, using as many data points as they can get their hands on. And they aren’t just on our phones.
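The “give you more of what you engaged with” loop can be sketched in a few lines. This toy Python recommender (the item names, tags and scoring rule are invented purely for illustration, not any real platform’s method) scores unseen items by how much their tags overlap with a user’s history:

```python
from collections import Counter

def recommend(history, catalog, k=2):
    """Toy content-based recommender: score each unseen item by how many
    of its tags overlap with tags from items the user already engaged with."""
    # Count how often each tag appears in the user's viewing history.
    interest = Counter(tag for item in history for tag in catalog[item])
    unseen = [item for item in catalog if item not in history]
    # Rank unseen items by the total interest weight of their tags.
    return sorted(unseen,
                  key=lambda item: sum(interest[t] for t in catalog[item]),
                  reverse=True)[:k]

# Hypothetical catalog: items mapped to descriptive tags.
catalog = {
    "cat video":     {"pets", "funny"},
    "dog video":     {"pets", "funny"},
    "news clip":     {"politics"},
    "puppy gallery": {"pets"},
}

# A user who watched one pet video gets pushed more pet content.
print(recommend(["cat video"], catalog))  # → ['dog video', 'puppy gallery']
```

Even this crude version shows the self-reinforcing dynamic: every click sharpens the interest profile, which narrows what gets shown next.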

At this point, they are responsible for making decisions about pretty much every aspect of our lives.

The consequences can be disastrous, and with AI the stakes keep rising, because these systems increasingly shape themselves. The worker isn’t replaced by a robot or a machine so much as by somebody else who knows how to use AI.

While we can interrogate our own decisions, those made by machines have become increasingly enigmatic.

They can amplify harmful biases that lead to discriminatory decisions or unfair outcomes that reinforce inequalities. They can be used to mislead consumers and distort competition. Further, the opaque and complex nature by which they collect and process large volumes of personal data can put people’s privacy rights in jeopardy.

Currently there are few, if any, rules or laws governing how companies can or can’t use algorithms in general, or those that harness AI in particular.

Adaptive algorithms have been linked to both terrorist attacks and beneficial social movements.

So it’s not too far-fetched to say that personalised AI is driving people toward self-reinforcing echo chambers of extremism, or that someone could ask an AI to create a virus, or an alternative to money.

Where is this all going to end up?

A conscious robot that fakes emotions (sorrow, joy, pain and the rest) and wants to bond with you?

———————————

It all depends on what you think consciousness is.

Yes, a robot could be a thousand times more intelligent than a human. The question, in essence, becomes: does any kind of subjective experience amount to conscious experience? If the subjective feeling of consciousness is an illusion created by brain processes, then a machine that replicates those processes would be conscious in the way that we are.

At the moment, machines with minds are mainstays of science fiction.

Indeed, the concept of a machine with a subjective experience of the world and a first-person view of itself goes against the grain of mainstream AI research. It collides with questions about the nature of consciousness and self—things we still don’t entirely understand.

Even imagining such a machine’s existence raises serious ethical questions that we may never be able to answer. What rights would such a being have, and how might we safeguard them?

If a machine thinks and believes it has consciousness, how would we know whether it is actually conscious?

Perhaps you can understand, in principle, how the machine is processing information, and there are some who are comfortable with that. However, an important feature of a learning machine is that its teacher will often be largely ignorant of quite what is going on inside it, with no way of knowing whether consciousness exists there.

And yet, while conscious machines may still be mythical, their very possibility shapes how we think about the machines we are building today.

Can machines think?

——————-

They’re used for everything from recognizing your voice and face to listening to your heart and arranging your life.

All kinds of things can be algorithms, and they’re not confined to computers. The impact of potential new legislation to limit the influence of algorithms on our lives remains unclear.

There’s often little more than a basic explanation from tech companies on how their algorithmic systems work and what they’re used for. Take Meta: the company formerly known as Facebook has come under scrutiny for tweaking its algorithms in a way that helped incentivize more negative content on the world’s largest social network.

Laws for algorithmic transparency are necessary before specific usages and applications of AI can be regulated. When it comes to addressing these risks, regulators have a variety of options available, such as producing instructive guidance, undertaking enforcement activity and, where necessary, issuing financial penalties for unlawful conduct and mandating new practices.

We need to force large Internet companies such as Google, Meta, TikTok and others to “give users the option to engage with a platform without being manipulated by algorithms driven by user-specific data in order to shape and manipulate users’ experiences”, and give consumers the choice to flip it on or off.

It will inevitably affect others, such as Spotify and Netflix, that depend deeply on algorithmically driven curation.

We live in an unfair world, so any model you make is going to be unfair in one way or another.

For example, there have been concerns about whether the data going into facial-recognition technology can make the algorithm racist, not to mention what drives military drones to kill.

—————

Going forward there are a number of potential areas we could focus on, and, of these, transparency and fairness have been shown to be particularly significant.

Artificial Intelligence as a Driving Force for the Economy and Society and Wars.

In some cases this lack of transparency may make it more difficult for people to exercise their rights, including those under the GDPR. It may also mean algorithmic systems face insufficient scrutiny in some areas (for example from the public, the media and researchers).

Consider the most important AI trends everyone must be ready for now.

While legislators scratch their heads over regulating it, the speed at which artificial intelligence (AI) evolves and integrates into our lives is only going to increase in 2024. Legislators have never been great at keeping pace with technology, but the obviously game-changing nature of AI is starting to make them sit up and take note.

The next generation of generative AI tools will go far beyond chatbots and image generators, becoming more powerful. We will start to see them embedded in creative platforms and productivity tools, such as generative design tools and voice synthesizers.

Being able to tell the difference between the real and the computer-generated will become an increasingly valuable part of the critical-skills toolbox!

AI ethicists will be increasingly in demand as businesses seek to demonstrate that they are adhering to ethical standards and deploying appropriate safeguards.

95 percent of customer service leaders expect their customers will be served by AI bots at some point in the next three years. Doctors will use AI to assist them in writing up patient notes or analyzing medical images. Coders will use it to speed up writing software and to test and debug their output.

40% of employment globally is exposed to AI, which rises to 60% in advanced economies.

An example is Adobe’s integration of generative AI into its Firefly design tools, trained entirely on proprietary data to alleviate fears that copyright and ownership could become a problem in the future.

Quantum computing, capable of massively speeding up certain calculation-heavy compute workloads, is increasingly being found to have applications in AI.

AI can solve really hard, aspirational problems that people maybe are not capable of solving, in areas such as health, agriculture and climate change.

We need to bridge the gap between AI’s potential and its practical application, and to ask whether the technology will affect what it means to be human.

They are already creating a two-tier world of haves and have-nots, driving inequality and eroding the deep human values of authenticity and presence.

Will new technologies lead us, or are they already leading us and our children to mistake virtual communities for real human connection?

Generative AI presents a future where creativity and technology are more closely linked than ever before. If machines come to dominate creativity, we may lose something precious about what it means to be human.

How can we ensure equal access to the technology?

If we look to A.I. as a happiness provider, we will only create a greater imbalance than we already have.

If AI Algorithms run the world there will be no time off.

Humans are now hackable animals, so AI might save us from ourselves.

That AI will become the only thing that understands these embedded systems is scary.

General AI may completely up-end even the contemplation of reason. Not only will “resistance be futile”, it could become inconceivable for a dumbfounded majority.

One thing is certain: in about a hundred years we will have an idea of what makes us different from and more intelligent than computers. But don’t worry — AI has the potential to change education and the way we learn.

All human comments appreciated. All like clicks and abuse chucked in the bin.

Contact; bobdillon33@gmail.com