
(Seven-minute read)

Yes, you have a smartphone, and yes, there are massive amounts of money being invested in AI, but few of us understand what AI is doing, or is going to do, to society.

To think about morality, humans draw on norms, social expectations, emotional responses and culturally shaped intuitions about harm and fairness.

Artificial intelligence has none of these.

Large language models can often match human responses but for reasons that bear no resemblance to human reasoning.

Unfortunately, there is no way a human mind can keep up with any artificial intelligence model, and no reason to expect that to change.

When a generative AI system does something, we have no idea, at any specific or precise level, why it makes the choices it does.

This is not the AI we want.

It can and will destabilise trust not just in the capabilities of the model but in humans as a whole.

Its possible forthcoming progeny, artificial general intelligence, is promised to reshape society in ways that would benefit humanity, but also comes with the “tantalizing possibility” that researchers can finally figure out AI interpretability.

The moment you learn what their knowledge is actually made of (patterns in text rather than contact with the world), something essential dissolves: truth and trust cannot survive without total transparency and accountability.

We know that these large language models (LLMs) are imitating an understanding of the world that they don’t actually have—even if their fluency can make that easy to forget.

When asked a what-if question, the model is not actually imagining anything or engaging in any deliberation; it is reproducing patterns in how people talk or write about such counterfactuals, not drawing on an understanding of how events actually produce outcomes in the world.

Where a human evaluates, a model predicts. 

When a human engages with the world, a model engages with a distribution of words.

Engaging with a distribution of words does not give the model access to the world those words refer to.

And yet, because human judgments are also expressed through language, the model’s answers often end up resembling human answers on the surface.
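
To make that concrete, here is a minimal sketch in Python of what “engaging with a distribution of words” means. The vocabulary and probabilities below are made up purely for illustration, not taken from any real model; the point is only that each word is chosen by sampling from a table of likelihoods learned from text, with no reference to the world being described.

```python
import random

# Toy illustration only: a hand-made table of next-word probabilities.
# Real models learn such numbers from vast amounts of text; none of the
# figures below come from an actual system.
next_word_probs = {
    ("the", "patient"): {"has": 0.5, "is": 0.3, "reports": 0.2},
    ("patient", "has"): {"a": 0.6, "no": 0.4},
    ("has", "a"): {"fever": 0.55, "rash": 0.45},
}

def sample_next(context):
    """Pick the next word by sampling from the stored distribution.

    The choice depends only on which word sequences were common in the
    training text, not on whether the resulting claim is true.
    """
    dist = next_word_probs.get(context)
    if dist is None:
        return None
    words = list(dist.keys())
    weights = list(dist.values())
    return random.choices(words, weights=weights, k=1)[0]

text = ["the", "patient"]
while True:
    nxt = sample_next((text[-2], text[-1]))
    if nxt is None:
        break
    text.append(nxt)

print(" ".join(text))  # e.g. "the patient has a fever"
```

The output reads like a clinical observation, yet nothing in the process ever consulted a patient; it only consulted a table of word frequencies.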

This gap between what models seem to be doing and what they are actually doing matters, because fluent prediction is becoming indistinguishable, to the observer, from knowledge itself.

The model cannot know when it is hallucinating, because it cannot represent truth in the first place. It cannot form beliefs, revise them or check its output against the world.

It cannot distinguish a reliable claim from an unreliable one except by analogy to prior linguistic patterns.

In short, it cannot do what judgment is fundamentally for.
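
As a rough way to see “analogy to prior linguistic patterns” in action, the sketch below scores sentences by the average log-probability a small causal language model assigns to them. The choice of the Hugging Face transformers library and the public gpt2 checkpoint is mine, not the author’s; the only point is that the score measures how familiar the wording is, not whether the claim is reliable.

```python
# A minimal sketch, assuming the `transformers` and `torch` packages and
# the small public "gpt2" checkpoint (my choice of example model).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

def plausibility_score(sentence: str) -> float:
    """Average log-probability the model assigns to the sentence.

    A higher (less negative) score means the wording resembles patterns
    seen in training text; it says nothing about whether the claim is true.
    """
    inputs = tokenizer(sentence, return_tensors="pt")
    with torch.no_grad():
        outputs = model(**inputs, labels=inputs["input_ids"])
    return -outputs.loss.item()  # negative mean cross-entropy per token

for claim in [
    "Aspirin is commonly used to relieve pain.",         # true and familiar
    "Aspirin is commonly used to repair broken bones.",  # false but fluent
]:
    print(f"{plausibility_score(claim):7.3f}  {claim}")
```

Whatever numbers come out, they are measures of linguistic familiarity; checking which of the two claims is actually true still requires a human looking at the world.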

People are already using these systems in contexts in which it is necessary to distinguish between plausibility and truth, such as law, medicine and psychology.

A model can generate a paragraph that sounds like a diagnosis, a legal analysis or a moral argument.

When we ask them to judge, we quietly redefine what judgment becomes—shifting it from a relationship between a mind and the world to a relationship between a prompt and a probability distribution.

So where are we in this new evolution?

We must treat large language models as sophisticated linguistic instruments that require human oversight precisely because they lack access to the domain that judgment ultimately depends on: the world itself.
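
In practice, that oversight can be built in as an explicit human gate between the model’s draft and any decision. The sketch below is one hypothetical shape for such a gate; none of these names come from the post, and the human reviewer, not the model, remains the judge of truth.

```python
from dataclasses import dataclass
from typing import Callable

# A minimal human-in-the-loop sketch. All names here are hypothetical;
# the post argues for oversight but does not prescribe an implementation.

@dataclass
class Review:
    approved: bool
    text: str  # the draft as approved, or as edited, by the human

def draft_with_oversight(prompt: str,
                         generate: Callable[[str], str],
                         review: Callable[[str], Review]) -> str:
    """Treat the model as a drafting instrument, never as the final judge."""
    draft = generate(prompt)   # fluent text, produced with no contact with the world
    verdict = review(draft)    # a human checks the claims against reality
    if not verdict.approved:
        raise ValueError("Rejected by human reviewer; do not act on this draft.")
    return verdict.text

# Example wiring (both callables are placeholders for illustration):
# text = draft_with_oversight("Summarise this contract", my_model, my_lawyer_review)
```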

Once artificial intelligence, in whatever form it takes, starts taking decisions without human intervention, supervision or the ability to be shut down, we humans will become collateral damage.

All human comments appreciated. All like clicks and abuse chucked in the bin.

Contact: bobdillon33@gmail.com

https://youtu.be/1g7YfskW_Fo?si=xotnucAbLRmimXBu

https://youtu.be/sNLbwWOo6UY?si=8bWL9P-PNUah3DvZ