Human brains are the product of blind and unguided evolution, and will therefore one day hit a hard limit – indeed, they may already have done so.

Yet a population of human brains is much smarter than any individual brain in isolation.

But does this argument really hold up?

Can our puny brains really answer all conceivable questions and understand all problems?

What made our species unique is that we were capable of culture, in particular of cumulative cultural knowledge. With the arrival of Artificial Intelligence this takes on new significance, as we now have apps that select what we hear, see and believe to be true.

Human brains did not evolve to discover their own origins either, and yet somehow we managed to do just that. Perhaps the pessimists are missing something.

It may be true that our brains are simply not equipped to solve certain problems, and that there is no point in even trying, as they will continue to baffle and bewilder us. Assuming we could even agree on a definition of “truth,” the list of reasons we can’t or don’t wish to know the truth would be quite long and well beyond the scope of this blog post.

We all know that we are destroying the planet we live on. One of the reasons we have difficulty perceiving this truth, and seeing reality, has to do with the purpose of truth.

The purpose of truth is rooted in the purpose of life itself. Truth isn’t desired for its own sake; it serves a higher master than AI.

Our minds evolved by natural selection to solve problems that were life and death matters to our ancestors, not to commune with correctness.

Our ancestors needed to be able to discriminate friend from foe, healthy from unhealthy, and safe from dangerous (e.g., “It is good to eat this and bad to eat that.”).

Within an evolutionary framework, ignorance of what is true or real could be dangerous or deadly.

In order to survive, it was critical for our ancestors to learn to make predictions based on available information. This motivated them to move from a state of not knowing to knowing.

Thus, our ancestors didn’t need to see the world for what it really was; they just needed to know enough to survive. For example, the world looks flat. The sun appears to rise in the sky and seems a relatively small object. Our eyes (or rather our brains) deceive us, though. The Earth, like the other planets, is roughly spherical. A million Earths could fit inside the Sun, which is 93,000,000 miles away from us.

If our ancestors had no need to understand the wider cosmos in order to spread their genes, why would natural selection have given us the brainpower to do so?

The pessimists argue that, at some point, human inquiry will suddenly slam into a metaphorical brick wall, after which we will be forever condemned to stare in blank incomprehension.

Will we never find the true scientific theory of some aspect of reality – or, alternatively, might we well find this theory but never truly comprehend it?

No one has a clue what this means.

Today, why is it that some cannot accept the truth?

Truth is something we have to face, now or later.

I think it’s mostly because of the fear of having to accept it, face it and deal with it, even though it may contradict what one already believes.

A person’s belief system is built on a foundation. If the facts fall outside that foundation and cannot be supported by it, the person may not believe them, or may remain very sceptical about them.

Let’s take a few examples.

The past:

The Holocaust:

No master list of those who perished in the Holocaust exists anywhere in the world. The shelves of the Hall of Names at Yad Vashem contain four million pages of testimony contributed by survivors and families, but for those who were never known, there can be no record.

Towards the end of the war, thousands of Hungarian Jews could have been saved if the railways had been bombed.

They were not, because the reports of what was happening were not believed.

The future:

An asteroid or meteorite heading towards Earth: most of us would have no comprehension of such an event and would probably not believe it to be true.

The present:

Take the talk about man-made climate change.

People have been predicting catastrophic events for the last hundred years or so. Many of them never happened, so people have a hard time believing new predictions.


Today, fewer and fewer people understand what is going on at the cutting edge of theoretical physics – even physicists.

The unification of quantum mechanics and relativity theory will undoubtedly be exceptionally daunting, or else scientists would have nailed it long ago.

The same is true for our understanding of how the human brain gives rise to consciousness, meaning and intentionality.

But is there any good reason to suppose that these problems will forever remain out of reach? Or that our sense of bafflement when thinking of them will never diminish?

Who knows what other mind-extending devices we will hit upon to overcome our biological limitations?  Biology is not destiny.

As soon as you frame a question that you claim we will never be able to answer, you set in motion the very process that might well prove you wrong: you raise a topic of investigation.

With all the data at our disposal these days, truth is analysed by algorithms and self-learning software programs.

The data-driven revolution is premised upon the idea that data and algorithms can lead us away from biased human judgement towards a pristine mathematical perfection that captures the world as it is, rather than the world biased humans would like it to be.

Yet those truths do not always align with our values, and “truth” told by data can be shaped to whatever preordained outcome its tellers desire.


Algorithms And Data Construct ‘Truth,’ Not Discover It.

There is no such thing as perfect data or perfect algorithms.

All datasets and the tools used to examine them represent trade-offs. Each dataset represents a constructed reality of the phenomena it is intended to measure. In turn, the algorithms used to analyse it construct yet more realities.

In short, a data scientist can arrive at any desired conclusion simply by selecting the dataset, algorithm, filters and settings to match.
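As a toy sketch of this point (the numbers below are invented for illustration, not real measurements), the same dataset run through the same averaging algorithm can support opposite headlines depending only on which filter the analyst chooses:

```python
# One dataset, two filter choices, opposite conclusions.
data = [
    {"year": 2015, "change": -0.4},
    {"year": 2016, "change": 0.9},
    {"year": 2017, "change": -0.6},
    {"year": 2018, "change": 1.1},
    {"year": 2019, "change": -0.2},
    {"year": 2020, "change": 0.8},
]

def mean_change(rows):
    """The 'algorithm': a plain average of the selected rows."""
    return sum(r["change"] for r in rows) / len(rows)

# Analyst A filters to even years, Analyst B to odd years.
evens = [r for r in data if r["year"] % 2 == 0]
odds = [r for r in data if r["year"] % 2 == 1]

print(mean_change(evens))  # positive: "the metric is rising"
print(mean_change(odds))   # negative: "the metric is falling"
```

Both analysts ran the same code on the same data; only the filter differed, yet one reports growth and the other decline.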

It is more imperative than ever that society recognizes that data does not equate to truth.

The same dataset fed into the same algorithm can yield polar opposite results depending on the data filters and algorithmic settings chosen.

But the important thing to note about these unknown unknowns is that nothing can be said about them.

The basic premise of the data-driven revolution – that it brings quantitative certainty to decision-making – is a false narrative.

To presume from the outset that some unknown unknowns will always remain unknown, is not modesty – it’s arrogance.

There’s always a human strategy behind using algorithms.

The exact details of how they work are often incomprehensible. Is this what we really want?

I think we need more transparency about how algorithms work, and about who owns and operates them.

The problem with this is that demanding full transparency will have an adverse effect on the self-learning capacity of the algorithm. This is something that needs to be weighed up very carefully indeed.

There are certainly causes for concern, and a need for regulation, as profit-seeking algorithms plunder what is left of our values.

If not regulated, I think that we’ll also see lots more legal constructions determining what we can and cannot do with algorithms.


Algorithms are aimed at optimizing everything.

They can save lives, make things easier and conquer chaos.

Artificial intelligence (AI) is naught but algorithms.

The material people see on social media is brought to them by algorithms.

In fact, everything people see and do on the web is a product of algorithms. Every time someone sorts a column in a spreadsheet, algorithms are at play, and most financial transactions today are accomplished by algorithms. Algorithms help gadgets respond to voice commands, recognize faces, sort photos and build and drive cars. Hacking, cyberattacks and cryptographic code-breaking exploit algorithms.

They are mostly invisible aids, augmenting human lives in increasingly incredible ways. However, sometimes the application of algorithms created with good intentions leads to unintended consequences.

We have already turned our world over to machine learning and algorithms.

Algorithms will continue to spread everywhere, becoming the new arbiters of human decision-making.

The question now is how to better understand and manage what we have done.

The main negative changes come down to a simple but quite difficult question:

How are we thinking, and what does it mean to think through algorithms that mediate our world?

How can we see, and fully understand the implications of, the algorithms programmed into everyday actions and decisions?

The rub is this: Whose intelligence is it, anyway?

By expanding the collection and analysis of data, and the application of the resulting information, a layer of intelligence – or of thought manipulation – is added to processes and objects that previously did not have that layer.

So prediction possibilities follow us around like a pet.

The result: As information tools and predictive dynamics are more widely adopted, our lives will be increasingly affected by their inherent conclusions and the narratives they spawn.

Our algorithms are now redefining what we think, how we think and what we know. We need to ask those who build them to think about their thinking – to look out for pitfalls and inherent biases before those are baked in and harder to remove.

Advances in algorithms are allowing technology corporations and governments to gather, store, sort and analyse massive data sets.

This is creating a flawed, logic-driven society, and as the process evolves – that is, as algorithms begin to write the algorithms – humans may be left out of the loop, letting “the robots decide.”

Dehumanization has now spread to our economic systems, our health care and our social services.

We simply can’t capture every data element that represents the vastness of a person and that person’s needs, wants, hopes and desires.

Who is collecting what data points?

Do the human beings the data points reflect even know, or did they just agree to the terms of service because they had no real choice?

Who is making money from the data?

How is anyone to know how his/her data is being massaged and for what purposes to justify what ends?

There is no transparency, and oversight is a farce. It’s all hidden from view.

I will always remain convinced the data will be used to enrich and/or protect others and not the individual. It’s the basic nature of the economic system in which we live.

It will take us some time to develop the wisdom and the ethics to understand and direct this power. In the meantime, we honestly don’t know how well or safely it is being applied.

The first and most important step is to develop better social awareness of who is applying it, how, and where.

If we use machine learning models rigorously, they will make things better; if we use them to paper over injustice with the veneer of machine empiricism, it will be worse.

The danger in increased reliance on algorithms is that the decision-making process becomes oracular: opaque yet unarguable.

If we are to protect the TRUTH, giving more control to the user seems highly advisable.

When you remove the humanity from a system where people are included, they become victims.

Advances in quantum computing and the rapid evolution of AI and AI agents embedded in systems and devices in the Internet of Things will lead to hyper-stalking, influencing and shaping of voters, and hyper-personalized ads, and will create new ways to misrepresent reality and perpetuate falsehoods to the point of no return.

All human comments appreciated. All like clicks and abuse chucked in the bin.

Contact: bobdillon33@gmail.com