(A thirty-minute read)
Do you have a right to believe what you want?
Yes, of course, but we now live in an algorithm-driven world that is blurring boundaries and amplifying the social tensions festering under the surface.
The problem is that we are allowing the building of technologies that are making consequential decisions about people’s lives.
AI is shaping people’s lives on a daily basis, but it’s an open question whether AI will become a trusted advisor or even a corrupting force.
It’s not COVID-19 that will kill us all; it’s profit-seeking algorithms.
However, my main concern in this post is whether AI techniques will develop into quantum algorithms that are totally out of control.
If artificial general intelligence is on the not-too-distant horizon, surely we should be ensuring that it is not owned by any one corporation and that at its core it respects our core values.
To achieve this, we surely cannot let wealth be concentrated in fewer and fewer hands, or leave the matter to the marketplace, or to any world organization that is not totally transparent and self-financing.
We therefore, as a matter of grave urgency, need a new world organization that vets all technology and algorithms. (See previous posts.)
Saying that algorithms are safe as long as they don’t go to war with each other and cause something even more difficult to diagnose than a stock-market crash is as naive as saying “It’s going to be great.”
Algorithms are increasingly in charge of a world that is precious to us all.
Basically, we’re entering the era of machines controlling everything.
If we want to create new different societies with human dignity for all we need to do something about it.
The difficulty of predicting the future is not just a cliché; it’s a basic fact of our existence. Part of the hypothesis of the Singularity is that this difficulty is just going to get worse and worse. Yes, creating AGI (Artificial General Intelligence) is a big and difficult goal, but according to known science, it is almost surely an achievable one.
Indeed, there are sound, though not conclusive, arguments that it may well be achievable within our lifetimes.
If we think in months, we focus on immediate problems such as present-day wars, the Covid crisis, the Donald Trumps, and the economy; if we think in decades, climate change, growing inequality, and the loss of jobs to automation all present dangers. But if we look at life in total, science is converging on data processing and on AI that is developing itself with algorithms.
When intelligence is approached in an incremental manner, with strict reliance on interfacing to the real world through perception and action, reliance on representation disappears.
It won’t be long before we are unable to distinguish the real world from the virtual world.
Since there is only one real world and there can be infinite virtual worlds the probability that you will inhabit this sole world is zero.
So it won’t matter whether computers will be conscious or not.
It is starting to feel like it’s every man for himself. It is possible that right now a global crisis is upon us without our even knowing, and the virus may not be the biggest threat but rather the crisis that follows, when the everyday goods that keep us alive will be gone: food, fresh water, medicine, clothes, fuel. Intelligence is decoupling from consciousness, and sooner rather than later decisions that are not possible to reverse will be consigned to Google, Facebook, Twitter, smartphones, and the like.
You might think the above is stupid, but it won’t be long before we are witnessing the most unequal societies in history.
——————————
We humans will soon be living with robots that process data without any subjective experience, consciousness, or moral opprobrium.
As we watch robots, autonomous vehicles, artificial intelligence machines, and the like slowly (and sometimes rapidly) permeate our world, it’s not hard to imagine them going from permeating to taking over.
Algorithms are increasingly determining our collective future.
It will only matter what they think about you.
We are already halfway towards a world where algorithms run everything.
This is why many of the issues raised in this post will require close monitoring, to ensure that the oversight of machine learning-driven algorithms continues to strike an appropriate and safe balance between recognizing the benefits (for healthcare and other public services, for example, and for innovation in the private sector) and the risks (for privacy and consent, data security and any unacceptable impacts on individuals).
——————————
WHAT CAN GOVERNMENTS DO?
Please regulate AI, this is too dangerous.
Given the international nature of digital innovation, governments should establish audits of algorithms, introduce certification of algorithms, and charge ethics boards with oversight of algorithmic decisions.
Why?
They are bringing big changes in their wake: from better medical diagnoses to driverless cars, and within central governments, where there are opportunities to make public services more effective and achieve long-term cost savings.
However, governments should produce, publish, and maintain a list of where algorithms with significant impacts are being used within central government, along with projects underway or planned for public-service algorithms, to aid not just private-sector involvement but also transparency.
Governments should not just simply accept what the developers of algorithms offer in return for data access.
To this end, Governments should be at the forefront of the creation of a “statutory building code”, which describes mandatory safety and quality requirements for digital platforms.
Social networks should be required by law to release details of their algorithms and core functions to trusted researchers, in order for the technology to be vetted.
This law should enable the enforcement of:
forcing social networks to disclose in the news feed why content has been recommended to a user.
limiting the use of micro-targeting advertising messages.
making it illegal to exclude people from content on the basis of race or religion, such as hiding a spare room advert from people of color.
banning the use of so-called dark patterns – user interfaces designed to confuse or frustrate the user, such as making it hard to delete your account.
labeling the accounts of state-controlled news organizations.
limiting how many times messages can be forwarded to large groups, as Facebook does on WhatsApp.
If we took the premise that people should have a lawful right to be manipulated and deceived, we wouldn’t have rules on fraud or undue influence.
———————————–
Today’s algorithms and where we are.
As data accumulates, even more so now with Covid-19 track and trace and with working from home, we have more centralized data repositories and large centralized AI models that work off centralized or decentralized data.
How does the concentration of power affect this balance that impinges on individual liberty?
Our democratic institutions and public discourse are underpinned by an assumption that we can at least agree on things that are true.
Facebook, Twitter, and YouTube create algorithms that promote and highlight information. That is an active engineering decision. Regardless of whether Facebook and Twitter profit from hate or not, it is a harmful by-product of the current design, and there are social harms that come from this business model.
Platforms that monetize user engagement have a duty to their users to make at least a minimum effort to prevent clearly identified harms.
We have to focus on the responsibility of platforms.
Because people are being manipulated with objectively false information, there has to be some kind of accountability for platforms.
Currently, these platforms are not neutral environments; with algorithms making decisions about what people see or do not see, there is no common understanding that certain things are manifestly true.
In most Western democracies, you do have the freedom of speech.
But freedom of speech is not an entitlement to reach. You are free to say what you want, within the confines of hate speech, libel law, and so on. But you are not entitled to have your voice artificially amplified by technology.
The way Facebook and other platforms approach this problem is:
We’ll wait and see and figure out a problem when it emerges. Every other industry has to have minimum safety standards and consider the risks that could be posed to people, through risk mitigation and prevention.
There are right now some objectively disprovable things spreading quite rapidly on Facebook: for example, that Covid does not exist, or that the vaccine is actually meant to control people’s minds. These things are manifestly untrue, and you can prove it.
However, algorithms are much more prevalent than that: the Apple Face ID algorithm, for instance, decides whether you are who you say you are.
Algorithms limit people’s worldview, which can allow large population groups to be easily controlled. Social media algorithms tuned to your desires and wants ensure that everything in your feed will be of interest to you, without your knowing what data these algorithms use or what they aim for.
Conclusion.
We are already living with large AI platforms that are monopolizing the fruits of globalization with billions being left behind.
And we accept this as if it were natural.
It will be too late when we are asking ourselves: what is more valuable, intelligence or consciousness? Then ask yourselves what happens to society, politics, and daily life when non-conscious but highly intelligent algorithms know us better than we know ourselves.
Whatever view one takes on artificial intelligence ethics, you can rest assured that we will see far more nut cases blowing themselves up and far more wars over finite resources, with vast movements of people.
We have to remember that self-regulation is not the same as having no regulation.
Of course, the loudest arguments for and against something often have one thing in common. They are often made by people with no desire to compromise or understand the other side.
I think self-regulation, in and of itself, contemplates people in power deciding how they will act.
We have to accept from history that we cannot possibly predict all the adverse consequences of technology, because it is not just the technology that has adverse consequences but the context in which it is applied.
It is impossible to regulate AI while thinking of all its potential adverse consequences: the seeds of harm can be sown at the design stage, the development stage, or the deployment stage.
We should not wait for the technology to become an application before we think about regulating it effectively.
There is a need to strengthen specific provisions to safeguard individual liberty and community rights when it comes to inferred data. There is a need to balance the trade-offs between the utility of AI and protecting privacy and data.
Self-regulation within the AI industry may not be enough, since it may not resolve the massive power differential between the people developing the technology and the people affected by it. Machine learning is the next step they are aiming for, with the algorithms deciding the input and output completely.
Inherent political and economic power hierarchies between the state and citizens and within the private sector need to be addressed because the promise of globalization is a lie when it comes to AI and prosperity for all.
Algorithms are being used in an ever-growing number of areas and in ever-increasing ways; however, like humans, they can produce bias in their results, even if unintentionally. We are all becoming redundant, with biotechnology becoming available only to the richest of us.
I don’t think that AI per se can be regulated because today it is AI, tomorrow it will be Augmented Reality or Virtual Reality, and the day after tomorrow it may be something that we can’t even think of right now.
So it is important to have checks and balances in the use and access to AI that go beyond just technological means.
Why?
Because they are also moving into areas where the benefits to those applying them may not be matched by the benefits to those subject to their ‘decisions’—in some aspects of the criminal justice system, for example.
However, technology companies are not all the same, and nor is technology the only part of the media ecosystem.
It is essential to ensure a whole society response to tackle these important issues.
You could require algorithms to have a trigger TO SHUT OFF, to stop misinformation or to stop terrorist groups using social media as a recruiting platform.
BUT who defines what counts as misinformation?
It is no longer possible for humans to fact-check everything, so the only course of action is an independent universal world algorithm designed to establish fairness.
While “fairness” is much vaguer than “life or death,” I believe it can, and should, be built into all AI algorithms.
Therefore, every social network should display a correction to every single person who was exposed to misinformation once independent fact-checkers identify a story as false.
(Google’s search algorithm is more closely guarded than classified secret documents, and Google now owns most of the largest data sets in the world, stored in its cloud.)
——————–
We now have algorithms fighting each other for supremacy in the market, preying on other algorithms to plunder the world’s exchanges for profit, to such an extent that they are now effectively in control of capitalism. Take algorithmic trading: it covers a vast subject, not just the buying and selling of large volumes of shares automatically at very high speed by unsupervised learning algorithms.
There are four major types of trading algorithms.
There are:
Execution algorithms
Behavior exploitative algorithms
Scalping algorithms
Predictive algorithms
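As a toy illustration of the first category, here is a minimal sketch of a TWAP-style execution algorithm, the simplest and most common kind: a large parent order is sliced into equal child orders spread evenly over a time window to reduce market impact. The function name and all parameters are hypothetical, chosen purely for illustration.

```python
# Minimal sketch of a TWAP-style execution algorithm: slice a large
# parent order into equal child orders spread evenly over a time
# window, to reduce market impact. All names here are illustrative.

def twap_slices(total_shares, window_minutes, interval_minutes):
    """Return (minute_offset, shares) child orders for a parent order."""
    n_slices = window_minutes // interval_minutes
    base = total_shares // n_slices
    remainder = total_shares - base * n_slices
    slices = []
    for i in range(n_slices):
        # Spread any leftover shares across the earliest slices.
        qty = base + (1 if i < remainder else 0)
        slices.append((i * interval_minutes, qty))
    return slices

# Six child orders, 10 minutes apart, summing to the parent order.
orders = twap_slices(total_shares=10_000, window_minutes=60, interval_minutes=10)
```

Behavior-exploitative, scalping, and predictive algorithms build far more market logic on top of this kind of scheduling, but the basic mechanics of slicing and timing are the same.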
Transparency must be a key underpinning for algorithm accountability.
Why?
Because it will make it easier for the decisions produced by algorithms to be explained.
(The ‘right to explanation’ is a key part of achieving accountability and tackling the ethical implications around AI.)
We are only on the outskirts of mind science, which presently knows little about how the mind works, never mind consciousness. We have no idea how a collection of electrical brain signals creates subjective experience, even though we are conscious of our dreams.
99% of our bodily activities take place without any conscious feelings.
As neuroscientists acquire more and more data about the workings of the brain, the stated purpose of the cognitive sciences is to combine data from numerous disciplines so as to better understand such diverse phenomena as perception, language, reasoning, and consciousness.
Even so, the subjective essence of “what it means” to be conscious remains an issue that is very difficult to address scientifically.
To really understand what is meant by the cognitive neurosciences, one must recall that until the late 1960s, the various fields of brain research were still tightly compartmentalized. Brain scientists specialized in fields such as neuroanatomy, neurohistology, neuroembryology, or neurochemistry.
Nobody was yet working with the full range of investigative methods available, but eventually the very complexity of the subject at hand made that a necessity.
The first problem that arises when examining consciousness is that a conscious experience is truly accessible only to the person who is experiencing it. Despite the vast knowledge we have gained in the field of mathematics and computer science, none of the data processing systems we have created needs subjective experiences in order to function.
None feel pain, pleasure, anger, or love.
These emotions are vanishing into algorithms that have, or will have, an effect not only on how we see the world but also on how we live in it.
If not addressed now, all moral and political values will disappear, turning consciousness into a kind of mental pollution. After all, computers have no minds.
Take images on Instagram: they can affect mental health and body image.
You might say, so what, that has always been the case. And you would have been right up to now, but because of Covid-19, governments have given themselves wide-ranging powers to collect and analyze data, without adequate safeguards.
If we are not careful, they will have no notion of self, existing only in the present, unaware of past or future, and therefore unable to consciously plan for future eventualities.
Unconscious algorithms in our brains rather than conscious images in a mind.
If you are using a smartphone, it indirectly means that you are enjoying AI, knowingly or unknowingly. It cannot be modified unknowingly, nor can it get disfigured or break down in a hostile environment.
We should not be regulating the technology in general but Artificial Intelligence itself.
Its behavior is so complicated that we need to regulate it at the data level.
In many regulated domains there is a notion of post-market surveillance, where the developer bears responsibility for how the technology they developed is going to be used.
As William Shakespeare wrote in As You Like It:
“All the world’s a stage, and all the men and women merely players; they have their exits and their entrances.”
Sadly, with AI and machine-learning algorithms, no one knows, or for that matter will ever know, when they enter or exit.
Like AI, learning is an ongoing process that takes place throughout life: the process of moving information from out there to here. Unfortunately, the brain has its own set of rules by which it learns best, and unlike with AI, the information doesn’t always stick. Together, we have a lot to learn.
Humanity is in contact with humanity.
All human comments appreciated. All like clicks and abuse chucked in the cloud bin.
We risk a social media oligarchy where the richest participants are allowed to spread dangerous misinformation.
In these extraordinary times, I am sure I speak for world citizens that we count on our leaders to bring out their statesmanship and have the courage and imagination to think and work together to fight this pandemic in equally extraordinary ways.
We may be about to face the perfect storm:
A humanitarian disaster, global recession, severe de-globalization, the crash of healthcare systems, social breakdown, and conflicting nationalisms, not forgetting the power of AI and its algorithms, all point to the need for value realignment.
Many of these issues have a historical basis, so potential risks and ways to approach them are not as abstract as we may think.
How do we actually design a new system that can understand and implement the various forms of preferences and values of a population?
The ideal system is, of course, a balance between the needs of the numerous stakeholders: the people, and the earth we all live on.
So how do our societies reconcile their own historic aspirations while we are struggling with a world of ARTIFICIAL INTELLIGENCE that is isolating us all into data?
Neither China nor the US, Iran, Indonesia, nor any other country can insulate itself from what is to come. COVID-19 should be the exception to, not the extension of, geopolitical rivalry. It should be an opportunity to recover trust rather than advance mistrust.
HOWEVER, WHAT WE WILL WITNESS IS OUR COLLECTIVE INABILITY TO ACT AS ONE. (DUE TO A MULTITUDE OF REASONS FAR TOO LONG TO ADDRESS HERE.)
From what we see to date:
With the erosion of democratic institutions, the rise of the right, the loss of jobs, false news, rising inequality, foodbanks, and our inability to tackle climate change or stop wars, and without any robust mechanisms of oversight and accountability for profit-seeking algorithms, there seems little hope for future generations.
Artificial intelligence, now embedded in our daily lives, has still to show empirical evidence validating that AI technology will achieve the broad base of social benefit we aspire to.
We need a community of researchers worldwide to really understand the range of potential harms that AI systems pose: the use of data, machine learning, and their applications to society, from face recognition to track and trace, all in use without any regulation.
Therefore there is only one solution to the problems facing us all and that is the introduction of a basic living wage for all.
Why?
Because cash is the best tool for improving health outcomes and education outcomes and for lifting people out of poverty. It’s the only solution for an economy where a small group of people is getting very, very wealthy while everyone else struggles to make ends meet.
It would remove the problem with existing welfare programs that keep people below the poverty line, a form of structural inequality.
It would also cost governments less by simplifying welfare programs.
A guaranteed income would give young couples the confidence they need to start a family.
From a macro viewpoint, it would give society a much-needed ballast during a Depression.
It would offset job losses caused by technology.
What are the downsides?
Inflation.
Who funds it?
Many would support it if tech companies with profit-seeking algorithms paid for it, funded by levies on high-frequency trading, hedge funds, sovereign wealth funds, and currency trades over $50,000.
Cash is king. It’s an idea that is long overdue.
Both the current pandemic and automation are fundamentally changing the structure of the economy. Proposals for various forms of regular cash assistance are increasingly part of the political conversation. And in fact, the cash payments of 2020 are serving as something of a real-life test of the principles behind UBI, even if there are important differences.
All human comments appreciated. All like clicks and abuse chucked in the bin.
Remember when people used to judge you first by your handshake? It formulated a picture of a person we were meeting for the first time.
In the span of a few seconds, it laid the foundation for how others perceive and feel about us, and we about them.
“It was wet,” “It was creepy,” “It was firm,” “It was crushing,” “It was a Mormon handshake,” “It was a Masonic probing handshake”; enthusiastic, vigorous, prolonged, from high-fives to fist-bumping.
A handshake was a globally widespread, brief greeting or parting tradition in which two people grasp one of each other’s hands, making impressions with a very long shelf-life based on a brief but important meeting.
Your handshake is the business card you leave behind.
It is believed by some to have originated as a gesture of peace, demonstrating that the hand holds no weapon.
It is a reassuring tactile touch that we as social animals share is essential for social interaction, social harmony, health, survival, and security, as well as for communicating our true feelings.
It serves as a means of transferring social chemical signals between the shakers.
What is even more startling is how long we remember those bad handshakes — sometimes we remember for decades.
Today we pay for items with the swipe of our phone or by inserting a small plastic card into a reader. The old handshake just doesn’t have its place anymore.
We can also spend thousands of hours clicking a mouse over a small image on a computer screen. Nothing is real, nothing is said – only ones and zeros racing around the globe in small packets of data.
The world of technology continues to draw us into a world where we no longer look one another in the eye; the art of the handshake is dead.
With social media, face recognition, Instagram, Facebook, smartphones, emails, etc., our most valuable currency, the handshake, is evaporating, replaced by digital signatures and passwords that are undermining our trust in each other.
It’s no wonder that so many people get something so simple as a handshake wrong.
Take the Politician’s Handshake:
Using two hands to cover or cup the other person’s hands, twisting the other person’s hand so that yours is on top, or playing hand jujitsu to let the other person know you are in charge is just rubbish.
In the real world, shaking a person’s hand allows you to establish your friendliness and accessibility: for example, when meeting your future in-laws for the first time, or at your first job interview.
It might be true that in the future daily and weekly media will be more and more electronic, but physical media will always exist.
Stand in front of the webcam and send a digital emoji and you could be shaking hands with the devil.
You cannot reproduce a handshake with meaning electronically.
This is a part of the beauty and the freakiness of the internet no handshake required.
It’s no wonder there is grooming.
There was a time that a person had to put on nice clothes and go out into the real world to meet a love interest.
Today, you can be “out there” without ever having to go out- online dating.
You can even engage in a virtual relationship by using email or instant messaging. It is possible to get to know a person on a relatively deep level without ever meeting at all.
Customs surrounding handshakes are specific to cultures and can offer some real benefits. Take Brazilian negotiators: they touch each other almost five times each half-hour, whereas there is no physical contact between American negotiators.
In postmodern society, superstitions don’t have much of a place, but for most of history they played a huge role in shaping culture and society, even before the arrival of the handshake.
The internet cares not what you do. You miss out on real contact with people.
It is affecting our ability to connect with others as equals. Not being able to manage the normal tasks of adult living results in more and more limp handshakes, which leads to problems with society and an inability to get along with others.
Although teens are staying in constant contact via the Internet and texting, these friendships do not foster trust and intimacy the way face-to-face contact does.
The centuries-old practice of sealing a deal may seem quaint, but in the future its importance will tell us whether we are dealing with a robot or not.
As the appreciation of small things disappears; nature loses its brilliance.
Our planet is in a tight spot; let’s shake hands on that.
As we know there can be no peace no universal action on anything without it.
All the verbal diarrhoea in the world cannot replace it.
All human comments appreciated. All like clicks and abuse chucked in the bin.
Is it time we started to demand that if you use my personal data, it’ll cost you, because I am worth it?
We all make a trade-off between security and convenience, but there is a crucial difference between security in the old-fashioned physical domain, and security today.
Security is done digitally with algorithms exploiting and analysing your very mood.
In this digital lifestyle, it is nearly impossible to take part in the web world without leaving a trail behind.
Personal privacy is dead.
We have no clear sight into this world, and we have few sound intuitions into what is safe and what is flimsy – let alone what is ethical and what is creepy.
We are left operating on blind, ignorant, misplaced trust; meanwhile, all around us, without our even noticing, choices are being made.
With the increasing ownership of mobiles, marketing companies now have unlimited access to our personal data. Every site one opens has an agreement form to be ticked with terms and conditions that are all but unreadable on small screens.
It’s not a choice between privacy and innovation; it is the erosion of legally ensured fundamental privacy rights through the apps we interface with.
Nuggets of personal information that seem trivial, individually, can now be aggregated, indexed and processed.
When this happens, simple pieces of computer code can produce insights and intrusions that creep us out or even do us harm. But most of us haven’t noticed yet.
Since there’s no real remedy, giving away our most sensitive and valuable data, for free, to global giants, with completely uncertain future costs, is a decision of dramatic consequence.
iCloud and Google+ have your intimate photos; transport companies know where your travelcard has been; Yahoo holds every email you’ve ever written. And we trust these companies to respect our privacy.
You only have to be sloppy once, for your privacy to be compromised.
With your Facebook profile linked, I could research your interests before approaching you.
Put in someone’s username from Twitter or Flickr, and Creepy will churn through every photo-hosting service it knows, trying to find every picture they’ve ever posted.
Cameras – especially phone cameras – often store the location where the picture was taken in the picture data. Creepy grabs all this geolocation data and puts pins on a map for you.
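The kind of geolocation harvesting just described is trivially easy to reproduce. As a minimal sketch (not the actual Creepy code), here is how a few lines of Python using the Pillow imaging library can check whether a photo carries an embedded GPS tag; the file name is invented for the example:

```python
# Minimal sketch: check whether a photo carries an embedded GPS tag
# (standard EXIF tag 34853, "GPSInfo"), the data a tool like Creepy
# harvests. Requires the Pillow library; the file name is illustrative.
from PIL import Image

GPSINFO_TAG = 34853  # standard EXIF tag id for GPSInfo

def gps_info(path):
    """Return the raw GPSInfo EXIF entry for an image, or None."""
    with Image.open(path) as img:
        exif = img.getexif()
        return exif.get(GPSINFO_TAG)

# A freshly generated image has no EXIF data, so no GPS leak:
Image.new("RGB", (8, 8)).save("no_gps.jpg")
print(gps_info("no_gps.jpg"))  # prints None
```

Run against a photo straight off a phone with location services on, the same call typically returns latitude, longitude, and even altitude, which is exactly why stripping EXIF data before posting matters.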
Then comes an even bigger new horizon.
We are entering an age of wireless information: information you didn’t know you were leaking.
Maybe the first time you used a new app.
Every device with Wi-Fi has a unique “MAC address”, which is broadcast constantly as long as wireless networking is switched on.
Many shops and shopping centres, for example, now use multiple Wi-Fi sensors, monitoring the strength of connections, to triangulate your position, and track how you walk around the shop. By matching the signal to the security video, they get to know what you look like. If you give an email address in order to use the free in-store Wi-Fi, they have that too.
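The triangulation step can be sketched as a toy calculation: given estimated distances from a device to three sensors at known positions, two linear equations pin down the device’s location. The sensor positions and distances below are invented for illustration; real systems must also convert noisy signal strength into distance, which this sketch skips.

```python
# Toy trilateration: estimate a device's (x, y) position from its
# distances to three Wi-Fi sensors at known positions, the way shops
# localize a phone from signal strength. All coordinates are invented.
import math

def trilaterate(p1, p2, p3, d1, d2, d3):
    """Solve the linearized trilateration equations for (x, y)."""
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    # Subtracting the circle equations pairwise gives a 2x2 linear system.
    a1, b1 = 2 * (x2 - x1), 2 * (y2 - y1)
    c1 = d1**2 - d2**2 + x2**2 - x1**2 + y2**2 - y1**2
    a2, b2 = 2 * (x3 - x2), 2 * (y3 - y2)
    c2 = d2**2 - d3**2 + x3**2 - x2**2 + y3**2 - y2**2
    det = a1 * b2 - a2 * b1
    x = (c1 * b2 - c2 * b1) / det
    y = (a1 * c2 - a2 * c1) / det
    return x, y

# Device actually at (3, 4); exact distances measured from three sensors.
sensors = [(0, 0), (10, 0), (0, 10)]
dists = [math.dist((3, 4), s) for s in sensors]
position = trilaterate(*sensors, *dists)  # approximately (3.0, 4.0)
```

With noisy real-world signal strengths the distances are only estimates, so deployed systems use many more sensors and a least-squares fit, but the principle is the same: your phone’s constant broadcasts are enough to place you on the shop floor.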
Once aggregated, these individual fragments of information can be processed and combined, and the resulting data can give away more about our character than our intuitions are able to spot.
When I realised that I am being traced over much wider spaces, from one part of town to another, I asked myself: what is the point of giving your information away when you could franchise it out and get something back in return?
Public debate on the topic remains severely stunted.
Given the current trends in the globalization of technology in the knowledge society, we have to start asking where the world is moving to.
The concepts and applications of biocomputing, medical informatics, anthropocentric computing, high-performance computing, technological diffusion, predictive analysis tools, genetic algorithms, and cultural informatics are all new or little-known fields of information technology.
Many organisations create, store, or purchase information that links individuals’ identities to other data. Those who can access and analyse these personal data profiles can gain deep insights into an individual’s life.
A law-abiding citizen might say “I have nothing to conceal.” This is a misconception.
In any debate, negotiation or competitive situation, it is an advantage to know about the other party’s position in order to achieve one’s own desired outcome.
Data brokers buy and combine data from various sources (online and offline) to deliver information on exactly defined target groups to their customers.
“Click-world” merchants know a lot more about their clients’ private and financial habits than the individual knows about the merchant company or its competitors.
You therefore could not be blamed for asking: given the increasing bargaining position of merchants, is the consumer still getting a good deal?
It would be interesting to know how good a deal consumers get when they exchange their data for free-of-charge online services.
Data has become an economic good for which the “producer” is usually not remunerated.
Data privacy is a matter of choice and individuals should have the right to decide if a company can collect information on them.
Is there a solution?
Of course, if you Google it, you will get all sorts of advice, such as avoiding cookies, using a VPN, disabling location tracking on your devices, and using browsers that don’t track your activities.
It’s tempting to just play ostrich and bury our heads in the sand; however, data collection is affecting, and will continue to affect, your life.
This is why we must preserve the right of individuals to know what kind of information is being collected and what is being done with that information.
You could say that the most valuable thing on your computer or network is the data you create. After all, that data is the reason for having the computer and network in the first place.
The first thing to understand is that there is very little that can “prove” that any data holder (whether an individual, government entity, corporation, etc.) is engaged in safe or adequate data-handling practices.
Therefore:
We must retain the right to define our own privacy boundaries and then advocate for those boundaries before invasions in our daily lives become out of control and irreversible.
All human comments appreciated. All like clicks and abuse chucked in the bin.
This technology is a crucial part of one of the most extensive, intrusive, and oppressive surveillance apparatuses in history.
All over the world private businesses, law enforcement agencies, and national governments are using facial recognition algorithm systems.
You only have to look at China, home to some of the world’s most powerful facial recognition systems and most advanced street surveillance cameras. Equipped with one of the world’s largest photo identification databases and nearly 200 million surveillance cameras, China is at the cutting edge of facial recognition technology, capable of tracking more than a billion people.
Jaywalkers are already learning that they can suddenly find their faces projected on screens erected along the streets. Once the system identifies your face, all your information (such as your mobile phone number) is linked to it.
It even has public toilet paper dispensers that remember the user’s face.
All contributing to an overly oppressive surveillance state with unprecedented power to track people going about their daily lives.
You might say that all of this is incompatible with a healthy democracy like ours!
How would you feel if you knew that every time you went out in public you were being watched and could easily be identified through this technology?
How would it change your behaviour?
Large databases — such as social media profiles, financial transactions, and telecommunication signals — may begin working in tandem, as a backend service, with recording devices to correlate, analyse, and extract even greater amounts of granular information about an individual once he or she is recognised by a facial recognition device.
Such recordings can yield a host of interconnected inferences about an individual’s associations, subtle proclivities, nascent behaviours, and more.
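To make the correlation step concrete, here is a hedged sketch of what a backend join might look like once a camera match returns a person identifier. Every name, database, and record below is invented for illustration; the point is only that one face match becomes the key that unlocks everything else held under that identity.

```python
# Hypothetical sketch: a face-recognition device returns a person ID,
# and backend services fan out to other databases, merging everything
# keyed to that ID into one dossier. All names and data are invented.
social = {"person_42": {"friends": 180, "groups": ["cycling"]}}
transactions = {"person_42": {"last_purchase": "train ticket", "city": "Leeds"}}
telecom = {"person_42": {"cell_tower": "LDS-7", "seen_at": "08:15"}}

def correlate(person_id: str) -> dict:
    """Join every backend record available for one recognised face."""
    dossier = {"id": person_id}
    for db in (social, transactions, telecom):
        dossier.update(db.get(person_id, {}))
    return dossier

# A single camera match now carries location, habits, and associations.
dossier = correlate("person_42")
print(dossier["city"], dossier["cell_tower"])
```

The technical work here is trivial: once the identifier exists, the correlation is a handful of dictionary lookups, which is precisely why the databases themselves, not the cameras, are where the power concentrates.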
It is going to end up eroding our fundamental rights.
Think about it, what happens when they know your face?
You can say there are positives, and there are, but they do not outweigh the negatives.
The cameras around us will be so advanced that they will have the wherewithal to recognise you.
Your face and your information could end up in some random person’s hands at the click of a button.
Is it too early to completely ban this technology?
If surveillance does have public safety value, is it irresponsible not to use it? Could a ban limit its future development and potential? Or, is outlawing it the best way to make sure it doesn’t spiral out of control?
It is hard to deny that there is a public safety value to this technology.
What is urgently needed is regulation that acknowledges the usefulness of face recognition while banning users of commercial face recognition technology from collecting and sharing data to identify or track consumers without their consent.
There are seriously conflicting interests here, but I think my principal objection is the use of data obtained to profile me or target me for advertising.
I think that aspect of technology has advanced much too far beyond what our laws are prepared to deal with.
I don’t believe there is any real expectation of privacy in a public place.
We are constantly watched on security cameras virtually everywhere, and I don’t have a real problem with that, so long as my face is not tied to my name or other personally identifiable information.
Fair restrictions must be written into law to be effective.
The next thing we will see is face recognition combined with body gesture recognition edging over into e-commerce through unmanned stores.