When you hear the word slavery, it conjures images of shackles and mistreated people forced to work.
This image was once a true, vivid picture; however, the term has broadened, and slavery now carries many more definitions, creating a new image for that vile word.
THERE IS A NEW, MODERN, INVISIBLE SLAVERY THAT ENSLAVES PEOPLE WITHOUT THEM EVEN KNOWING IT, AND MOST OF US CHOSE THIS FORM OF SLAVERY.
This new strain is much more virulent and deadly, adding hundreds of thousands of new slaves to the mix every minute of the day: algorithmic slavery, a hidden world programming itself.
Slaves are cheap these days.
There are an estimated 45.8 million people in modern slavery around the world, more than in the 18th century at the height of the transatlantic slave trade.
Simply knowing the statistic that 45.8 million individuals are enslaved in our world is not enough to put an end to the malpractice of modern-day slavery.
We all can and should play a part in the international advocacy for the freedom and rights of all, not only as fellow human beings but also as concerned community leaders and consumers in the global economy.
Algorithms are the step-by-step instructions working quietly behind the scenes of everyday life: in internet search engines, satnavs, air traffic control, and food delivery services.
Companies and governments increasingly rely upon algorithms to make decisions that affect people’s lives and livelihoods – from loan approvals to recruiting, legal sentencing, and college admissions – from internet search results to product recommendations, dating matches, and what content goes up on our social media feeds.
Slavery today includes:
10 million children.
24.9 million people in forced labor.
15.4 million people in forced marriage.
4.8 million people in forced sexual exploitation.
Human trafficking and slavery are the fastest-growing illegal activities in the world today.
Keep the National Human Trafficking Resource Center’s 24/7 confidential hotline handy.
Saving this number in your contacts and using it whenever you suspect you have seen a victim of human trafficking is one of the easiest and most effective ways to aid law enforcement officials in uncovering exploitation, bringing traffickers to justice, and restoring victims to freedom.
——————
Algorithms have been rising fast and saturating our modern world.
We should not take the path of least resistance by sitting in judgment on the past while ignoring the injustices of our day.
Most algorithms in the world today are created and managed by for-profit companies, and many businesses regard their algorithms as highly valuable forms of intellectual property that must remain in a “black box.”
Every time a site is opened we are confronted with an agreement template and a choice to agree or not. Many websites prompt you to agree to their terms of use before you can register on the website or even use it.
There are two different types of website agreements: browsewrap and clickwrap.
A browsewrap agreement is connected to the main page of the product by a hyperlink. The hyperlink leads to another webpage that will have the terms and conditions of the agreement detailed.
A clickwrap agreement is designed to ensure that the user has a chance to see the terms of use and must actively accept them in order to proceed. (This one is more legally binding.)
But neither type is transparent, as they do not reveal the source code, inputs, and outputs of the algorithm that is running the site.
Without this transparency, the question is how they can be legally binding.
Specifically, machine learning algorithms, and deep learning algorithms in particular, are usually built on just a few hundred lines of code. The algorithm's logic is mostly learned from training data and is rarely reflected in its source code, which is to say that some of today's best-performing algorithms are often the most opaque.
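To make the point concrete, here is a minimal, purely illustrative sketch: the toy "loan approval" data, the training loop, and every number in it are hypothetical, but it shows how the source code can be a handful of lines while the decision rule the system actually applies lives in the learned weights.

```python
# A minimal sketch (illustrative only): the "program" is a few lines,
# but the decision logic ends up in the learned weights, not in the code.
# The toy training data below is invented, not from any real system.

import random

def train(data, epochs=50, lr=0.1):
    """Tiny perceptron: learns weights from (features, label) pairs."""
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for x, label in data:
            pred = 1 if (w[0] * x[0] + w[1] * x[1] + b) > 0 else 0
            err = label - pred
            w[0] += lr * err * x[0]
            w[1] += lr * err * x[1]
            b += lr * err
    return w, b

# Invented "training data": e.g. (income, savings) -> loan approved or not.
random.seed(0)
data = [((random.random(), random.random()), 0) for _ in range(50)]
data += [((1 + random.random(), 1 + random.random()), 1) for _ in range(50)]

weights, bias = train(data)
print("Learned weights:", weights, "bias:", bias)
# Reading the source tells you almost nothing about why any particular
# applicant is approved or rejected; that logic now lives in the numbers above.
```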
This is the new form of slavery, now being promoted by track and trace and by the current Covid pandemic's digital certifications, where no one knows how, or by whom, the data being produced now and in the future will be controlled.
This suggests that technical transparency must become law.
Essentially such laws would mandate that users be able to demand the data behind the algorithmic decisions made for them, including in recommendation systems, credit, and insurance risk systems, advertising programs, and social networks.
In doing so, it tackles “intentional concealment” by corporations.
But it doesn’t address the technical challenges associated with transparency in modern algorithms. Here, a movement called explainable AI (xAI) might be helpful.
However, this approach merely shifts the burden of belief from the algorithm itself to the regulators.
In the world of data analytics, it’s frequently assumed that more data is better.
But I firmly believe that the resistance to getting vaccinated is founded on this dilemma of trust.
In risk management, data itself is often a source of liability. That's beginning to hold true for artificial intelligence as well.
All human comments are appreciated. All like clicks and abuse chucked in the bin.
I don’t know if like me you are getting sick to death of hearing and reading the following phrases:
WE NEED TO. WE VALUE YOUR PRIVACY, WHAT LESSON SHOULD WE LEARN. LET ME BE VERY CLEAR. 110 PERCENT. ETC.
Unfortunately, we treat the future like a distant colonial outpost devoid of people where we can dump ecological degradation, technological risk, nuclear waste, and the public debt, and that we feel at liberty to plunder as we please.
If you have a young child, they will likely talk to computers, as naturally as they do with you, for the rest of their life, and the computer will not need to learn lessons.
We are standing on the precipice of life-altering technologies, but unable to break free from a continuous cycle of surprise and fear because we can’t come together to address collectively the existing problems not to mention what is awaiting us all down the road.
Global warming is the greatest existential challenge of our age, requiring massive societal changes to mitigate and adapt to it.
However, there is another threat that is being ignored at our peril.
Politicians (the vast majority of whom have no background in science or technology) are unable to look past the next election, making important policy decisions with little regard to how they will affect the planet and country 20, 50, or 100 years from now.
This is why governments need to set up a Department for the Future, to depoliticize technology and science.
The citizens of tomorrow are granted no rights. There are no government departments or world organization bodies to represent their concerns or potential views on decisions today that will undoubtedly affect their lives.
Representative democracy systematically ignores the interests of future people.
The world is presently experiencing a new form of colonization not by wars but by Digital Data, combined with climate change.
This colonization is presently happening between China and the USA.
—————–
The “Digital Divide” is the gulf between those with access to both the necessary technology and the information accessible with it, and those without.
The immediate concern is that those with the technology will acquire the necessary skills for the twenty-first century and those without will not, further widening the economic chasm between the lower-income strata and those who manage the data.
Technology has an obsoleting impact on those without the proper skills and, with the speed at which the technology changes, it is very difficult – near impossible for some – to keep current.
This will become even more of a concern as the wealthier private and public school systems begin to acquire personal computer networks and internet connections while schools in poorer neighborhoods do not.
Those who grow up with technology assimilate it into themselves;
“WE value your Privacy “
—————————
There will constantly be new tools – the cloud, big data, location analysis, etc. – and ones of which we have not yet heard.
In a data-driven world, it will be too late unless we establish an organization that can understand the context of all future interactions.
Those who do not embrace them may be ambushed by them and by a younger generation pushing them out the door.
When it comes to robots, the Three Laws are:
A robot may not injure a human being or, through inaction, allow a human being to come to harm.
A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.
A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.
By Isaac Asimov in his 1942 short story “Runaround.”
All human comments are appreciated. All like clicks and abuse chucked in the bin.
THE BEADY EYE ASKS: WHEN ARE WE GOING TO RECOGNIZE THE ENORMOUS AND UNDENIABLE POWER THAT NATURE HAS OVER CIVILIZATION AND OVER ITS POLITICS?
Scientists have not suggested that climate played any direct role in causing the current COVID-19 outbreak, but no matter how one looks at what is happening, climate change is making outbreaks of disease more common and more dangerous, which promises to amplify the harm and make even unrelated crises more painful.
You would have to be blind, deaf, and dumb not to recognize nature’s enormous and undeniable power over civilization and even over its politics, and we urgently need a shared vision of basic values to provide an ethical foundation for the emerging world community.
You don’t have to be a scientist to know that the loss of biodiversity and encroachment by civilization will help new viruses jump to people and that we are still turning a blind eye to the fact that our behavior is driving this.
There are undeniably numerous reasons alone that may make the pandemic prologue for more far-reaching and disruptive changes to come. However, it is also clear that climate policy today is indivisible from efforts to prevent new infectious outbreaks.
Slow action on climate has made dramatic warming and large-scale environmental changes inevitable. It is now up to the public and leaders to understand that it’s human behavior driving the rise in disease, just as it drives the climate crisis.
The current pandemic and climate change are both demonstrating this in real-time.
Today, hurricanes are larger and more intense than ever. Fires are spreading faster and further amid drought, their total size having doubled in recent years.
So what can be done to speed up the transformation of how we manage our world?
Promoting sustainability requires a total transformation of how we think, buy and use earth resources.
This transformation is now in its infancy.
The seeds of a values-based approach exist, yet 'ethical' values such as trust, integrity, justice, and compassion are usually neglected in sustainability assessment.
The authorities tasked with responding to the next crisis will already be consumed by other emergencies, their capacity to provide even the most fundamental aid limited, their budgets gutted.
It is time that the Advertising industry step up to the challenge.
While seemingly benign, advertising is one of the key drivers behind climate change, promoting excessive consumption.
Advertising is an excellent form of communication, capable of delivering a wealth of information to consumers on varying topics. It has become so powerful and so subtle that consumers accept most advertising content without question.
On the other hand, advertising to sell products is willing to sacrifice the health and welfare of the consumer, turning them into gluttons, and lowering the moral standards of our society.
By their very nature, advertising, language, logic, and action force separation, discrimination, and choice.
Advertising creates excess “want” in a society that does not know the meaning of “need.”
Marketing in its various forms continues to grow through mobile, content marketing, social platforms, and new digital platforms, and is now worth $1.2 trillion.
Digital platforms like Facebook, Twitter, LinkedIn, Instagram, Snapchat, and Google AdWords have a glaring responsibility to introduce green-field technology that creates genuinely new opportunities for advertisers and marketers, not to mention consumers.
To assist in this transformation, we must start to legislate against advertisements that promote unsustainable consumption for the sake of short-term profit.
Media outlets should be legally required to remove any advertising that does not meet the values of sustainability.
The ability to motivate an entire group to strive toward a specific goal is a major part of what makes a good advertisement.
————–
At the end of the day, the creation of new indexes and methodologies to measure human and economic development are needed, since they will provide us with a wider toolkit to analyze our main subject: economic sustainable growth.
Well-established measures such as Gross Domestic Product (GDP) should not be the key determinants of national policies and legislation.
GDP can remain as a complementary indicator to development, but it is not an adequate indicator when considered on its own.
It represents the value of all goods and services produced over a specific time period within a country’s borders.
We know that in an economy, GDP is the monetary value of all final goods and services produced while it is totally removed from the damage it creates to our core values.
However, it fails to account for the multi-dimensional nature of development or the inherent shortcomings of capitalism, which tends to concentrate income and, thus, power.
If we continue to concentrate on GDP, disasters that might otherwise have proved manageable will compound and amplify COVID's effects until the hurt, measured in lives, livelihoods, and property damage, winds up worse than it might have been from any one disaster alone.
Throwing money at a problem doesn’t work without core values determined from that starting point. One of the limitations of GDP is that it only addresses average income, failing to reflect how most people actually live or who benefits from economic growth.
———————–
What’s known as biodiversity is critical because the natural variety of plants and animals lends each species greater resiliency against threat and together offers a delicately balanced safety net for natural systems.
As diversity wanes, the balance is upset, and remaining species are both more vulnerable to human influences and, according to a landmark 2010 study in the journal Nature, more likely to pass along powerful pathogens.
The human spirit has consistently sought to transcend material, biological, physiological, and technological limitations, but the present world problems won't be solved by technology, because no feasible technology will sufficiently decouple economic activity from environmental impact.
Take profit-seeking algorithms for example.
The urgency of the climate change crisis really can’t be overstated. Unless action is taken across all levels of society over the next decade, we’re looking at a near future of droughts, flooding, and poverty for hundreds of millions of people.
So here’s the existential crisis for adland:
The more effective it is at selling products to consumers, the worse the climate crisis gets.
THE ADVERTISING INDUSTRY HAS A MORAL DUTY TO STOP EXCESSIVE CONSUMPTION. TO STOP PROMOTING EAT MORE, BUY MORE, GAMBLE MORE. DIE SOONER.
All human comments are appreciated. All like clicks and abuse chucked in the bin.
Artificial intelligence might be a term for a collection of concepts that allow computer systems to work vaguely like a brain. However, the use of numbers to represent complex social reality is flawed.
AI might seem factual and precise when it isn’t as the results that AI produces depend on how it is designed and what data it uses.
At the moment, in our everyday world, AI performs narrow tasks such as facial recognition, natural language processing, or internet searches, but the pace of its progress is exponential and, regardless of its benefits, the impact it is having is hard to ignore, with more and more of the world's commerce becoming automated and trading going online.
It’s transforming our world and will impact all facets of society, economy, living, working, healthcare, and technology in the not-so-distant future. It’s poised to have a major effect on sustainability, climate change, and environmental issues.
Anybody assuming that the capabilities of intelligent software will cap out at some point is mistaken.
The applications of AI now span industry, healthcare and medical diagnostics, transport, agriculture, education, economics, machine and deep learning, data analytics, knowledge reasoning and discovery, natural language processing, computer vision, and robotics, as well as social sciences, ethics, legal issues, and regulation, all of which have implementations in modern society.
And that is only a drop in the ocean.
It’s in automated reasoning and inference, autonomous agents and multi-agent systems, artificial consciousness, case-based reasoning, representation, neuro-inspired computing, process improvement and planning, robotic process automation, symbolic reasoning.
AI-augmented immersive (VR, AR, MR & XR) reality, AI-enabled customized manufacturing, AI-enabled data-driven techniques, AI in cyber-physical systems, AI in image analysis and video processing, AI in perception and multimedia sensing, AI-supported sensors, IoT and smart cities, AI in education, autonomous vehicles, business and legal applications of AI, cognitive automation, hyper-automation, digital twins, healthcare, medical diagnosis and rehabilitation, robotics and robot learning, human-robot/machine interaction, industrial AI and optimization, symbiotic autonomous systems, as well as many others relating to AI.
Unless you have direct exposure to groups like DeepMind, most of us have no idea where it is leading us, and the possibility of something seriously dangerous happening is within a five-year timeframe, ten years at most.
The time is now to determine what dangers artificial intelligence poses.
Already the legal, political, societal, financial, and regulatory issues are so complex and wide-reaching that it's necessary to take a look at them now, not tomorrow.
AI’s role is now so widely accepted that most people are completely unaware of it.
If we don't, its usage will lead to separation and polarisation in the public sphere and to the manipulation of elections.
It has permeated the key sectors of most developed economies with profit-seeking algorithms that are promoted as if they are flawless, but they are built on human bias.
It is removing the ability to make judgments and our sense of responsibility.
It cannot explain or justify reaching a decision or action in the first place.
It is creating impersonal bureaucracies and leaders.
It feeds on historical data.
Russia’s President Vladimir Putin said: “Whoever becomes the leader in this sphere will become the ruler of the world.”
Apart from autonomous weapons gaining "minds of their own," AI's power for social manipulation has already proven itself: Brexit, the 2016 U.S. presidential election, the Arab Spring, China's social credit system, and the invasion of privacy that is quickly turning into social oppression.
At present, we are not designing accident-free artificial intelligence, nor are we aligning current systems' behavior with our goals or core values.
Mitigating risk and achieving the global benefits of AI will present unique governance challenges, and will require global cooperation and representation.
More advanced and powerful AI systems will be developed and deployed in the coming years, these systems could be transformative with negative as well as positive consequences.
As AI systems become more powerful and more general, they may become superior to human performance in many domains. If this occurs, it could be a transition as transformative economically, socially, and politically as not just the Fourth Industrial Revolution but a Revolution of Monopolies.
It’s now possible to track and analyze an individual’s every move online, and if a covid-19 passport comes into existence it will be issued by AI.
It’s capable of generating misinformation at a massive scale and if not already we won’t be able to tell what’s true or real online and what’s fake including Covid passports.
If we aren’t clear with the goals we set for AI machines, it could be dangerous if a machine isn’t armed with the same goals we have.
It’s not hard to imagine an insurance company telling you you’re not insurable based on the number of times you were caught on camera talking on your phone.
While there are many uncertainties, we should dedicate serious effort to laying the foundations for future systems’ safety and better understanding the implications of such advances.
The international governance of artificial intelligence (AI) is at a crossroads: should it remain fragmented or be centralized?
Fragmentation will likely persist for now.
Society’s collective governance efforts may need to be put on a different footing.
An important challenge is to determine who is responsible for damage caused by an AI-operated device or service:
It is undesirable from a human rights perspective that there are powerful publicly-relevant algorithmic systems that lack a meaningful form of public scrutiny.
Without proper regulations and self-imposed limitations, critics argue, the situation will get even worse. It is gobbling up everything it can learn about you and trying to monetize it.
There is a real risk that commercial and state use has a detrimental impact on human rights.
Our situation with technology is complicated, but the big picture is rather simple.
All AI should be required by law to have a transparency switch.
The human brain is a magnificent thing that is capable of enjoying the simple pleasures of being alive. Ironically, it’s also capable of creating machines that, for better or worse, become smarter and more and more lifelike every day.
AI will affect what it means to be human, to be productive, and to exercise free will. People will become even more dependent on networked artificial intelligence (AI) in complex digital systems.
Every time we program our environments, we end up programming ourselves and our interactions. AI has massive short-term benefits, along with long-term negatives that can take decades to become recognizable. AI is a tool that will be used by humans for all sorts of purposes, including in the pursuit of power. At stake is nothing less than what sort of society we want to live in and how we experience our humanity. We already face an ungranted assumption when we are asked to imagine human-machine 'collaboration'.
We cannot expect our AI systems to be ethical on our behalf; they won't be, as they will be designed to kill efficiently, not thoughtfully.
For now, AI will continue to concentrate power and wealth in the hands of a few big monopolies based in the U.S. and China. Most people, and parts of the world, will be worse off.
Unfortunately, we are still unripe for the unity of humanity.
We require further development until we develop a sincere desire for humanity's unity, as well as the realization that it is impossible to achieve that goal on our own. If we just bumble into this world of AI unprepared, it will probably be the biggest mistake in human history.
The COVID-19 virus will one day be all but forgotten, but the dystopian systems that the New World Order is right now putting in place will not.
All human comments are appreciated. All like clicks and abuse chucked in the bin.
( A Thirty-minute read)
Do you have a right to believe what you want?
Yes, of course, but we now live in an algorithm-driven world that is blurring boundaries and amplifying the social tensions festering under the surface.
The problem is that we are allowing the building of technologies, that are making consequential decisions about people’s lives.
AI is shaping people’s lives on a daily basis, but it’s an open question whether AI will become a trusted advisor or even a corrupting force.
It's not COVID-19 that will kill us all; it's profit-seeking algorithms.
However, here in this post, my main concern is whether the AI techniques will develop into quantum algorithms that will be totally out of control.
If artificial general intelligence is on the not-too-distant horizon, surely we should be ensuring that it is not owned by any one corporation and that at its core it respects our core values.
To achieve this we surely cannot let wealth be concentrated in fewer and fewer hands, or leave it to the marketplace, or to any world organization that is not totally transparent and self-financing.
We therefore as a matter of grave urgency need a new world organization that vets all technology, and algorithms. (See previous posts)
To say that as long as the algorithms don't go to war with each other and cause something even more difficult to diagnose than a crash on the stock markets they are safe is as naive as saying "It's going to be great."
Algorithms are increasingly in charge of a world that is precious to us all.
Basically, we’re entering the era of machines controlling everything.
If we want to create new different societies with human dignity for all we need to do something about it.
The difficulty of predicting the future is not just a cliche, it’s a basic fact of our existence. Part of the hypothesis of Singularity is that this difficulty is just going to get worse and worse. Yes, creating AGI ( Artificial General Intelligence) is a big and difficult goal, but according to known science, it is almost surely an achievable one.
However, there are sound though not absolutely confident arguments that it may well be achievable within our lifetimes.
If we think in months, we focus on immediate problems such as present-day wars, the Covid crisis, the Donald Trumps, and the economy; if we think in decades, climate, growing inequality, and the loss of jobs to automation all present dangers. But if we look at life in total, science is converging on data processing and AI that is developing itself with algorithms.
When intelligence is approached in an incremental manner, with strict reliance on interfacing to the real world through perception and action, reliance on representation disappears.
It won't be long before we will be unable to distinguish the real world from the virtual world.
Since there is only one real world and there can be infinite virtual worlds the probability that you will inhabit this sole world is zero.
So it won’t matter whether computers will be conscious or not.
It is starting to feel like it's every man for himself. It is possible that right now a global crisis is upon us without us even knowing, and the virus may not be the biggest threat but the crisis that follows, when the everyday goods that keep us alive are gone: food, freshwater, medicine, clothes, fuel. Intelligence is decoupling from consciousness, and sooner rather than later it will be consigned to Google, Facebook, Twitter, smartphones, and the like to make decisions that are impossible to reverse.
You might think that the above is stupid but it won’t be long before we will be witnessing the most unequal societies in history.
——————————
We humans will soon be living with robots that process data without any subjective experiences or consciousness or moral opprobrium.
As we watch robots, autonomous vehicles, artificial intelligence machines, and the like slowly (and sometimes rapidly) permeate our world, it’s not hard to imagine them going from permeating to taking over.
Algorithms are increasingly determining our collective future.
It will only matter what they think about you.
We are already halfway towards a world where algorithms run everything.
This is why many of the issues raised in this post will require close monitoring, to ensure that the oversight of machine learning-driven algorithms continues to strike an appropriate and safe balance between recognizing the benefits (for healthcare and other public services, for example, and for innovation in the private sector) and the risks (for privacy and consent, data security and any unacceptable impacts on individuals).
——————————WHAT CAN GOVERNMENTS DO?
Please regulate AI, this is too dangerous.
Given the international nature of digital innovation, governments should establish audits of algorithms, introduce certification of algorithms, and charge ethics boards with oversight of algorithmic decisions.
Why?
They are bringing big changes in their wake.
From better medical diagnoses to driverless cars, and within central governments where there are opportunities to make public services more effective and achieve long-term cost savings.
However, the Government should produce, publish, and maintain a list of where algorithms with significant impacts are being used within the Central Government, along with projects underway or planned for public service algorithms, to aid not just private sector involvement but also transparency.
Governments should not just simply accept what the developers of algorithms offer in return for data access.
To this end, Governments should be at the forefront of the creation of a “statutory building code”, which describes mandatory safety and quality requirements for digital platforms.
Social networks should be required by law to release details of their algorithms and core functions to trusted researchers, in order for the technology to be vetted.
This Law should enable the enforcement of,
forcing social networks to disclose in the news feed why content has been recommended to a user.
limiting the use of micro-targeting advertising messages.
making it illegal to exclude people from content on the basis of race or religion, such as hiding a spare room advert from people of color.
banning the use of so-called dark patterns – user interfaces designed to confuse or frustrate the user, such as making it hard to delete your account.
labeling the accounts of state-controlled news organizations.
limiting how many times messages can be forwarded to large groups, as Facebook does on WhatsApp.
If we took the premise that people should have a lawful right to be manipulated and deceived, we wouldn’t have rules on fraud or undue influence.
———————————–Today's algorithms and where we are.
As data accumulates, even more so now with Covid-19 track and trace, and with working from home, we have more centralized data depositories and large centralized AI models that work off centralized or decentralized data.
How does the concentration of power affect this balance that impinges on individual liberty?
Our democratic institutions and public discourse are underpinned by an assumption that we can at least agree on things that are true.
Facebook, Twitter, and YouTube create algorithms that promote and highlight information. That is an active engineering decision. Regardless of whether Facebook, Twitter profits from hate or not, it is a harmful by-product of the current design and there are social harms that come from this business model.
Platforms that monetize user engagement have a duty to their users to make at least a minimum effort to prevent clearly identified harms.
We have to focus on the responsibility of platforms.
Because people are being manipulated with objectively false information, there has to be some kind of accountability for platforms.
Currently, these platforms are not neutral environments; there is no common understanding that certain things are manifestly true, with algorithms making decisions about what people see or do not see.
In most Western democracies, you do have the freedom of speech.
But freedom of speech is not an entitlement to reach. You are free to say what you want, within the confines of hate speech, libel law, and so on. But you are not entitled to have your voice artificially amplified by technology.
The way Facebook and other platforms approach this problem is:
We’ll wait and see and figure out a problem when it emerges. Every other industry has to have minimum safety standards and consider the risks that could be posed to people, through risk mitigation and prevention.
There are right now some objectively disprovable things spreading quite rapidly on Facebook. For example, that Covid does not exist and that the vaccine is actually to control the minds of people. These are all things that are manifestly untrue, and you can prove that.
However, algorithms are much more prevalent than that- the Apple Face ID algorithm decides whether you are who you say you are.
Algorithms limit people's worldview, which can allow large population groups to be easily controlled. Social media algorithms tuned to your desires and wants ensure that everything on your feed will be of interest to you, without you knowing what data these algorithms use and what they aim for.
Conclusion.
We are already living with large AI platforms that are monopolizing the fruits of globalization with billions being left behind.
With us accepting this as if natural.
It will be too late when we are asking ourselves: what's more valuable, intelligence or consciousness? Then ask yourselves what happens to society, politics, and daily life when non-conscious but highly intelligent algorithms know us better than we know ourselves.
Whatever view one takes on artificial intelligence ethics, you can rest assured that we will see far more nut cases blowing themselves up and far more wars over finite resources, with vast movements of people.
We have to remember that self-regulation is not the same as having no regulation.
Of course, the loudest arguments for and against something often have one thing in common. They are often made by people with no desire to compromise or understand the other side.
I think self-regulation, in and of itself, contemplates people in power deciding how they will act.
We have to accept from history that we cannot possibly predict all adverse consequences of technology, and that's because it is not just the technology that has adverse consequences, but the context in which it is applied.
It is impossible to regulate AI while thinking about all of its potential adverse consequences. The seeds of harm may be sown at the design stage, the development stage, or the deployment stage.
We don’t have to wait for the technology to become an application before we think of regulating it effectively.
There is a need to strengthen specific provisions to safeguard individual liberty and community rights when it comes to inferred data. There is a need to balance the trade-offs between the utility of AI and protecting privacy and data.
Self-regulation within the AI industry may not be enough, since it may not solve the massive differential between the people developing the technology and the people affected by it. Machine learning is the next step they are aiming for, with the algorithms deciding the input and output completely.
Inherent political and economic power hierarchies between the state and citizens and within the private sector need to be addressed because the promise of globalization is a lie when it comes to AI and prosperity for all.
Algorithms are being used in an ever-growing number of areas, in ever-increasing ways; however, like humans, they can produce bias in their results, even if unintentional. We are all becoming redundant, with biotechnology becoming available only to the richest of us.
I don’t think that AI per se can be regulated because today it is AI, tomorrow it will be Augmented Reality or Virtual Reality, and the day after tomorrow it may be something that we can’t even think of right now.
So it is important to have checks and balances in the use and access to AI that go beyond just technological means.
Why?
Because they are also moving into areas where the benefits to those applying them may not be matched by the benefits to those subject to their ‘decisions’—in some aspects of the criminal justice system, for example.
However, technology companies are not all the same, and nor is technology the only part of the media ecosystem.
It is essential to ensure a whole society response to tackle these important issues.
You could require algorithms to have a trigger to shut off, to stop misinformation or terrorist groups using social media as a recruiting platform.
BUT who defines what counts as misinformation?
It is no longer possible for humans to fact-check so the only course of action is a world Independent Universal Algorithm that is designed to establish fairness.
While “fairness” is much vaguer than “life or death,” I believe it can – and should – be built into all AI using their algorithm.
Therefore every Social network should display a correction to every single person who was exposed to misinformation if independent fact-checkers identify a story as false.
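As a rough, hedged illustration of that proposal (the story identifiers, user names, and exposure log below are entirely made up), the mechanism is simple to state in code: whoever was exposed to a story later flagged as false gets shown the correction.

```python
# A minimal sketch of the correction idea above, using invented data:
# if independent fact-checkers flag a story as false, every user who was
# exposed to it is shown a correction.

exposure_log = {                    # story id -> users who saw it (illustrative)
    "story-42": {"alice", "bob", "carol"},
    "story-43": {"bob"},
}

flagged_as_false = {"story-42"}     # verdicts from independent fact-checkers

def corrections_to_send(exposures, flagged):
    """Return, per flagged story, the users who must be shown a correction."""
    return {story: users for story, users in exposures.items() if story in flagged}

for story, users in corrections_to_send(exposure_log, flagged_as_false).items():
    for user in users:
        print(f"show correction for {story} to {user}")
```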
(Google's search algorithm is more closely guarded than classified secret documents, and Google now owns most of the largest data sets in the world, stored in its cloud.)
——————–
We now have algorithms fighting with each other for supremacy in the market, preying on other algorithms in order to plunder the world's exchanges for profit, to such an extent that they are now effectively in control of capitalism. When someone says algorithmic trading, it covers a vast subject, not just the buying and selling of large volumes of shares automatically, at very high speeds, by unsupervised learning algorithms.
There are four major types of trading algorithms (a small sketch of the first type follows the list):
Execution algorithms
Behavior exploitative algorithms
Scalping algorithms
Predictive algorithms
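As a rough illustration of the first category only, here is a minimal TWAP-style execution sketch that slices one large parent order into smaller child orders spread over time; the order size, time window, and slicing interval are arbitrary example values, not any real trading system.

```python
# A minimal sketch of an execution algorithm (the first category above):
# slice one large parent order into equal child orders spread over time,
# so the market is not hit all at once. All numbers are hypothetical.

def twap_slices(total_shares, minutes, interval_minutes=5):
    """Time-Weighted Average Price style slicing: equal child orders per interval."""
    n_slices = max(1, minutes // interval_minutes)
    base = total_shares // n_slices
    remainder = total_shares - base * n_slices
    # Put the leftover shares on the first slice so the totals reconcile.
    return [base + (remainder if i == 0 else 0) for i in range(n_slices)]

child_orders = twap_slices(total_shares=100_000, minutes=60)
print(child_orders)                     # 12 child orders of roughly 8,333 shares
assert sum(child_orders) == 100_000
```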
Transparency must be a key underpinning for algorithm accountability.
Why?
Because it will make it easier for the decisions produced by algorithms to be explained.
(The ‘right to explanation’ is a key part of achieving accountability and tackling the ethical implications around AI.)
We are only on the outskirts of mind science, which presently knows little about how the mind works, never mind consciousness. We have no idea how a collection of electric brain signals creates subjective experiences; however, we are conscious of our dreams.
99% of our bodily activities take place without any conscious feelings.
As neuroscientists acquired more and more data about the workings of the brain, the cognitive sciences emerged, their stated purpose being to combine the data from numerous disciplines so as better to understand such diverse phenomena as perception, language, reasoning, and consciousness.
Even so, the subjective essence of “what it means” to be conscious remains an issue that is very difficult to address scientifically.
To really understand what is meant by the cognitive neurosciences, one must recall that until the late 1960s, the various fields of brain research were still tightly compartmentalized. Brain scientists specialized in fields such as neuroanatomy, neurohistology, neuroembryology, or neurochemistry.
Nobody was yet working with the full range of investigative methods available, but eventually the very complexity of the subject at hand made that a necessity.
The first problem that arises when examining consciousness is that a conscious experience is truly accessible only to the person who is experiencing it. Despite the vast knowledge we have gained in the field of mathematics and computer science, none of the data processing systems we have created needs subjective experiences in order to function.
None feel pain, pleasure, anger, or love.
These emotions are vanishing into algorithms that have, or will have, an effect not only on how we see the world but also on how we live in it.
If not addressed now, all moral and political values will disappear, turning consciousness into a kind of mental pollution. After all, computers have no minds.
Take images on Instagram: they can affect mental health and body image.
You might say that this has always been the case, and you would be right up to now, but because of Covid-19 governments have given themselves wide-ranging powers to collect and analyze data, without adequate safeguards.
If we are not careful they will have no notion of self, existing only in the present unaware of the past or future, and therefore will be unable to consciously plan for future eventualities.
Unconscious algorithms in our brains rather than conscious images in a mind.
If you are using a smartphone, it indirectly means that you are enjoying AI, knowingly or unknowingly. It cannot be modified unknowingly, nor can it get disfigured or break down in a hostile environment.
We should not be regulating the technology in general but artificial intelligence in particular.
It is so complicated in its behavior that we need to regulate it at the data level.
In lots of regulated domains, there is this notion of post-market surveillance, which is where the developer bears the responsibility of how the technology developed by them is going to be used.
As William Shakespeare wrote in As You Like It:
"All the world's a stage, and all the men and women merely players; they have their exits and their entrances."
Sadly, with AI and machine learning algorithms, no one knows, or for that matter will ever know, when they enter or exit.
Like AI, learning is an ongoing process that takes place throughout all of life. It's the process of moving information from out there to here. Unfortunately, the brain has its own set of rules by which it learns best and, unlike AI, the information doesn't always stick. Together, we have a lot to learn.
Humanity is in contact with humanity.
All human comments appreciated. All like clicks and abuse chucked in the cloud bin.
(FUNDAMENTAL FIFTEEN MINUTE READ. TO CHANGING THE DIRECTION THE WORLD IS GOING IN)
This virus has no vaccine against it; it extracts data about our behaviors and uses it to manipulate us. It flourishes on social media that preys on the most primal parts of your brain.
You sign up to it with the terms and conditions when you get online with Twitter or Facebook, Google, and more.
Companies like Facebook and Google have corporate goals and interests that are backing us into an untenable social framework, where these monopolies own and operate the Internet, outside societal influences, and democratic control, extracting data on a massive scale.
They own your content in precise ways, and they have precise aims for your content.
As well, most of the time, they treat our private lives as raw material for their profit.
Their algorithms are engineered to amplify the most extreme, angry, toxic content, with the intent to maximize data extraction, thereby creating a huge societal asymmetry of knowledge and power, a whole new dimension of inequality.
WE ARE LEARNING THE HARD WAY ABOUT THEIR DESTRUCTIVE EFFECTS: the election of Donald Trump, the Arab Spring, the promotion of popularism, false news on everything from climate change to Covid-19.
WITH THE CURRENT STATE OF AFFAIRS IT IS INTOLERABLE TO ALLOW MISINFORMATION TO BE SPREAD WILLY-NILLY WITHOUT VERIFICATION OF THE TRUTH.
This commercial surveillance has to stop because the boundaries between the virtual and the real world are melting.
We the people should have the right to decide what becomes of data and what remains private. What data is sharable and what purpose data should be used for.
WHY?
BECAUSE OUR PUBLIC DISCOURSE RULED BY SOCIAL MEDIA IS BEING RULED BY A HANDFUL OF PEOPLE FOR THE SAKE OF THEIR PROFIT.
If we don't, in the not-so-distant future you will see algorithms with self-awareness, or worse still, self-aware robots.
Instead of massive concentrations of data being used to manipulate our commercial and political behavior, data must become a critical resource for people and society, to ensure we remove inequalities in society.
There is no room for tweaking any of this to get us where we need to go.
Let’s not delve into whether social media are a boon or bane for society. Instead, let’s appropriate social media and use it as an extension of ourselves to reach out to others, and not as a replacement for our physical offline relationships…
Unfortunately, our political discourse is shrinking to fit our smartphone screens, and it is too late to regulate or pass laws governing the use of algorithms. Only the threat of very large fines will get these platforms, and the people behind them, to concentrate on this in an appropriate way.
Because the formulaic quality of social media is well suited to the banter it appears these days that you’re only as relevant as your last tweet.
WE NEED A FUNDAMENTAL WORLD RESET WITH AI TO TETHER INDUSTRIAL CAPITALISM TO EQUALITY NOT INEQUALITY.
———————–
Facebook is basically an advertising company; they exist to make money, like all companies.
Even though Facebook has joined the WHO and UNICEF to supply accurate information about Covid-19 vaccines, misinformation still finds a way onto social media, where it combines into a whirlpool of misinformation.
For example. A post like this.
10 years from now you will hear commercials that say ” if you took the Covid-19 vaccine between 2020-2021 you may be entitled to compensation”
———————–
The world is experiencing dramatic events that are leaving their mark not only on our society and our economy but on each and every one of us.
On the plus side of AI technology, machine-learning algorithms are helping researchers understand the virus, identify the regions of the world with the highest contagion rates, and forecast the capacity needs of national health systems, with the aim, among others, of minimizing fatalities in the COVID-19 pandemic.
These algorithms can identify patterns of concentration, contagion rates, hidden similarities among cases, and, in general, allow for the aggregation of valuable knowledge that provides a more accurate global picture of the pandemic. More importantly, such algorithms can be used to protect communities that might be more vulnerable. For example, if an elder-care facility is located in an area with a high concentration of contagion, it should receive special attention to prevent unnecessary fatalities.
Prediction algorithms, together with fine-grained simulations can be used to forecast the evolution of the crisis.
For all these outcomes to be reliable, an important precondition is the trustworthiness of the data used with the algorithms.
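As a purely illustrative sketch of what such prediction-plus-simulation can look like, here is a toy SIR-style model; the population, contact rate, and recovery rate are invented rather than fitted to any real data, which is exactly the point about trustworthy inputs: change the assumed parameters slightly and the forecast changes dramatically.

```python
# A toy, discrete-time SIR-style simulation (illustrative only).
# All parameters are hypothetical; a real forecast stands or falls
# on the quality of the data behind them.

def sir_forecast(population, infected, beta, gamma, days):
    """Return the projected number of infected people for each day."""
    s = population - infected       # susceptible
    i = float(infected)             # infected
    r = 0.0                         # recovered
    curve = []
    for _ in range(days):
        new_infections = beta * s * i / population
        new_recoveries = gamma * i
        s -= new_infections
        i += new_infections - new_recoveries
        r += new_recoveries
        curve.append(round(i))
    return curve

# Two runs with different assumed contact rates: the forecast diverges sharply.
print(sir_forecast(population=1_000_000, infected=100, beta=0.3, gamma=0.1, days=30))
print(sir_forecast(population=1_000_000, infected=100, beta=0.2, gamma=0.1, days=30))
```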
——————–
Social media is run by algorithms, programs that spit out the things you see online, working in the background to come up with the things you see.
The interest of the corporation is fueling the content that you’re seeing.
However, when we are talking about algorithms on the internet or social media, we are talking about people's data going into a system and reworked preferences coming out of that data input, so you see the same sorts of things again and again as you express your preferences online.
Clicking on Google, YouTube, Twitter, and Facebook, which are reinforcement systems based on existing preferences, means giving anthropomorphic agency to something that really doesn't make decisions in the same way that we do.
Are they giving us beneficial moments, or making actual choices for us?
The question is if algorithms just show us what we want, can they push us in different directions.
Think about it in terms of what the algorithm wants and how it’s treating us by personifying the algorithm.
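A minimal sketch of that feedback loop, with invented topics and a deliberately crude scoring rule rather than any real platform's system, shows how each click feeds back into what gets shown next and narrows the feed over time.

```python
# A minimal sketch of a preference-reinforcing feed (illustrative only).
# Topics, posts, and the scoring rule are invented, not any real platform.

from collections import Counter

click_history = Counter()              # topic -> how often the user engaged

def rank_feed(candidate_posts):
    """Rank posts by how much the user has already clicked on that topic."""
    return sorted(candidate_posts, key=lambda p: click_history[p["topic"]], reverse=True)

def record_click(post):
    click_history[post["topic"]] += 1  # engagement feeds straight back into ranking

posts = [{"id": 1, "topic": "politics"},
         {"id": 2, "topic": "sport"},
         {"id": 3, "topic": "politics"}]

record_click(posts[0])                           # one political click...
print([p["id"] for p in rank_feed(posts)])       # ...and political posts now rank first
```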
To sum up.
They are inescapable, embedded constantly in individuals' online lives, 'making autocratic decisions ... to produce a single output', agonistic in influencing individuals, and becoming a key site of power in the contemporary mediascape with the ability to 'shape social and cultural formations'.
To date, we as individuals have granted algorithms the 'almost unimaginable power' to determine what we see, where we spend, and how we perceive.
Their power seems to be located in the mechanics of the algorithm.
However, it is in the hands of the individual to modify their opinions and perspectives to what has been put in order for them.
Every algorithm falls under a certain class.
Basically, they are (a short sketch contrasting two of them follows the list):
1) Brute force.
2) Divide and conquer.
3) Decrease and conquer.
4) Dynamic programming.
5) Greedy algorithm.
6) Transform and conquer.
7) Backtracking algorithm.
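As a small illustration, here are two of those classes side by side, brute force and divide and conquer, both answering the same question on a sorted list; the values used are arbitrary examples.

```python
# Brute force versus divide and conquer: is a value in a sorted list?
# The list and target below are arbitrary example values.

def brute_force_search(sorted_values, target):
    """Brute force: check every element, one by one."""
    for value in sorted_values:
        if value == target:
            return True
    return False

def binary_search(sorted_values, target):
    """Divide and conquer: halve the search space at every step."""
    lo, hi = 0, len(sorted_values) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if sorted_values[mid] == target:
            return True
        if sorted_values[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return False

values = list(range(0, 1_000_000, 3))
print(brute_force_search(values, 999_999))   # up to hundreds of thousands of comparisons
print(binary_search(values, 999_999))        # about twenty comparisons
```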
All human comments appreciated. All like clicks and abuse chucked in the bin.
Shopping used to be a social activity, but the Covid-19 pandemic, by driving up demand for online shopping, has created a perfect storm for retailers, forcing them to radically rethink what they need to do to remain profitable.
It is said that with the internet we are more connected rather than isolated.
With smartphones this is, verbally, true, but we are in a world that is disconnecting itself from genuine social contact onto platforms run by algorithms for profit, to the detriment of sensibility and the sustainability of the planet we live on.
(An algorithm is a series of instructions telling a computer how to transform a set of facts about the world into useful information. The facts are data, and the useful information is knowledge for people, instructions for machines, or input for yet another algorithm.)
We are now living in a world where algorithms, or software buying agents, “go shopping” on our behalf.
Every piece of technology that you touch involves many algorithms. They live in our computers and dictate our digital lives, invisibly existing in the abstract.
While they fully automate the shopping process, they are capable of keeping customers stuck to one retailer and one product.
They can make thousands of calls or website visits a day.
It’s why people’s Google Search results will look different when they’re looking up things in different parts of the world, or why the ads that follow you across the web are different from your friends’. You never really see the complexities at work, simply the results.
This sounds very much like a quasi-monopoly.
In a world where algorithms make shopping decisions on our behalf, all bets are off that shopping will ever return to the high street.
Humans stand no chance against bots when trying to access products and services that might be in demand.
The longer the lockdowns- the more driven demand increasingly via smartphones.
The emerging Economy of Algorithms, where software agents act on our behalf, has the potential of dramatically changing the way we live, work, and think.
We need clear rules that govern the behavior of software buying agents.
We also need a coordinated approach for software buying agents to disclose who they are, in situations where they can be confused for a human.
We need regulations governing profit-seeking algorithms with control of algorithmic trading. With the right mechanisms for the protection of competition in the markets, regulating access to products and services, and enforcing minimum quality standards of algorithms that shop on our behalf.
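A minimal sketch of such a software buying agent, with hypothetical vendors, prices, and an identifying header, shows both the price comparison and the kind of self-disclosure proposed above.

```python
# A minimal sketch of a software buying agent (all vendors, prices, and the
# User-Agent string are hypothetical): compare offers and pick the cheapest,
# while identifying itself as a bot rather than a human shopper.

def cheapest_offer(offers):
    """Compare prices across vendors and pick the cheapest in-stock offer."""
    in_stock = [o for o in offers if o["in_stock"]]
    return min(in_stock, key=lambda o: o["price"]) if in_stock else None

# In a real agent these offers would come from HTTP requests sent with an
# honest identifier, e.g. headers={"User-Agent": "shopping-bot/0.1 (automated)"}.
offers = [
    {"vendor": "store-a", "price": 19.99, "in_stock": True},
    {"vendor": "store-b", "price": 17.49, "in_stock": True},
    {"vendor": "store-c", "price": 15.00, "in_stock": False},
]

print(cheapest_offer(offers))   # the store-b offer: cheapest and in stock
```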
Shopping ads are known to produce well over 85% of retail paid Google search clicks. As such, they routinely produce a 400 to 1000% return on cash spent on ads.
Google Shopping is about getting your product types featured in the Product Ads on Google's search results pages.
Today the majority of stock market transactions are fully automated and executed by algorithms.
AMAZON's net profit roughly tripled to $6.33 billion. Its advertising business reported $5.4 billion in sales, a 51% jump.
The real reasons that online shopping is replacing conventional shopping habits are:
Reduced overheads expand your market beyond local customers.
With online shopping, you can compare prices from hundreds of different vendors.
No pressure sales.
There are no fixed hours to shop.
One does not have to get in a car, find or pay for a parking spot, get clamped, pay parking fees, pay tolls, get mugged, spend hours in traffic jams, or have a nice day.
Online stores want to keep you as a customer, so they may offer deep discounts, rewards, and cashback if you sign up for their newsletters.
The downside however is you can’t try things on. None of them offer the on-the-spot, take-home advantage that a physical store does.
And shipping costs are sometimes even more than the cost of what you buy. In-store shopping has no need to charge extra for shipping.
Online sales now account for around one-quarter of the total retail market.
There is a clear need for greater speciation, specialization, and differentiation because the consumer is in the driver’s seat, enabled by technology to remain constantly connected and more empowered than ever before to drive changes in shopping behavior in both the physical store and digital retail landscape.
Retailers are still competing with each other but also face new competitors who have different operating models and cost bases and this rate of change is showing little sign of slowing.
The greatest danger that remains with online shopping is privacy and security.
These are legitimate concerns for any online shopper. Your payment information could get stolen from the site or someone who works there could copy your bank details and use them later on their own purchases. It’s also hard to immediately recognize whether an online store is real or just there to scam you.
There are tons of online shopping sites where you can buy everything from plane tickets and flat-screen TVs to food, clothes, furniture, office supplies, movies, and lots more.
Pay attention to whether or not the site uses HTTPS.
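As a small how-to sketch using only Python's standard library (the URLs are placeholders), checking the scheme is a one-line test.

```python
# A tiny check for whether a URL uses HTTPS; the example URLs are placeholders.

from urllib.parse import urlparse

def uses_https(url):
    """True if the URL's scheme is https (encrypted), False otherwise."""
    return urlparse(url).scheme == "https"

print(uses_https("https://example.com/checkout"))   # True
print(uses_https("http://example.com/checkout"))    # False: think twice before paying
```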
Artificially intelligent algorithms will know everything about you: where you live, where and what and when you buy, how often, your likes and dislikes, your bank account, your wife, your children, your friends, infected or not.
If you want a life go and get it.
All human comments appreciated. All like clicks and abuse chucked in the bin.
With the current pandemic and economic depression, algorithmic entanglement will stratify the populations of countries with barely a question asked.
Now is the time to challenge and examine their underpinnings, by introducing a software program to examine every algorithm in order to establish whether it is a friend or foe. They must be audited yearly to log and access the contents of their programs, and to be issued with health certificates.
It is not possible to go back to test or analyze why decisions are made by algorithms.
They are learning from the environment surrounding them, and once they learn we have no way of knowing, to any degree, what rules and parameters they are following, at which point we have no way of controlling them or knowing how they will react with other algorithms.
You only have to look at the stock exchanges, where they are already trying to outwit each other.
So you can be certain that there is going to be a stock exchange crash caused not by the economic depression but rather by algorithmic greed for profit.
At the moment it seems that while they are out of sight they are out of mind.
But as we are going to see with any covid-19 vaccine and its distribution, algorithms will create their own rules and inevitably polarize society as a whole.
Where the decision is taken by an algorithm (as to who gets vaccinated, or how safe it is when the algorithm could be hacked), accountability is at stake, and apportioning responsibility to any particular segment of code will be almost impossible.
Because they have no knowledge of what they are even being judged on, they will look for supremacy over each other.
Neither the companies using them nor the people making them take responsibility for how they can wreck lives and reinforce stereotypes.
The people making the algorithms don’t take responsibility for users of their code and the people using algorithms place responsibility on the creators.
Self-regulation is no longer viable, because the larger the environment into which they embed themselves, the more unpredictable they will become.
Indeed software engineers will soon be extinct.
America’s 45th president likes to tweet.
He does this because he sees it as a way to bypass the ‘dishonest media’. Regardless of how you may want the world to be, the learnings from bulk text feeds are as close as we can really get to how the world actually is.
We must open our eyes to the power of algorithms and how dangerous they can be when unchecked.
These issues are not strange. Software can play you for a fool; we're still in the early stages, but algorithms are already present in our lives, making idiotic shopping recommendations, misclassifying pictures, and doing other senseless things.
History is not a predictor of the future, but knowledge can be: we can't rely solely on mining historic data to draw conclusions; we need to incorporate expert knowledge.
We are Tick Tocking and Clicking our way to no return.
The present Covid-19 pandemic might be warping our sense of reality; however, there is another pandemic that is shaping, and will shape, our future reality.
In many ways, things are better than they were, thanks to technology.
We can work from anywhere because we have the Internet and we have Zoom and all of those platforms.
If you are able to say technology, on the whole, has done well, it probably means you’re in a fairly privileged position.
There’s still a huge digital divide.
Even so, there are billions of people who don't have access to the Internet.
On paper, algorithms sound like the pinnacle of efficiency, but as they've become more ubiquitous a gap has opened between potential and reality, and that difference must be recognised for the survival of democracy and for the forthcoming distribution and administration of any Covid-19 vaccine worldwide.
The reality is that algorithms will be used to distribute the Covid-19 vaccination and to decide who gets it.
When it arrives, algorithms will continue to reflect the biases that have been, and are being, trained into machines that are learning a skewed representation of the world.
Some will say that data is neutral, that it's just numbers, just data. But the past dwells within our algorithms, and the flaws in our technology come from the very information the algorithms take in.
I am not just talking about the U.S. presidential election in a few days' time.
We have already seen artificial intelligence being used in voting and politics, showing how these systems extend well beyond the realm of computer vision.
If we're defining success by how it has looked in the past, and the past has been one where men like Donald Trump were given the opportunity to tweet falsehoods, spreading them with the aid of Facebook and others, is it any wonder who gets hired or fired?
Do you get that loan? Do you get insurance? Do you and I pay the same price for the same product purchased on the same platform?
Automating inequality.
Before a human looks at your resume, it gets vetted by algorithms written by software engineers who are part of the system (and without changing the system itself, the engineer is still going to reproduce algorithmic bias and algorithmic harms).
Any algorithmic tool that is intended to be used has to be verified for nondiscrimination before it is even adopted.
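One hedged way to picture such a verification step is a simple selection-rate comparison along the lines of the "four-fifths rule" used in employment testing; the screening outcomes and group names below are entirely made up for illustration, and this is only one of several possible tests:

```python
def selection_rates(outcomes):
    """outcomes maps a group name to a list of 1 (selected) / 0 (rejected)."""
    return {group: sum(results) / len(results) for group, results in outcomes.items()}

def passes_four_fifths(outcomes, threshold=0.8):
    """Flag possible disparate impact when a group's selection rate falls
    below `threshold` times the highest group's rate."""
    rates = selection_rates(outcomes)
    best = max(rates.values())
    return {group: (rate / best) >= threshold for group, rate in rates.items()}

# Invented screening outcomes, for illustration only.
screening = {
    "group_a": [1, 1, 0, 1, 1, 0, 1, 1],   # selection rate 0.75
    "group_b": [1, 0, 0, 0, 1, 0, 0, 1],   # selection rate 0.375
}
print(passes_four_fifths(screening))  # group_b fails the four-fifths ratio here
```

A check like this only catches one crude form of bias; it does not make a vetting algorithm fair, it merely flags when something is obviously wrong.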
We now have an AI system, right, that can classify skin cancer as well as the top dermatologists; but to change what AI is learning we would have to change society, and instead what has been built is going to be thrust into our lives by the inevitable economic depression.
So a Covid-19 vaccine is going to transfer real power into the world of data, and we can't fight a power we don't see and don't know about.
We should be warier of their power. People don’t need to understand something to do it. The algorithm does it for them.
They are increasingly determining our collective future.
We are already halfway towards a world where algorithms run everything.
With the current pandemic, the question is not who will survive, but how and at what cost, not just to economic systems but to our hard-earned freedoms.
Times will be rough as society tries to strike an appropriate balance over who gets the jab.
When algorithms involve machine learning (like track and trace), they 'learn' patterns from 'training data', which may be incomplete or unrepresentative of those subsequently affected by the resulting algorithm.
Modern algorithm developers are focusing on creating algorithms that learn and develop with the data they encounter. Machine learning is the next step they are aiming for, with the algorithms deciding the input and output completely.
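As a minimal sketch of how unrepresentative training data might be spotted before such an algorithm is deployed, the check below compares the group mix in a training set with the population the system will actually be applied to; the group labels, numbers, and tolerance are all invented for illustration:

```python
from collections import Counter

def group_shares(labels):
    """Fraction of training records belonging to each group."""
    counts = Counter(labels)
    total = sum(counts.values())
    return {group: n / total for group, n in counts.items()}

def under_represented(training_labels, population_shares, tolerance=0.5):
    """List groups whose share of the training data is less than
    `tolerance` times their share of the affected population."""
    train = group_shares(training_labels)
    return [group for group, pop_share in population_shares.items()
            if train.get(group, 0.0) < tolerance * pop_share]

# Invented example: the training data skews heavily towards one group.
training = ["urban"] * 90 + ["rural"] * 10
population = {"urban": 0.6, "rural": 0.4}
print(under_represented(training, population))  # ['rural']
```

A model trained on the skewed data above would be systematically less reliable for the under-represented group, which is exactly the problem the preceding paragraph describes.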
One of the world’s most used algorithms right now is the search engine algorithm of Google. It determines what people find in their internet searches and is the basis of the entire SEO industry, where people try to ensure that they show up in the top spot.
However, algorithms are much more prevalent than that: the Apple FaceID algorithm decides whether you are who you say you are.
Social media algorithms, tuned to your desires and wants, ensure that everything on your feed will be of interest to you, without you knowing what data these algorithms use or what they aim for.
(Google's search algorithm is more closely guarded than classified secret documents.)
It is very convenient to follow the advice of algorithms if you're high-frequency trading on the stock exchange, but some algorithms limit people's worldview, which can allow large population groups to be easily controlled.
This is why many of the issues raised in this post will require close monitoring, to ensure that the oversight of machine learning-driven algorithms continues to strike an appropriate and safe balance between recognizing the benefits (for healthcare and other public services, for example, and for innovation in the private sector) and the risks (for privacy and consent, data security and any unacceptable impacts on individuals).
Algorithms are being used in an ever-growing number of areas, in ever-increasing ways; however, like humans, they can produce biased results, even unintentionally.
They are bringing big changes in their wake; from better medical diagnoses to driverless cars, and within central governments where there are opportunities to make public services more effective and achieve long-term cost savings.
However, the Government should produce, publish, and maintain a list of where algorithms with significant impacts are being used within the Central Government, along with projects underway or planned for public service algorithms, to aid not just private sector involvement but also transparency.
Governments should not just simply accept what the developers of algorithms offer in return for data access.
This is now an urgent requirement because partnership deals are already being struck without the benefit of comprehensive national guidance for this evolving field.
Given the international nature of digital innovation, governments should establish audits of algorithms, introduce certification of algorithms, and charge ethics boards with oversight of algorithmic decisions.
Governments should identify a ministerial champion to provide government-wide oversight of such algorithms, where they are used by the public sector, and to co-ordinate departments’ approaches to the development and deployment of algorithms and partnerships with the private sector.
Transparency must be a key underpinning for algorithm accountability.
Why?
Because it will make it easier for the decisions produced by algorithms to be explained.
(The 'right to explanation' is a key part of achieving accountability and tackling the ethical implications around AI; see the sketch after this passage.)
Why?
Because they are also moving into areas where the benefits to those applying them may not be matched by the benefits to those subject to their ‘decisions’—in some aspects of the criminal justice system, for example.
Because algorithms using social media datasets, like 'big data' analytics, need data to be shared across previously unconnected areas to find new patterns and new insights.
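To make the 'right to explanation' mentioned above a little more concrete, here is a minimal sketch for the simplest possible case, a linear scoring rule, where a decision can be broken down into per-feature contributions; the weights, threshold, and applicant data are invented for illustration, and real systems are rarely this transparent:

```python
def explain_decision(weights, applicant, threshold):
    """Break a linear score into per-feature contributions so the
    decision can be explained to the person it affects."""
    contributions = {f: weights[f] * applicant.get(f, 0.0) for f in weights}
    score = sum(contributions.values())
    decision = "approved" if score >= threshold else "declined"
    return decision, score, sorted(contributions.items(), key=lambda kv: -abs(kv[1]))

# Invented loan-scoring weights and applicant, for illustration only.
weights = {"income": 0.002, "years_at_address": 0.5, "missed_payments": -2.0}
applicant = {"income": 1800, "years_at_address": 3, "missed_payments": 2}
decision, score, reasons = explain_decision(weights, applicant, threshold=2.0)
print(decision, round(score, 2))          # declined 1.1
for feature, contribution in reasons:
    print(f"  {feature}: {contribution:+.2f}")
```

If the person affected can see that two missed payments outweighed everything else, the decision can at least be challenged; with a black-box model, even that much is impossible.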
It's not Covid-19 that will fuck us all, it's profit-seeking algorithms.
All human comments appreciated. All like clicks and abuse chucked in the bin.