AI systems could harm our ability to think.

AI algorithms could disrupt our ability to think

In a report to Congress this year, the US National Security Commission on Artificial Intelligence concluded that AI is "world-altering." It is mind-altering too, a dynamic becoming real in the 2020s as the AI-powered machine increasingly becomes the mind. As a society, we are learning to rely on AI for so many things that we may become less inquisitive and more trusting of the information AI-powered computers deliver. In other words, we may already be outsourcing our thinking to machines and, with it, losing a measure of our agency.

The trend toward more AI use shows no signs of slowing. According to the Stanford Institute for Human-Centered Artificial Intelligence, private investment in AI reached an all-time high of $93.5 billion in 2021, more than double the previous year, and the number of patent filings related to AI innovation in 2021 was 30 times the number filed in 2015. The AI gold rush is in full swing. Fortunately, much of what AI does will be positive, as shown by its role in tackling scientific challenges ranging from protein folding to Mars exploration and even animal communication.

Most AI applications are built on machine learning and deep learning neural networks, which require large datasets. For consumer applications, this data is harvested from human choices, preferences, and selections on everything from clothing and books to ideology. From that data, the applications infer patterns, allowing them to make educated guesses about what we might need, want, or find most interesting and engaging. The result is a range of genuinely useful tools, such as recommendation engines and chatbot support available around the clock. Many of these apps appear useful or, at the very least, harmless.
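To make that pattern concrete, here is a minimal sketch of item-based collaborative filtering, one common way a recommendation engine turns past choices into educated guesses. The data, names, and numbers are my own illustration, not drawn from any production system:

```python
# Minimal item-based collaborative filtering sketch (illustrative only).
# Rows are users, columns are items; 1 means the user chose that item.
import numpy as np

ratings = np.array([
    [1, 1, 0, 0],   # user 0 chose items 0 and 1
    [1, 0, 1, 0],   # user 1 chose items 0 and 2
    [0, 1, 1, 1],   # user 2 chose items 1, 2, and 3
])

# Cosine similarity between item columns: items chosen by the same
# users are treated as "similar".
norms = np.linalg.norm(ratings, axis=0)
item_sim = (ratings.T @ ratings) / np.outer(norms, norms)

def recommend(user: int, top_n: int = 2) -> list:
    """Score unseen items by their similarity to items the user chose."""
    chosen = ratings[user]
    scores = item_sim @ chosen
    scores[chosen > 0] = -np.inf      # don't re-recommend seen items
    return list(np.argsort(scores)[::-1][:top_n])

print(recommend(user=0))  # items most similar to user 0's past choices
```

Nothing here knows what the items are or whether they are any good; the system simply projects past behavior forward, which is exactly why its suggestions feel uncannily fitting and subtly narrowing at the same time.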

AI-powered apps that provide driving directions are an example many of us can relate to. They are unquestionably useful for keeping people from getting lost. I've always been adept at following directions and reading maps; after driving somewhere once, I can find it again without help. But now I use the app on almost every drive, even to places I've been before. Maybe I'm not as confident in my sense of direction as I thought; maybe I just want the company of a soothing voice telling me where to turn; or maybe I'm becoming too reliant on the app for guidance. I'm starting to fear that without it, I couldn't find my way.

We should pay more attention to this not-so-subtle trend of growing reliance on AI-powered apps. We already know they compromise our privacy. If they also erode our human agency, the ramifications could be serious. If we trust an app to find the quickest route between two points, we're more likely to trust other apps too, and in the not-too-distant future we may increasingly live on autopilot, just like our cars. If we automatically absorb whatever news feeds, social media, search, and recommendations serve us, potentially without questioning it, will we lose the ability to form our own thoughts and interests?

The pitfalls of groupthink in the digital age

How else to explain the QAnon conspiracy theory, which holds that Satan-worshiping pedophiles in the US government, industry, and media are out to harvest children's blood? The theory began with a series of posts on the 4chan message board, which spread quickly across other social media platforms thanks to recommendation engines. Machine-learning analysis of the posts' writing style suggests they were most likely written by a South African software developer with little familiarity with the United States. Nonetheless, the number of believers continues to grow, and the theory now rivals some mainstream religions in popularity.

According to a report in the Wall Street Journal, as the brain becomes more reliant on phone technology, the intellect weakens. The same is likely true of any information technology that delivers content to us without requiring us to learn or discover it on our own. If so, AI could create a self-reinforcing loop that simplifies our options, satisfies our immediate needs, diminishes our intelligence, and locks us into an existing worldview by serving content tailored to our individual likes and biases.

In his new book The Loop, NBC News contributor Jacob Ward argues that AI apps have ushered us into a new paradigm, one in which the same dance repeats: "Data is sampled, results are processed, a condensed list of options is presented, and we choose once more, repeating the cycle." He continues: "Using AI to make decisions for us will end up rewiring our brains and our society... we're trained to accept what AI tells us."
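Ward's cycle is easy to caricature in code. The toy simulation below is my own sketch, not from the book: at each step it shows only the two most-clicked topics and records the pick, so early choices snowball and the candidate pool collapses:

```python
# Toy simulation of "the loop": recommendations drawn from past choices
# feed back into the history, narrowing what gets shown next.
import random
from collections import Counter

topics = ["politics", "science", "sports", "music", "travel"]
history = Counter({t: 1 for t in topics})   # start with no real preference

random.seed(42)
for step in range(50):
    # "Data is sampled, results are processed": rank topics by past clicks
    shortlist = [t for t, _ in history.most_common(2)]
    # "a condensed list of options is presented, and we choose once more"
    choice = random.choice(shortlist)
    history[choice] += 1                    # the cycle repeats

print(history)  # a couple of early picks now dominate everything shown
```

Run it and two early favorites end up accounting for nearly every impression, even though the simulated user started with no preference at all. That mechanical narrowing is the rewiring Ward warns about.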

The cybernetics of conformity

Ward argues that our alternatives narrow because AI presents us with options similar to those we have favored in the past, or that we are most likely to prefer based on that past. Our future thus becomes more determined. Apps meant to help us make better decisions could effectively freeze us in time, a kind of mental homeostasis. "The world is such and such, or so-and-so simply because we tell ourselves that that is the way it is," Don Juan explains to Carlos Castaneda in A Separate Reality.

"The human brain is built to accept what it's given," Ward explains, "especially if what it's told matches our expectations and saves us unpleasant mental work." The positive feedback loop created by AI algorithms regurgitating our desires and preferences adds to the existing information bubbles, reinforcing our existing views, increasing polarisation by making us less open to different points of view, less able to change, and transforming us into people we didn't intend to be. This is essentially conformance to cybernetics, in which the machine transforms into a mind while adhering to its own internal algorithmic programming. As a result, we will become both more predictable and vulnerable as individuals and as a nation to the use of digital tools.

Of course, it isn't really the AI doing this. Technology is merely a tool, used to achieve some end: selling more shoes, persuading people of a political viewpoint, controlling the temperature in our homes, or communicating with whales. Intent enters through its application. To keep our agency, we must insist on an AI Bill of Rights, as recommended by the US Office of Science and Technology Policy. More importantly, we need, as quickly as possible, a regulatory framework that safeguards our personal data and our right to think for ourselves. The European Union and China have taken steps in this direction, and the current US administration is moving the same way. Clearly, now is the time for the United States to take this issue seriously, before we become non-thinking automatons.


