How Can Artificial Intelligence Be Dangerous?
Since the end of 2022, there have been many concerns over Artificial Intelligence, or AI, playing havoc and causing sudden changes in the world as we know it. The trigger was ChatGPT, launched by the US-based company OpenAI, which threw this AI resource open to the public. Over the months, ChatGPT has evolved, and now, as I write this article, we have ChatGPT-4, with newer variants expected.
At the same time, other companies have launched their own AI-based resources. These include Adobe, which has come up with an AI design resource; DeepBrain AI, for making videos; Sora, which is yet to be released to the public; and Rytr, which helps people write books and other text with the help of Artificial Intelligence. Then, we have AI-based voice generators, image generators, and so on.
In the coming days and months, we'll see the launch of even more advanced AI-based resources that can perform almost every task requiring human intelligence.
This brings us to the question: can AI be dangerous? AI isn't the bogeyman that everyone fears. However, there are genuine concerns over AI being abused for the wrong purposes, and that is what could make it dangerous.
In this article, I will discuss the various areas where AI could prove dangerous. First, however, I will allay your fears that AI will dramatically alter the world.
Reasons Not to Fear AI
There are several reasons why we needn't fear AI. While respected research and consulting firms have made doomsday predictions that AI could wipe out as many as 800 million jobs, these effects of AI are grossly misinterpreted.
At the outset, let's clarify one thing: AI can't and won't be able to replace human intelligence in any way whatsoever. And those screaming from the rooftops that AI will upend the way we work or live are nothing but exhibitionists of sorts pursuing some personal agenda. I have strong reasons for making such a pointed comment, and they are grounded in facts.
Let’s dispel these fears and myths that AI is dangerous, step by step.
National Security Interests
According to a Goldman Sachs report, AI could affect some 300 million US and European jobs. This doesn't necessarily mean unemployment. It could mean equipping workers with AI skills or redeploying them to jobs that don't require AI or that AI can't do. Yet, millions of women and men could indeed lose their jobs.
Synergia Foundation and other organizations report that massive unemployment is a national security threat to any country, particularly the US and Europe. It leads to higher crime rates, terrorism, and other undesirable effects that the US and its European counterparts would surely wish to avoid.
Bear in mind that the US government nips in the bud anything it views as a threat to its national security. If AI proves to be such a threat, one can only guess what steps the US and other governments will adopt to safeguard their interests.
Regulations over AI
As AI gets more sophisticated, we can expect almost every country to regulate its use and develop strict laws. Many countries are already drafting such laws, which can be amended anytime the technology evolves. While some countries permit limited uses of AI, others could restrict its uses to select groups only, such as industries and financial organizations.
Regulations could limit individuals' autonomy in using AI-based tools and resources. Using or owning AI resources without proper authorization could be deemed illegal. Meanwhile, governments worldwide are biding their time, watching how AI develops through mid-2024 before enacting any laws.
Cost of AI
The cost of AI resources is also a deterrent to their widespread proliferation. For example, a ChatGPT-4 subscription costs about $20 per month, while tools for specific applications such as video or image creation and voice generation cost more.
This cost puts AI-based resources out of the reach of ordinary people. People will spend on AI resources only if the expense is proportionate to their income or fulfills a genuine interest.
The cost factor is further complicated because all AI models need frequent updates to stay competitive in the market. This spiraling cost of upgrades would eventually be passed on to end users, sending monthly or annual subscription prices skyrocketing.
Aversion to AI Content
A glance at job posts seeking content writers on freelance portals such as FlexJobs.com or Upwork.com shows that clients want original content written by humans with human intelligence. Most buyers specify this requirement explicitly. This should be a solace to content creators, since AI won't take away their jobs.
Instead, the focus would be on improving output by expediting research through AI-based tools. Another reason is that book lovers are averse to reading AI-generated or AI-written books, regardless of how compelling the story is.
That's because every AI writer, including ChatGPT-4 and its variants, only looks backward at history or at what's been written earlier. Hence, AI-written books lack fresh and futuristic perspectives and themes.
Outdated Data
Coming back to the point, all AI models work on outdated data, and updating them often takes several weeks. Take, for instance, the latest version of ChatGPT-4: its training data extends only up to April 2023. Hence, tasks performed with this AI tool don't give you the freshest information; ChatGPT-4's algorithms always delve into the past.
As a result, your research could be incomplete or even factually incorrect at times, and anyone relying on it might end up with wrong data, wasting precious time and effort.
Can AI Be Dangerous?
So far, we've seen various reasons why AI will enjoy only limited success and will never be able to replace human intelligence. Used prudently and with proper judgment, however, AI can improve our lives.
Yet, there are areas where AI could prove dangerous.
Banking & Finance Sector
AI could prove dangerous for the financial sector. It can enable fraudsters to create fake profiles, names, or fictitious entities to swindle billions of dollars in scams. Thankfully, large-scale AI-driven fraud remains rare so far, perhaps because traditional methods are still paying off. Meanwhile, banks and financial institutions have adopted AI-based fraud detectors.
However, whether these detectors would be effective against fraud committed using novel or unknown techniques remains to be seen. Remember that scammers always find and exploit loopholes in existing systems. Since AI detectors are upgraded only periodically, they might fail to catch the latest modus operandi of a fraudster or a group of them.
Autonomous Weapons
Autonomous weapons are those programmed to seek out and kill their targets once deployed. AI could make these weapons deadlier. We can well imagine what would happen if a rogue state or terrorist group converted conventional weapons into autonomous ones using whatever AI programs and technology they could obtain.
It is well known that large terrorist organizations have billions of dollars in funding and networks of ghost companies. They could afford to buy AI technology and use it to unleash terror or aggression against another country. We have already seen some rogue countries develop highly sophisticated autonomous weapons to threaten their neighbors and the world.
Stock Markets
Left unregulated, AI poses a significant threat to stock markets. Any rogue stock trader with knowledge of AI and access to the right AI resources can wreak havoc on the market, programming algorithmic buy and sell orders to execute at specific times.
Such sudden large-volume trades can throw stock markets in one country, and worldwide, into a tailspin, causing investors to lose billions of dollars. The same applies to currencies, commodities, and other speculative markets, where AI-based algorithms can be trained to manipulate trading quickly.
The main threat arises from the speed at which AI algorithms execute trades: AI processes data, such as a drop in stock prices, and places orders before a human mind can even register the fall.
Literature
Literature could be the greatest casualty of AI-based writing resources. Until now, literature was the sole domain of people genuinely interested in writing who possessed the inherent skill to weave stories. For example, authors such as Enid Blyton, JK Rowling, Daniel Yergin, and Robert Ludlum relied on their imagination, creativity, language, and style to write their books.
Unfortunately, AI-based resources such as ChatGPT-4 enable anyone without basic knowledge of literature or language to produce a book within a few hours. The problem is compounded when such writers publish their AI-written books through self-publishing platforms.
Deepfakes and Porn
Deepfake videos and images of several celebrities and prominent personalities, made using AI, have already surfaced on the Internet. Unless regulated and used responsibly, AI-based resources pose a serious threat to everyone, regardless of age.
AI-based image and video creators can be used to produce deepfake pornography, including material depicting minors. Fraudsters can use fake videos and audio clips to blackmail innocent people. Furthermore, AI-generated audio and video will badly affect the cinema industry, diminishing the importance of actors' skills.
Wrap Up
While there are several more areas where AI can be dangerous, one must remember that the world progresses only through the adoption of the latest technologies. We can be confident that while millions worldwide worry about AI's adverse effects on their lives, the authorities concerned and the companies developing AI resources will take adequate measures to ensure the technology isn't misused.
Similar fears were expressed when the Internet and email began proliferating worldwide. Yet most of us use the Internet for legitimate purposes, while only a handful of criminals put it to wrong uses.