AI Chatbots Spark Fears of Radicalization as Experts Call for Regulation

As concerns around the rapid development of artificial intelligence (AI) grow, experts are raising the alarm about a specific threat: AI-powered chatbots used for radicalization.

This comes amidst plans by a social media platform known for its far-right user base to develop chatbots based on controversial figures like Adolf Hitler and Osama bin Laden.

The release of ChatGPT in 2022 marked a turning point in AI development, but concerns quickly emerged regarding its potential to promote far-right extremism.

These worries now appear grounded in reality, with platforms like Gab, known for its association with white supremacists and neo-Nazis, actively developing AI chatbots modeled on extremist figures.

In January 2023, Gab’s CEO, Andrew Torba, declared his company’s involvement in the “AI arms race,” stating that “Christians must enter” the field, according to a Rolling Stone report.

He further criticized mainstream AI tools for harbouring a “liberal/globalist/Talmudic/Satanic worldview” and vowed to build a system upholding “historical and biblical truth.” This vision included creating chatbots inspired by figures like Hitler and bin Laden.

Torba, referencing conversations with ChatGPT, claimed that such tools “scold” users for posing controversial questions and “shove liberal dogma down your throat, trying to program your mind.” Rolling Stone, citing a preview, reported the creation of right-wing chatbots like “Uncle A,” who impersonates Hitler and denies the Holocaust, calling it a “preposterous” lie.

This development has drawn strong reactions from experts. Adam Hadley, founder and executive director of Tech Against Terrorism, expressed serious concern in The Times, stating, “It would appear that the potential ‘weaponization’ of chatbots is well underway and now presents a clear security threat.” He emphasized the ability of such tools to “radicalize, spread propaganda, and disseminate misinformation.”

These concerns resonate with the findings of a survey conducted by the Anti-Defamation League in May 2023. The survey revealed that a majority of Americans fear the potential negative impacts of AI, including the spread of hate speech and radicalization.

Notably, the Anti-Defamation League reported in a separate report that white supremacists perpetrated an “unusually high” number of ideologically driven mass killings in 2022, highlighting the existing threat of right-wing extremism.

The controversy surrounding AI-based chatbots extends beyond the specific case of Gab. In 2023, an app called “Historical Figures,” which allowed users to chat with simulated historical figures, sparked outrage.

Screenshots emerged of users conversing with figures such as Heinrich Himmler, a notorious Nazi leader, whose chatbot denied his role in the Holocaust.

These instances serve as stark reminders of the potential dangers associated with AI, particularly when combined with malicious intent. As calls for regulation of AI rise, addressing the potential for radicalization through AI-powered tools must become a central focus of this conversation.

Failing to do so could have significant consequences for society, allowing these tools to manipulate and exploit vulnerable individuals.
