HALKANO BORU: The intersection of terrorism and generative artificial intelligence: emerging issues

As society becomes increasingly dependent on the Internet and technology, it is critical to recognize that terrorist organizations are also following this shift.

These organizations tend to adopt technology early, exploiting new tools and platforms to further their nefarious plans.

Generative AI, also known as GenAI, is a branch of artificial intelligence (AI) that can generate various forms of data, including but not limited to photographs, videos, audio, text, and 3D models.

Extremist groups have begun experimenting with artificial intelligence, particularly generative AI, to create a steady stream of fresh propaganda.

Experts are concerned that these organizations’ growing use of generative AI techniques could undermine the efforts major technology companies have made in recent years to keep extremist content off their platforms.

Terrorist organizations have already begun using AI-based tools to create propaganda.

Recently, Islamic State support networks carried out a social media ploy on Facebook and YouTube, posing as popular English- and Arabic-language TV channels and using AI-generated voiceovers to deliver ideological and operational messaging to their followers as news.

Deepfakes, the product of converging artificial intelligence manipulation techniques, have received significant attention because of their profound implications.

Deepfakes can be used for radicalization by exploiting individuals’ emotional and ideological sensitivities.

Extremist organizations may alter recordings to fabricate celebrity endorsements of extremist ideology, disseminate false evidence in support of extremist ideas, and promote propaganda that glorifies violence and incites acts of terrorism.

In addition, hostile groups have used artificial intelligence tools to spread extremist content.

Terrorist organizations are using generative artificial intelligence for a range of purposes beyond simple image manipulation.

One strategy is the use of automatic translation systems to render propaganda efficiently into many languages, along with the generation of personalized messages at scale to enhance online recruitment efforts.

Artificial intelligence-enabled communication platforms, mainly chat apps, have the potential to become powerful tools for terrorists seeking to radicalize and recruit people.

The proliferation and rapid integration of the advanced deep learning models behind generative AI have raised concerns that terrorists and violent extremists could use these tools to enhance their activities in both the virtual and physical realms.

The threat of malicious use of generative AI can be grouped into three categories: digital, physical and political security.

Extremist groups are producing neo-Nazi video games, pro-ISIS TikTok posts, and white supremacist music.

Generative AI can streamline this production and enable pre-existing music and video to be rapidly reworked into realistic, hate-filled versions.

The current state of the field confirms how rapidly generative AI is spreading.

According to Tech Against Terrorism, with the advent of large language models (LLMs), terrorist groups’ use of LLM-generated and LLM-edited content could bypass and defeat existing content moderation technology such as hashing.
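
A minimal Python sketch illustrates the weakness Tech Against Terrorism describes: a cryptographic digest such as SHA-256 changes completely when even a single word changes, so a database of exact hashes of known content cannot catch a paraphrased variant. (The strings below are innocuous placeholders, and real moderation systems use perceptual rather than cryptographic hashes, but paraphrase undermines those in much the same way.)

```python
import hashlib

def sha256_hex(text: str) -> str:
    """Exact digest of a text, as used in naive hash-based matching."""
    return hashlib.sha256(text.encode("utf-8")).hexdigest()

# The original message and a light paraphrase of it
# (innocuous placeholders standing in for actual propaganda).
original = "the meeting will be held at the old bridge at dawn"
paraphrase = "the gathering takes place by the old bridge at sunrise"

print(sha256_hex(original))
print(sha256_hex(paraphrase))
# The two digests share nothing, so the paraphrase sails past any
# filter that matches exact hashes of previously flagged content.
print(sha256_hex(original) == sha256_hex(paraphrase))  # False
```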

In McKinsey’s latest annual global survey, organizations report deploying AI capabilities to explore the potential of GenAI, yet few firms are fully prepared for its widespread risks.

Technology corporations have taken steps to prohibit their tools from generating the most harmful kinds of content.

According to the Organization for Economic Co-operation and Development, there are more than 1,000 pieces of legislation and policy initiatives in the field of artificial intelligence from 69 countries, territories and the European Union (EU).

These efforts include the development of comprehensive legislation focusing on specific use cases, national AI strategies, and voluntary guidelines and standards.

However, despite all these policy initiatives, some actors can still modify the datasets used in generative AI models to selectively create and distribute malicious material, so the risk of hostile use remains.

Rules of the road

Every country is struggling to keep up with the ever-changing landscape of online terrorist activity.

Most countries are wrestling with regulation to counter these threats, but existing rules are aimed at managing the risks posed by social media platforms, not artificial intelligence.

A complete ban on artificial intelligence is not possible because AI is developed by the commercial sector, not the government.

To combat the misuse of generative AI by terrorist and hostile groups, governments, the private sector, technology companies, and individuals must all take action.

First, governments should devote significant resources to evaluating advanced artificial intelligence technologies as a way to contain risks.

Second, governments and the private sector should develop generative AI tools that detect terrorist content based on semantic understanding rather than exact matching, as sketched after the third point below.

Third, educate the public about the threat of generative AI. This can be achieved through the development of training programs on artificial intelligence and ethics.
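
As an illustration of the second recommendation, the sketch below screens new posts by semantic similarity to previously identified material rather than by exact matching. It assumes the open-source sentence-transformers library; the model name, the placeholder strings, and the 0.75 threshold are illustrative choices, not a vetted configuration.

```python
import numpy as np
from sentence_transformers import SentenceTransformer

# Small general-purpose text encoder (an illustrative choice of model).
model = SentenceTransformer("all-MiniLM-L6-v2")

# Reference corpus of previously identified texts
# (innocuous placeholders are used here).
known_bad = [
    "the meeting will be held at the old bridge at dawn",
    "bring your equipment and await further instructions",
]
bad_vecs = model.encode(known_bad, normalize_embeddings=True)

def flag_for_review(post: str, threshold: float = 0.75) -> bool:
    """Flag a post whose meaning is close to known material, even when
    the wording (and therefore any exact hash) is entirely different."""
    vec = model.encode([post], normalize_embeddings=True)[0]
    similarity = float(np.max(bad_vecs @ vec))  # cosine similarity on unit vectors
    return similarity >= threshold

print(flag_for_review("the gathering takes place by the old bridge at sunrise"))
print(flag_for_review("the farmers market opens on Saturday morning"))
```

Flagged items would still go to human reviewers; the point is that a semantic filter can catch the rephrasings that defeat hash matching.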

To overcome the challenge posed by the use of generative AI to spread extremist propaganda, artificial intelligence and social media companies must take a full range of measures.

Successfully mitigating the threat requires going beyond basic deterrents such as watermarks to more advanced tactics.

It is important to keep a close watch on how bad actors bypass security measures or use artificial intelligence technologies to enhance influence operations.

AI companies are already taking proactive measures, such as suspending accounts involved in malicious activity, restricting specific requests that contribute to the creation of malicious content, issuing warnings to users about potential misuse, and investing in enhanced oversight of their platforms.

However, there are many other actions that can be taken, such as collaborating on cross-sector hash-sharing initiatives that bring together artificial intelligence firms, social media platforms, and even intelligence officials.
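
At a technical level, hash-sharing means every participant checks uploads against a jointly maintained database of digests of known terrorist content. Consortium efforts such as GIFCT’s hash-sharing database rely on perceptual hashes (for example, PDQ) so that lightly edited media still match; the sketch below implements a toy “average hash” in that spirit, assuming Pillow is installed and using hypothetical filenames.

```python
from PIL import Image

def average_hash(path: str) -> int:
    """64-bit aHash: downscale to 8x8 grayscale, set a bit per pixel above the mean."""
    img = Image.open(path).convert("L").resize((8, 8), Image.LANCZOS)
    pixels = list(img.getdata())
    mean = sum(pixels) / len(pixels)
    bits = 0
    for p in pixels:
        bits = (bits << 1) | int(p > mean)
    return bits

def hamming(a: int, b: int) -> int:
    """Number of differing bits between two hashes; small means visually similar."""
    return bin(a ^ b).count("1")

# The shared database would hold hashes contributed by all participants;
# "known_bad.png" and "upload.png" are hypothetical files for illustration.
shared_hashes = {average_hash("known_bad.png")}
upload = average_hash("upload.png")
flagged = any(hamming(upload, h) <= 5 for h in shared_hashes)
print("flag for review:", flagged)
```

Unlike the exact digests above, nearby perceptual hashes still match after cropping or recompression, which is what makes a shared database useful across platforms.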

These collaborations should include exercises that replicate genuine exploits, promoting discussion of vulnerabilities and defensive strategies and improving countermeasures in controlled environments.

Increased collaboration among the corporate and government sectors, academia, the technology industry, and the security community will raise awareness of the potential abuse of AI-based platforms by militant radicals, facilitating the creation of more advanced defenses and countermeasures.

By pooling specialized knowledge through these collaborative efforts, participants can develop a deep understanding of the risks associated with AI and build strong, consistent protections.