    How extremist groups like 'Islamic State' are using AI


    Cathrin Schaer

A video shows a scene from Family Guy, one of the world's best-known animated comedies, in which main character Peter Griffin is forced at gunpoint to drive a van containing a bomb over a bridge. The clip comes from the episode "Turban Cowboy," which aired in 2013. However, the audio accompanying the video has been changed: Peter Griffin, normally voiced by show creator Seth MacFarlane, sings some unusual lyrics.

    "Our weapons are heavy, our ranks are many, but the soldiers of Allah are more than ready," he warbles in the song in his distinctive Rhode Island accent, meant to encourage followers of the extremist "Islamist State" group. The animation obviously isn't MacFarlane's latest satirical tune but just one illustration of how extremist groups use advanced computing or artificial intelligence (AI) to create content for their followers.

    The term AI covers a wide range of digital technologies and can mean anything from the faster processing of large amounts of digital data for analysis to what's known as "generative AI," which "generates" new text or visuals based on huge amounts of data. That's how this Peter Griffin song was created.

    "The rapid democratization of generative AI technology in recent years ... is having a profound impact on how extremist organizations engage in influence operations online," writes Daniel Siegel, a US researcher who analyzed how AI is used for malicious purposes in an article for the Global Network on Extremism and Technology in which he highlighted the Peter Griffin video.

    Over the past year, observers from a variety of extremist monitoring organizations have reported how IS and other extremist groups are encouraging followers to make use of new digital tools. In February, a group affiliated with Al Qaeda announced it would start holding AI workshops online, The Washington Post reported. Later, the same group released a guide on using AI chatbots.

    In March, after a branch of IS killed over 135 people in a terror attack on a Moscow theater, one of the group's followers created a fake news broadcast about the event and published it four days after the attack.

    Earlier this month, officers from Spain's Ministry of the Interior arrested nine young people around the country who had been sharing propaganda celebrating the IS group, including one man described as being focused on "extremist multimedia content, using specialized editing applications supported by artificial intelligence."

    "What we know about AI use today is that it works as a complement to official propaganda by both Al Qaeda and IS," says Moustafa Ayad, executive director for Africa, the Middle East and Asia at the London-based Institute for Strategic Dialogue (ISD), which investigates extremism of all kinds.

    "It allows supporters and unofficial support groups to create emotive content specifically used to galvanize the base of supporters around core concept." He added that the way it looks means that it may not be picked up by content moderators on popular social media platforms either.

    In fact, Ayad told DW that even the more ridiculous and unrealistic IS content is often enough of a novelty for followers to share it among themselves. None of this is surprising to longtime observers of the IS group. When the extremist group first came to prominence around 2014, it was already making propaganda videos with fairly high production values to intimidate enemies and recruit followers.

    "All this speaks to something the ISD has continually noted," Ayad noted. "Terrorist groups and their supporters continue to be early adopters of technology to serve their interests." But how dangerous is this sort of content really? After all, the fake news broadcast about the Moscow attack looks fake, and the Peter Griffin song isn't hurting anybody. Or is it?

Monitoring groups have listed a variety of ways in which extremist groups could use AI. Besides propaganda, they could also use chatbots built on large language models, such as ChatGPT, to converse with potential new recruits, experts suggest. Once the chatbot has aroused interest, a human recruiter might take over, they say.

AI models like ChatGPT also have certain rules written into their systems that prevent them from helping users with things like getting away with murder. However, these safeguards have proven unreliable in the past, and would-be terrorists might be able to circumvent them to obtain dangerous information.

    There are also fears that extremists could use AI tools to undertake digital or cyberattacks or to help them plan terror attacks in real life.

Experts argue that while AI has worrying potential in the hands of extremists, real life is still more dangerous. In a 2019 paper in the journal Perspectives on Terrorism, researchers examined the connection between how much propaganda the IS group put out and its actual physical attacks. There was "no strong and predictable correlation," they concluded.

    "It's similar to the discussion we were having about cyberweapons and cyber bombs around 10 years ago," says Lilly Pijnenburg Muller, a research associate and expert on cybersecurity at the Department of War Studies at King's College London.

Never mind AI: today, even rumors and old videos can have a destabilizing impact and lead to a flurry of disinformation on social media, she told DW. "States have conventional bombs that can be dropped, if that is their intention."

    "I don't know if, at this stage, the use of AI by foreign terrorist organizations and their supporters is more dangerous than their very real and graphic propaganda involving the wanton murders of civilians and attacks on security forces," the ISD's Ayad says.

    "Right now, the bigger threat is from these groups actually conducting attacks, inspiring lone actors or successfully recruiting new members because of their responses to the geopolitical landscape, namely the Israeli war on Gaza in response to October 7," he continued. "They are using the civilian deaths and Israel's actions as a rhetorical device for recruitment and to build out campaigns."

    DW Bureau