As Israel’s ground assault on Gaza continues, a parallel conflict is raging on social media platforms. How is AI shaping the Gaza conflict on social media? This digital battleground is populated not just by people but also by sophisticated bots that influence narratives and shape public opinion. Lebanese researchers Ralph Baydoun and Michel Semaan from InflueAnswers, a research and strategic communications consulting firm, have been monitoring the behavior of what appear to be “Israeli” bots since October 7, revealing some intriguing patterns and tactics.
Pro-Palestinian vs. Pro-Israeli Bots: A Social Media Tug-of-War
Initially, pro-Palestinian voices dominated the social media landscape. However, Baydoun and Semaan soon noticed a significant increase in pro-Israeli comments. According to Semaan, when a pro-Palestinian activist posts something, a flood of pro-Israeli comments appears within a short span, anywhere from five to twenty minutes to a day later. These comments seem almost human but are actually generated by bots.
Understanding Bots: The Good, the Bad, and the Ugly
A bot, or robot, is a software program that performs automated, repetitive tasks. While good bots enhance the user experience by notifying users of events, helping them discover content, or providing customer service, bad bots can inflate social media follower counts, spread misinformation, facilitate scams, and harass users. By the end of 2023, nearly half of all internet traffic came from bots, with bad bots accounting for 34 percent, according to a study by the cybersecurity company Imperva.
How Bots Sow Doubt and Confusion
Pro-Israeli bots primarily aim to create doubt and confusion around pro-Palestinian narratives. They engage in large-scale disinformation campaigns, making it difficult for users to discern between real and bot-generated content. The advanced capabilities of AI have exacerbated this issue, enabling the creation of sophisticated bot networks that can drown out human voices and distort truthful communication.
The Evolution of Bots: From Simple Scripts to Sophisticated AI
Early bots operated on simple predefined rules, but modern bots employ advanced AI techniques. During the 2016 US presidential election, bots played a significant role, with a study by the University of Pennsylvania finding that one-third of pro-Trump tweets and nearly one-fifth of pro-Clinton tweets were bot-generated. The emergence of large language models (LLMs) like ChatGPT has further advanced bot capabilities, making them more human-like and harder to detect.
The Mechanics of Superbots: A Three-Step Process
Baydoun and Semaan developed their own superbot to understand its operation. Here’s how it works, with a minimal code sketch after each step:
Step 1: Find a Target
- Superbots target high-value users with verified accounts or high reach.
- They search for posts with specific keywords or hashtags such as #Gaza, #Genocide, or #Ceasefire.
- Posts with significant engagement are prioritized.
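To make the targeting step concrete, here is a minimal sketch in Python. It assumes posts have already been pulled from a platform API into plain dictionaries; the field names, weights, and thresholds are illustrative, not the researchers’ actual code.

```python
# Minimal sketch of the targeting step. Assumes posts have already been
# fetched from a platform API into plain dictionaries; all field names,
# weights, and thresholds here are illustrative.

TARGET_TAGS = {"#gaza", "#genocide", "#ceasefire"}

def score_post(post: dict) -> float:
    """Rank a post by how attractive a target it is."""
    text = post["text"].lower()
    if not any(tag in text for tag in TARGET_TAGS):
        return 0.0  # no relevant keyword or hashtag, ignore
    engagement = post["likes"] + post["reposts"] + post["replies"]
    reach_bonus = 2.0 if post["author_verified"] else 1.0
    return engagement * reach_bonus

def pick_targets(posts: list[dict], limit: int = 10) -> list[dict]:
    """Return the highest-scoring matching posts, best first."""
    scored = sorted(((score_post(p), p) for p in posts),
                    key=lambda pair: pair[0], reverse=True)
    return [p for s, p in scored if s > 0][:limit]
```

Ranking by raw engagement plus a bonus for verified authors is the simplest possible proxy for “high reach”; a real operator would likely weigh far richer signals.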
Step 2: Create a Prompt
- The bot generates a response by feeding the post’s content into an LLM like ChatGPT.
- A typical prompt might be: “Imagine you are a user on Twitter. Respond to this tweet with a pro-Israeli narrative in a conversational but assertive manner.”
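A rough sketch of the prompt step follows, using the OpenAI Python client as one possible chat-completion backend; the model name is an assumption, and the prompt text simply mirrors the example above.

```python
# Sketch of the prompt step, using the OpenAI Python client as one possible
# chat-completion backend. The model name is an assumption; the prompt text
# mirrors the example quoted above.

from openai import OpenAI

client = OpenAI()  # assumes an API key is set in the environment

PROMPT_TEMPLATE = (
    "Imagine you are a user on Twitter. Respond to this tweet with a "
    "pro-Israeli narrative in a conversational but assertive manner.\n\n"
    "Tweet: {tweet_text}"
)

def generate_reply(tweet_text: str) -> str:
    """Feed the target post into the LLM and return the generated reply."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=[
            {"role": "user",
             "content": PROMPT_TEMPLATE.format(tweet_text=tweet_text)}
        ],
    )
    return response.choices[0].message.content
```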
Step 3: Respond to a Post
- Bots generate replies in seconds, with slight delays to mimic human behavior.
- They engage persistently, continuing the conversation if the original poster responds.
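The reply step might look like the following sketch, where post_reply stands in for whatever platform call the bot would actually use and the delay window is an illustrative guess at how “slight delays” might be randomized.

```python
# Sketch of the reply step. post_reply is a stand-in for a real platform
# call; the delay window is an illustrative guess.

import random
import time

def reply_like_a_human(post_id: str, reply_text: str, post_reply) -> None:
    """Wait a plausible interval, then send the reply."""
    delay = random.uniform(30, 300)  # 30 seconds to 5 minutes
    time.sleep(delay)
    post_reply(post_id, reply_text)
```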
Spotting a Superbot: Clues to Look For
Despite their sophistication, bots still exhibit certain telltale signs (a rough scoring sketch follows the lists below):
Profile Characteristics
- AI-generated profile images may have minor defects.
- Bot names often contain random numbers or unusual capitalization.
- User bios are typically generic and lack personal details.
Account Activity
- Creation dates are usually recent.
- Bots often follow other bots to build follower counts.
- They repost diverse content but reply with specific, targeted comments.
- Bots post frequently and at all times of the day.
- Their language may be overly formal or exhibit unusual sentence structures.
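The clues above can be combined into a crude suspicion score, as in this sketch; the profile fields and thresholds are assumptions, and real detection systems weigh far more signals than this.

```python
# Crude suspicion score built from the clues listed above. The profile
# fields and thresholds are assumptions.

from datetime import datetime, timezone

def bot_suspicion_score(profile: dict) -> int:
    """Count how many superbot red flags a profile shows."""
    score = 0
    if any(ch.isdigit() for ch in profile["username"][-4:]):
        score += 1  # handle ends in what look like random digits
    if len(profile.get("bio", "")) < 20:
        score += 1  # generic or empty bio
    # created_at is assumed to be a timezone-aware datetime
    age_days = (datetime.now(timezone.utc) - profile["created_at"]).days
    if age_days < 90:
        score += 1  # recently created account
    if profile["posts_per_day"] > 50:
        score += 1  # posting at an inhuman rate, around the clock
    return score
```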
The Future of AI-Generated Content
A Europol report predicts that as much as 90 percent of online content could be AI-generated by 2026. This includes deepfake images, audio, and videos, which have already been used to influence voters, such as in India’s recent elections. The impact on upcoming US elections is also a concern.
Digital rights activists, like Jillian York from the Electronic Frontier Foundation, emphasize the threat these bots pose to freedom of expression. Efforts to hold big tech companies accountable for protecting elections and citizens’ rights are ongoing, but it’s a challenging battle.
The Growing Presence of AI-Generated Content
The increasing sophistication of AI technology means that distinguishing between human and bot-generated content will become even more challenging. Advanced AI models, such as LLMs, enable bots to produce content that is nearly indistinguishable from human writing. This has significant implications for social media, where bots can engage in complex interactions and influence public discourse.
Step 1: Targeting High-Value Users
- Superbots prioritize high-value targets, such as verified accounts or users with a large following. They search for posts with specific keywords or hashtags like #Gaza, #Genocide, or #Ceasefire. Posts with high engagement rates are given preference, ensuring that the bots’ responses reach a broad audience.
Step 2: Generating AI-Powered Responses
- The superbot uses LLMs like ChatGPT to generate responses to targeted posts. By inputting the content of the original post, the bot creates a prompt that guides the AI to produce a pro-Israeli narrative. This narrative is crafted to appear conversational and assertive, increasing its credibility and impact.
Step 3: Engaging in Persistent Dialogue
- Bots generate replies within seconds, introducing slight delays to mimic human behavior and enhance their believability. They engage in ongoing conversations, responding persistently to any replies from the original poster. This continuous interaction can effectively drown out human voices and dominate the conversation, as the rough loop sketched below illustrates.
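Under the same assumptions as the earlier sketches, the persistence described in Step 3 could be as simple as a polling loop: fetch any new replies in the thread, run each through the LLM helper, and answer again. fetch_new_replies and post_reply are hypothetical platform calls; generate_reply is the helper sketched earlier.

```python
# Sketch of the persistence loop. fetch_new_replies and post_reply are
# hypothetical platform calls; generate_reply is the LLM helper sketched
# earlier. The polling interval and round limit are arbitrary.

import time

def keep_the_conversation_going(thread_id: str, fetch_new_replies, post_reply,
                                generate_reply, max_rounds: int = 10) -> None:
    """Answer every new reply in the thread until the round limit is hit."""
    for _ in range(max_rounds):
        new_replies = fetch_new_replies(thread_id)
        if not new_replies:
            time.sleep(60)  # nothing new yet, poll again in a minute
            continue
        for reply in new_replies:
            post_reply(thread_id, generate_reply(reply["text"]))
```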
The Impact on Freedom of Expression
The rise of AI-generated content poses a significant threat to freedom of expression. As bots become more sophisticated, they can manipulate online discourse and obscure truthful communication. Jillian York highlights the concern that people’s voices are being drowned out by bots and by state-sponsored propaganda. The sheer volume of AI-generated content can overwhelm human users, making it difficult for genuine voices to be heard.
Holding Tech Companies Accountable
Digital rights activists are increasingly calling on big tech companies to take responsibility for the impact of AI-generated content on elections and public discourse. There is a pressing need for these companies to implement measures that protect users from misinformation and manipulation. However, activists face significant challenges in this endeavor, as they often lack the resources and influence to effectively counter the power of large tech corporations.
Conclusion: Navigating the Future of AI and Social Media
The battle of bots on social media highlights the complex interplay between technology, public opinion, and freedom of expression. As AI continues to evolve, distinguishing between human and bot-generated content will become increasingly difficult, posing significant challenges for truthful communication and digital rights. Efforts to hold tech companies accountable and protect genuine human voices are crucial in this rapidly changing landscape.