FROM SPUD TO SILICON

Photo: Unsplash

Had you asked me in mid-1975 how close to the action I was stationed, I could have answered very precisely: 18 km southeast of the Lebanese border, 15 km west of the Syrian border, and 3 km north of a cafe at a new Israeli settlement called Katzrin, which served the best hot chips in the multiverse. Half a century on, I’m still banging on about chips, but this time the silicon kind.

But front lines are no longer measured just by location and distance as they were back then; they can also run across the glass screen of your smartphone. The Gaza conflict has become a primary case study of how artificial intelligence is used to aim narratives as well as missiles. I’m absolutely useless with technical stuff, so please forgive me if I mumble and stumble through this post and get something wrong.

For as long as I can remember, Hasbara, the Hebrew word for ‘explanation’, has been used to describe Israel’s strategic communication efforts - in other words, propaganda. Back in those days, it was all done via traditional media: radio, print, TV and the occasional pamphlet drop. Today, Hasbara is supercharged by devastatingly effective and frightfully expensive generative AI, creating a digital environment where the line between organic public opinion and algorithmic manufacture has all but vanished - helped along ever so nicely by a thoroughly dumbed-down social media audience.

One of the most significant shifts in recent years is the privatisation and scaling of propaganda. Investigative reports from organisations such as Meta, OpenAI, and The New York Times have exposed a sophisticated ecosystem of Tel Aviv-based firms that provide influence-for-hire to government clients. A primary example is the firm Stoic. In 2024, it was revealed that Israel’s Ministry of Diaspora Affairs funded a multi-million-dollar covert operation using Stoic’s AI tools. The campaign used hundreds of fake accounts on X, Facebook, and Instagram to pose as Americans, specifically targeting Black Democratic lawmakers. These AI-generated concerned citizens posted thousands of comments supporting Israel’s military actions, often using large language models to make the text sound authentically human and varied enough to bypass standard bot detection systems.

The scale of these operations has become a major financial and structural commitment. By 2026, the Israeli government’s budget for global digital influence reached unprecedented levels, with specialised contracts such as a six-million-dollar deal with consultants to steer conversational outputs on AI platforms like ChatGPT and target younger audiences on TikTok. While the exact number of operatives remains obscured by private contractors, the infrastructure involves state agencies like Lapam, which has sponsored thousands of targeted ads in short windows to dominate international feeds.

To stay ahead of platform moderators, these AI swarms have moved beyond simple automation to sophisticated evasion techniques. Modern operations use ‘vibe-coding’ and human-mimicry prompts, instructing AI to post with intentional typos, slang and irregular timing like a real person (and here I am paying Grammarly $44 a month to avoid all that). This makes it nearly impossible for algorithms to distinguish a bot from a genuine user. Operatives also use metadata stripping to remove the digital fingerprint left by AI tools in images and videos. By cleaning technical data and routing traffic through North American proxy servers, these campaigns pose as local grassroots movements, a tactic known as astroturfing. This allows them to bypass the ‘AI Info’ labels and security filters that tech companies claim protect the public.
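For the technically curious, metadata stripping is not exotic. Here is a toy Python sketch of the idea, using the common Pillow imaging library (my choice of tool for illustration, not anything named in the reporting): re-save an image from its raw pixels only, so EXIF tags and other embedded data that could identify the generating software are simply never copied across.

```python
# Minimal sketch of metadata stripping, assuming the Pillow library.
# File paths and workflow are invented for illustration.
from PIL import Image

def strip_metadata(src_path: str, dst_path: str) -> None:
    """Re-save an image with pixel data only, discarding EXIF and
    other metadata blocks that could fingerprint the creating tool."""
    with Image.open(src_path) as img:
        clean = Image.new(img.mode, img.size)  # blank image, no metadata
        clean.paste(img)                       # copy pixels, nothing else
        clean.save(dst_path)
```

The detection labels that platforms rely on often read exactly this kind of embedded data, which is why a few lines like these can be enough to slip past them.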

As the conflict expanded into 2025, the use of synthetic media reached a fever pitch. During heightened tensions between Israel and Iran, AI-generated imagery became a central weapon. One incident involved the circulation of high-quality AI videos that appeared to show precision strikes on the Evin prison in Tehran. The goal was to stoke domestic unrest in Iran by making it seem as though the gates were being liberated by external forces. While the footage was later proven synthetic, it had already been picked up by several mainstream news outlets, showing how AI can create facts on the ground before fact-checkers can even respond.

In addition to deepfakes, AI is highly effective at sentiment analysis and micro-targeting. By processing large volumes of social media data, AI can identify demographics most vulnerable to specific messages. This supports the ‘firehose of falsehood’ strategy, where AI generates large amounts of content across platforms. The aim is not necessarily to convince people of a particular falsehood, but to overwhelm them with conflicting information, eroding belief in objective truth. This digital fatigue enables state narratives to gain traction among audiences too exhausted to verify every claim.
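To make the micro-targeting idea concrete, here is a deliberately crude Python sketch (the word lists and bucket names are entirely my invention): score posts for sentiment with a simple lexicon, then sort an account into a targeting bucket. Real systems use far more sophisticated models, but the logic of sorting audiences by measured sentiment is the same.

```python
# Toy lexicon-based sentiment scoring and audience bucketing.
# Word lists and bucket labels are invented for illustration only.
POSITIVE = {"support", "proud", "justified", "safe"}
NEGATIVE = {"outrage", "atrocity", "condemn", "shame"}

def sentiment_score(text: str) -> int:
    """Positive-word count minus negative-word count."""
    words = text.lower().split()
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

def bucket(posts: list[str]) -> str:
    """Assign an account to a targeting bucket from its post history."""
    total = sum(sentiment_score(p) for p in posts)
    if total > 0:
        return "reinforce"  # already sympathetic: amplify their content
    if total < 0:
        return "flood"      # hostile: swamp with high-volume messaging
    return "persuade"       # undecided: send tailored appeals
```

The ‘flood’ bucket is where the firehose strategy lives: hostile audiences are not argued with but buried.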

It is impossible to separate Israel’s use of AI in propaganda from its use in the field. Systems like The Gospel and Lavender have been used to identify thousands of targets in Gaza. However, these systems also serve a secondary propaganda purpose. The Israeli military often frames these tools as surgical and objective, using the prestige of AI to project an image of clinical precision that minimises human error. Critics and human rights organisations argue this is a form of algorithmic washing, using the perceived neutrality of math to mask high civilian casualty rates and provide a veneer of legality to mass-scale destruction. By branding the war as high-tech and automated, the state attempts to shield itself from the moral outcry typically associated with conventional urban warfare.

The danger of AI-driven propaganda lies in its unprecedented scalability. In previous wars, a propaganda office required hundreds of writers and editors to maintain a global narrative. Today, a single operative with a well-tuned model can produce the output of a professional newsroom in a fraction of the time. Furthermore, the plausible deniability provided by private contractors makes accountability nearly impossible. When a campaign is caught, the government can claim it was an independent firm acting on its own; when the firm is caught, it can claim it was just providing standard marketing services.

It’s clear that the information war has entered a permanent state of automation. The Israeli model of combining state-funded goals with private-sector AI innovation has set a template that other nations are already following. For the average user, the takeaway is sobering: in a world of weaponised AI, your outrage is a metric, and your empathy is a target. The most effective defence is no longer just better algorithms, but a relentless, manual scepticism of everything that appears on your feed.

And just as an afterthought, everything that took me 12 hours to achieve during my normal day’s work as a logistics officer back then could likely be done in minutes by Grok today.
