Tracking Adversaries and First Responding to Disinfo Ops: The Evolution of Deception and Manipulation Tactics on Gulf Twitter

Marc Owen Jones, Hamad Bin Khalifa University

Authoritarian regimes and other ‘bad’ actors in the Middle East are using social media for large scale deception operations. With little transparency from tech companies and poor regulation around disinformation, monitoring and tracking those operations falls uncomfortably upon journalists, activists and academics.[1] It is therefore necessary to share and discuss emerging techniques of identifying deception with academics across disciplines. It is also important to be transparent about detection methods in an environment where the terms ‘bot’ and ‘troll’ are frequently deployed against those who have opposing views. Being clear about methods of identifying deception can be instructive in a number of ways. Without identifying and acknowledging such deception, sociological studies of social media will inevitably be plagued with ‘corrupted’ data. Scholars using social media data must be adept at filtering out such deception.

Although studies of fake news and disinformation have usually focused on content, it is important to include the means of distribution (e.g. bots or sock puppets) of that content. For this reason, deception is the preferred term: ‘Deception is the wilful manipulation of the information space through erroneous content and manipulated forms of distribution, with the intent to cause some form of harm through demonisation, adversarial content, omission, misdirection, whitewashing, or influencing information availability in the service of political power maintenance.’[2] Deception can often involve co-ordinated and inauthentic behaviour, which is why Facebook tends to label it CIB (Co-ordinated inauthentic behaviour).

Examining deception has other benefits. It can provide insights into social media governance and policy. Techniques of analysis can help us find fingerprints of certain deception actors, or in some cases, the entity behind such operations.  With this in mind, this paper documents some of the evolving ways of identifying certain types of Twitter influence operations in order to highlight the scale, resilience and diversity of the phenomenon – particularly those involving sock puppets (a social media account purporting to be one person but operated by someone else) and bots (an automated account that posts content according to a computer script). In doing so, it also highlights a number of case studies that reflect the dominant tropes of MENA-focused influence operations.

Government, Big Tech and the Opaque Deception Assemblage

Governments around the globe, and particularly authoritarian regimes, have sought to use social media to manipulate domestic and international publics. As others in this volume have pointed out, the MENA region is rarely the object of focus when it comes to social media disinformation. Akin Unver notes, for example, an emphasis on US-focused research.[3] In addition, Russia and China get a great deal of attention, partly reflecting the dominance of transatlantic security concerns in English-language scholarship. Despite this, countries in the MENA region are some of the most active targets and perpetrators of influence operations and deception. This is especially true when it comes to the micro-blogging site Twitter.

Twitter has since 2018 publicized attempts to take down state-backed information operations (see DiResta, Goldstein and Grossman in this collection for a discussion of Facebook takedowns).[4] Twitter takedowns are when Twitter removes accounts believed to be connected to state-backed influence operations. Of all the takedowns published since then, the bloc formed by Saudi Arabia, the UAE and Egypt represents one of the biggest actors in terms of number of accounts taken down, among the most prolific abusers of Twitter globally alongside Russia and China.[5] Iran also features in the top ten of Twitter information operation takedowns.

Technology companies such as Twitter should not be viewed as separate from the assemblage of forces that allow for the execution and perpetuation of influence operations. Social media companies have long been accused of neglecting marginalised communities in the developing world, or of siding with the authorities of human rights-abusing states when it comes to content moderation and the censorship of resistance struggles. Ahmed Shaheed and Benjamin Greenacre discuss in further detail in this volume the accusation that Facebook has complied all too freely with Israeli requests to take down Palestinian content,[6] and, more broadly, that it gravitates toward ‘the richer, more powerful, and better-organized side.’[7] In the context of the Gulf, both Facebook and Twitter have been criticised for stationing their MENA operations in the UAE, which has invited similar accusations of bias as well as concerns about human rights.[8] I made Twitter aware of regional manipulation as early as 2016,[9] when I provided a list of several thousand bot accounts promoting anti-Shia hate speech in Arabic, but the problem has persisted until at least 2020.[10]

Despite its move towards some transparency, Twitter’s published data is a poor indicator of actual influence operations. Many potential state-backed information operations go unreported, so the absence of accounts from Twitter’s official archives does not mean other suspended accounts are not connected to a particular state actor. As a case in point, in 2020 Twitter released data for several accounts connected to a Saudi-backed information operation against Qatar. The scale of the operation at the time, however, was much bigger.[11] Thousands of suspicious accounts that I detected, and that Twitter later suspended, were not released as part of the state-backed archives.[12] Similarly, while it is tempting to assume that political disinformation operations are primarily the domain of the state, multiple actors can be involved, from individual ‘hackers’ to digital marketing firms. As Alexei Abrahams and Andrew Leber explore in this volume, influence operations can be conducted by multiple actors, from states to private individuals, and efforts need to be made to expand research ‘beyond top-down efforts.’[13] While researchers of course hope to find so-called ‘puppet-masters’,[14] unambiguous attribution can be difficult, with limited opportunities for a smoking gun. Identifying deception is not the same as determining attribution. Crucially, though, deceptive influence operations and manipulated data can be identified without decisive attribution.

The first priority in tackling disinformation is detection. Understanding the methods of manipulation is key to identifying influence operations. Because Twitter is not fully transparent about what factors lead it to designate activity as a state-backed information operation, academics, analysts and journalists have to be adept at reverse engineering, or creating other means of determining, suspicious activity in order to rapidly detect manipulation. What I call first responding to manipulation is imperative for achieving timely disruption of propaganda and disinformation. The longer such content lingers, the more potential it has to circulate and adversely impact the information ecosystem.

The following questions are therefore key: What computational and non-computational techniques are being used to manipulate Twitter and promote authoritarian propaganda in the Persian Gulf? How is this manipulation evolving in the Persian Gulf, and how can we detect it? How is it developing in terms of its ability to evade spam detection algorithms? What can such research tell us about Twitter governance in MENA? Is Twitter really doing enough to combat computational propaganda? What are some examples of trends manipulated using these methods? And given all this manipulation, how can social media be made useful as a means of studying public opinion?

Indicators of deception operations on Twitter

A common tactic of deception is the use of thousands of bots (automated Twitter accounts, usually simulating real people) to promote specific propaganda or disinformation. Deception here involves the content itself, but also the creation of an illusion of false consensus and imaginary publics (astroturfing). Twitter bots can be considered anomalies compared to organic Twitter activity. Since 2016, a useful method for anomaly detection has been looking at unusual spikes in account creation. Here, researchers can examine hashtags to see whether there are groups of accounts that were disproportionately created within a temporally limited time frame.[15] The logic is that if hundreds or thousands of accounts are set up in a short time frame, and are also tweeting on the same topic, there is a strong probability they were created solely for the purpose of platform manipulation. It used to be more common in the MENA region for bots simply to copy and paste identical content across thousands of accounts. This seems less common now, but despite Twitter taking action to combat it,[16] this so-called ‘copy-pasta’ has not been wiped out completely.
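The creation-spike heuristic described above can be sketched in a few lines of code. This is a minimal illustration rather than a tool used in the research: it assumes account metadata (a handle and a creation date) has already been collected for a hashtag, and the 40 per cent threshold is an arbitrary value chosen for the example.

```python
from collections import Counter
from datetime import date

def creation_spike(accounts, threshold=0.4):
    """Flag a set of hashtag participants as suspicious if a
    disproportionate share were created on a single day."""
    days = Counter(a["created_at"] for a in accounts)
    peak_day, peak_count = days.most_common(1)[0]
    share = peak_count / len(accounts)
    return {"peak_day": peak_day,
            "share": round(share, 2),
            "suspicious": share >= threshold}

# Toy data: six of eight accounts created on the same day.
accounts = (
    [{"handle": f"user{i}", "created_at": date(2019, 11, 2)} for i in range(6)]
    + [{"handle": "old1", "created_at": date(2012, 3, 1)},
       {"handle": "old2", "created_at": date(2015, 7, 9)}]
)
```

On real data the window would need tuning (a day, a week), and a spike alone is only probable cause for closer inspection, not proof of manipulation.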

On Gulf Twitter, at least, such indicators are perhaps less prevalent now than they were between 2016 and 2019. This does not mean, however, that inauthentic account creation is no longer a relevant metric. In November 2019, for example, thousands of accounts tweeting pro-Saudi propaganda around the Riyadh Agreement hashtag #اتفاق_الرياض were clearly created in a narrow time frame for the purpose of platform manipulation.[17] Many of the accounts engaged in this type of deception are connected to the Saudi-based news channel Saudi 24, which has been a prolific platform manipulator over the years.[18] This type of manipulation has been endemic. It has been used to boost Donald Trump’s anti-Iranian and pro-Saudi tweets, promote criticism of Qatar, spread anti-Shia hate speech, and promote praise of Mohammed bin Salman – to name but a few examples.[19]

Sometimes the sequence of a trend and its initiation, rather than account creation date, can be useful in detecting deception. In November 2019, for example, the hashtag السكران_يدعو_السكارى_للاراده (the drunk calls the drunks to the square) began to trend in Kuwait. The ‘drunk’ is a reference to former Kuwaiti MP Saleh al Mulla, and the ‘drunks’ to those who support him. The trend appeared to be an attempt to deter people from going to the square to protest. In this analysis, a number of other anomalies, including the application used to send each tweet (whether it was sent from an iPhone or Android, for example), were arguably more useful than account creation dates in determining deceptive activity. Most strikingly, all of the accounts that seeded the hashtag used the same application, Tweetdeck; a highly unlikely organic scenario. This then prompted what seemed to be organic take-up from real Kuwaiti Twitter users. A probable sock puppet farm then sprang into action: accounts using the Mobile Web (M2) application exclusively retweeted a single account that was critical of Saleh al Mulla. Overall, low app diversity (a low number of unique applications) is an important signature, especially if those starting a hashtag appear to be using just one application. (This may become less useful as a method of detection, as Twitter has, over time, limited the existence of bespoke applications through stricter API access measures.) It is also a good example of how a manufactured trend can provoke genuine public discussion on a topic.

Figure 1 Tweets showing the hashtag ‘The drunk calls the drunks to the square’. Yellow denotes tweets sent using Tweetdeck, Orange denotes M2
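As a rough sketch of the app-diversity signature described above, the following hypothetical check counts the unique source applications among the earliest tweets on a hashtag. The field names and the seed size are illustrative assumptions, not part of any production tool.

```python
from collections import Counter

def seed_app_diversity(tweets, seed_n=50):
    """Examine the source applications of the tweets that seeded a
    hashtag. Organic trends typically mix clients (iPhone, Android,
    Web); a single app among the seeders is a compounding anomaly."""
    seeds = sorted(tweets, key=lambda t: t["time"])[:seed_n]
    apps = Counter(t["source_app"] for t in seeds)
    top_app, top_count = apps.most_common(1)[0]
    return {"unique_apps": len(apps),
            "top_app": top_app,
            "top_share": top_count / len(seeds)}
```

In the Kuwaiti case above, a check like this on the earliest tweets would have returned a single application (Tweetdeck) with a 100 per cent share – the kind of compounding anomaly that merits closer inspection.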

This problematises the normative notion that the volume of bots is the issue. Indeed, if a small number of bots, trolls or influence ‘operators’ can successfully initiate and shape public discussion, then the sheer volume of accounts is not always the most important factor in defining the narrative. This is highlighted well in Andrew Leber and Alexei Abrahams’s study of elite-driven narratives in the Gulf Crisis.[20]

Network analysis, a graph-based form of analysis showing the relationships between communities of accounts, can also be a useful means of detecting deception. In May 2020, Yemeni Nobel Prize winner Tawakkol Karman was appointed to Facebook’s Oversight Board, a body created by Facebook to have the final say on key content moderation decisions.[21] Karman was perceived as a threat by some Gulf countries for her alleged support of the Muslim Brotherhood, and had been targeted with Karma spyware from the UAE.[22] Following the announcement from Facebook, online campaigns soon began to smear Karman, accusing her of being a terrorist, working for Turkey, or being an agent of the Muslim Brotherhood. English and Arabic hashtags trended, including ‘#no to Facebook Caliphate’.

Network analysis of the hashtag ‘no to Facebook Caliphate’ demonstrated significant suspicious activity. The density of interactions, and the communities formed by those interactions, can indicate potentially authentic versus inauthentic elements of those communities. In addition to the creation date anomalies mentioned before, the suspicious accounts formed distinct communities separate from the denser clusters. This relative isolation signified a lack of organic engagement with other users of the hashtag, which is itself unusual. Interaction within the distinct, separate communities was nevertheless high, meaning these suspicious accounts interacted a great deal with one another, but not with others – presumably to boost engagement and thus the perceived popularity of the trend.

Figure 2 Network graph of no to Facebook Caliphate, showing suspicious community in bottom left quadrant

It was also clear that the already-suspect community was sending tweets using the same Twitter application. Many of the accounts were also created in a narrow time window. A series of compounding anomalies such as these – distinct modularity (community structure), low app diversity, and low creation date diversity – indicates with greater certainty the existence of inauthentic and co-ordinated behaviour. The compounding anomalies also make it all the stranger that Twitter’s algorithms did not flag the accounts quickly. (In Figure 2, the blue accounts in the lower left quadrant indicate the separation of the suspicious community from the denser collection of accounts.)
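The intra-community interaction pattern described above can be quantified without specialist graph software. The sketch below is an illustrative simplification of what network analysis packages compute: it takes a list of directed interactions (retweeter, retweeted) and a candidate community, and returns the share of that community’s outgoing interactions that stay inside it. Values near 1.0 suggest a cluster boosting itself rather than engaging with the wider hashtag.

```python
def community_isolation(edges, community):
    """Share of a community's outgoing interactions (retweets,
    mentions) that target other members of the same community."""
    inside = outside = 0
    for src, dst in edges:
        if src in community:
            if dst in community:
                inside += 1
            else:
                outside += 1
    total = inside + outside
    return inside / total if total else 0.0
```

A high isolation score is only one compounding anomaly; it must be read alongside creation dates and app diversity, since tight-knit organic communities also exist.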

Chopped hashtags

An emerging trend in 2021 has been the use of ‘chopped hashtags’. Here, sock puppet accounts dilute and pollute critical hashtags using abbreviated versions of the real hashtags.[23] So instead of, for example, #ماذا_استفدنا_من_الرؤية (what do we benefit from Saudi Vision 2030), sock puppets would deploy the hashtag #ماذا_ (what_). The impact seems to be that the chopped hashtags trend more readily than the ones they seek to replace. This tactic has been used multiple times to dilute trends critical of Mohammed bin Salman (MBS), and became increasingly common and successful after the CIA released a report re-affirming the role of MBS in the murder of Washington Post journalist Jamal Khashoggi.[24] A similar tactic involves misspellings, such as khasxoggi instead of khashoggi.[25] It is thought this tactic exploits the preference of Twitter’s trending algorithm for novelty. (It is also worth noting that activists may make use of misspellings in order to keep a topic trending.)
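Mechanically, a chopped hashtag is just a truncated prefix of a longer hashtag, which makes a first-pass check straightforward. The sketch below illustrates that logic only; matching real trends would also require normalising Arabic orthography and filtering out common words that legitimately trend on their own.

```python
def find_chopped(trending, known_hashtags):
    """Return the known hashtags that a trending tag appears to be a
    truncated ('chopped') version of, e.g. #what_ for #what_did_we_gain."""
    return [h for h in known_hashtags
            if h != trending and h.startswith(trending)]
```

Run against a watchlist of hashtags critical of a government, such a check can surface candidate dilution campaigns for manual review.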

The dozens of instances of chopped hashtags since January 2021 generally share a common trait, one that likens them to some bot networks in certain ways but differentiates them in others. Low centrality measures (how many times an account interacts or communicates with others) have become more useful than account creation date or intra-community interaction. That is to say, most of the accounts, when analysed, were not interacting with anyone, but were using the same suspicious hashtag seemingly of their own independent volition. The likelihood of multiple accounts independently deploying the same misspelled hashtag without some form of co-ordination is low. That many used the same application to tweet created another compounding anomaly. Again, it is not clear who operates these accounts, but there is a wealth of accounts in Saudi Arabia that advertise paid trending services. Some even advertise services to get rid of ‘unpatriotic trends’, although they do not publicise their methods.[26] A BBC investigation has also confirmed that unofficial Saudi-focused paid-for trending services work.[27]
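The low-centrality signature can be approximated without full network analysis by measuring how many participants on a hashtag never interact with anyone. The following is a hypothetical sketch assuming each tweet record lists the accounts it mentions and whether it is a retweet; the field names are illustrative.

```python
def zero_interaction_share(tweets):
    """Share of hashtag participants who never mention, reply to, or
    retweet anyone. Many isolated accounts pushing the same rare
    misspelling suggests co-ordination rather than conversation."""
    interacts = {}
    for t in tweets:
        active = bool(t.get("mentions")) or t.get("is_retweet", False)
        interacts[t["author"]] = interacts.get(t["author"], False) or active
    if not interacts:
        return 0.0
    silent = sum(1 for v in interacts.values() if not v)
    return silent / len(interacts)
```

A very high share of silent accounts inverts the usual suspicion: instead of a cluster retweeting itself, the anomaly is a crowd of accounts that do not talk to anyone at all.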

Perpetrators: Not just state actors

Tracking manipulation, particularly when it involves anonymous accounts and bots, is also compounded by the difficulty of tracking down the perpetrators, who have different techniques of manipulation at their disposal. Where perpetrators have been found, it is clear that a diverse array of manipulators is involved, each of whom could be selling their services or products to multiple clients – state or otherwise. In addition, it would make sense for state actors engaging in deception to outsource projects in ways that remove obvious links to the state, for the purposes of plausible deniability or operational security. In authoritarian regimes, the distance between state-linked accounts and private entities can be nebulous, given the depth of autocratic control over multiple parts of the economy. In Saudi Arabia, for example, Twitter removed at least 88,000 accounts connected to a digital marketing firm called SMAAT. One of SMAAT’s projects, commissioned at 48 hours’ notice, was to cover the Riyadh Summit in five different languages.[28] Such a high-profile event is likely to involve some form of co-ordination with a government entity.

Where Twitter fails to offer clarity as to the provenance of certain manipulation operations, OSINT (open source intelligence) and investigative work can yield results. In one instance, Bill Marczak of Citizen Lab and I managed to track down an Egyptian sock puppet operator by following metadata breadcrumbs. The manipulator eventually admitted to having created thousands of accounts, which he managed through software he had created called Diavolo (devil in Italian). They were mostly used to promote content for the channel Saudi 24 and its sister channels.[29] He later sold those accounts.[30] Similarly, by reverse searching a phone number included in an attempt to sell 5,000 sock puppets to a Saudi-based individual via Twitter, I located a series of instructional videos for sock puppet management software. The accounts run using this software would often include a tell-tale signature: a random string of alphanumeric characters designed (according to the creator) to fool Twitter’s algorithm into finding the tweet unique and therefore not suspicious.[31] Accounts using this distinctive signature have been deployed on a number of networks, including one promoting ISIL propaganda to those tweeting on Saudi domestic politics.[32] Indeed, these random strings can be useful indicators of sock puppet activity. In both cases, there was no apparent limit to the number of accounts that could be operated by the software.
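The random-string signature lends itself to a simple heuristic check. The operator’s actual generator is unknown, so the rule below – a trailing token of eight or more characters mixing letters and digits – is purely an illustrative assumption about what such a signature might look like, not the software’s real output format.

```python
def looks_random(token):
    """Heuristic: a long alphanumeric token mixing letters and digits,
    of the kind appended to make each tweet appear 'unique'."""
    return (len(token) >= 8 and token.isalnum()
            and any(c.isdigit() for c in token)
            and any(c.isalpha() for c in token))

def has_signature(text):
    """Does the tweet end in a token matching the heuristic?"""
    words = text.split()
    return bool(words) and looks_random(words[-1])
```

Such a rule would throw up false positives (product names, flight codes) and would be one compounding indicator among several, never decisive on its own.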

Beyond Bots

Bot has become a catch-all term for a bad-faith interlocutor, regardless of their authenticity. This is not necessarily a bad thing, as people tend to adopt such vocabularies as shorthand for general perceptions of manipulation. However, the ubiquity of the term should not conceal the fact that many different things happen under the label of bot.[33] Deception operations go beyond sock puppets and bots. Co-ordinated tweeting (whether propaganda or disinformation) from a core network of influential accounts that then generates organic activity is also an increasingly common form of manipulating and controlling online discussions. A number of incidents in 2020, including viral rumours of a fake coup d’état in Qatar,[34] and the doxing of Al Jazeera journalist Ghada Oueiss with hacked personal photos, highlight how stories may be planted or orchestrated on social media and then picked up by legacy media. Here influencers, sock puppets and bots generate buzz, with organic accounts and real media picking up the story. Because many of those involved are ‘real’ people, traditional markers of deception may not readily apply. However, such behaviour can still be considered co-ordinated manipulation, even if led by ‘real’ people, with groups of sock puppets and bots playing attendant and complementary roles. Tracking such campaigns requires significant contextual knowledge, ethnographic know-how, and a broad array of digital tools.

Reading between the lines

What can the evolution of methods for detecting deception tell us about Twitter manipulation, and indeed Twitter governance, in MENA? There appears to be an evolution in the general techniques of computational and non-computational propaganda designed to evade detection by Twitter. A striking finding is the relative crudeness of such operations; it would be difficult to call many of the methods particularly sophisticated. That humans can identify such manipulation and call it out, only for it to continue, raises serious questions about Twitter’s will or competence in tackling it. That certain overt forms of crude manipulation can last for years is also a troubling reflection of Twitter’s policy in the MENA region. The fact that Twitter CEO Jack Dorsey has met MBS twice, even after Twitter was compromised by spies acting on behalf of well-connected Saudi entities, has made observers suspicious about whether such tolerance is intentional. Similarly, while Twitter bans political advertising, it recently became apparent that advertising to express loyalty to political figures (including in authoritarian states) is permitted under Twitter’s terms and conditions.[35] Without auditing and transparency, scholars should not place too much trust in social media companies’ ability or intention to remain neutral arbiters of information wars. Certainly, it would be remiss to rely on their data releases as a comprehensive source of manipulated content.

As social media becomes an important source and object of study, detecting social media deception is becoming an increasingly important skill for journalists, academics and analysts. Previous manipulations may have gone undetected if only one form of anomaly detection was applied. The potential lag in identifying deceptive content invites continued scrutiny and re-assessment of past scholarship. Many social media studies might be considered provisional, as the data used for such analysis might later turn out to be corrupted.

Although bots and computational propaganda matter, it is important to see them as one component in a broader tapestry of deception. Trolls, bots and co-opted influencers all form part of a milieu that seeks to crowd oppositional voices out of the information space. Examining these holistically is perhaps more fruitful, even if much more challenging than anomaly detection. Certainly, the extent of deception is enough to warrant a critical re-evaluation of how we approach social media analysis. Indeed, we may be certain of what is false, but we cannot be certain of what is real.

Influence operations are increasingly sophisticated, and locating them should not be reduced to counting the number of bots in a sample. We need more studies of deception (whether by bots or not) and influence operations to determine whether or not social media data can truly reflect authentic public opinion. The overarching approach should be not to assume that social media is organic behaviour sullied by bots, but to assume that no social media behaviour in the realm of MENA politics is organic.



[1] Claire Wardle, “10 questions to ask before covering misinformation”, First Draft, 2017.

[2] Marc Owen Jones, Digital Authoritarianism, Deception, Disinformation and Social Media. (London: Hurst/OUP, 2021) (forthcoming)

[3] See Unver, A. Russian Digital Influence Operations in Turkey (this volume)

[4] Information Operations – Twitter Transparency Center. (2018).

[5] Marc Owen Jones. “Thought Russia was bad? Why Saudi Arabia is the world’s most dangerous cyber bully”. The New Arab. 2020,

[6] Ahmed Shaheen and Benjamin Greenacre, Binary Threat: How Governments’ Cyber Laws and Practice Undermine Human Rights in the MENA Region (this volume)

[7] Billy Perrigo, “Inside Facebook’s Meeting With Palestinian Officials Over Posts Inaccurately Flagged as Incitement to Violence”, Time, 21 May 2021.

[8] Marc Owen Jones, “The Gulf Information War| Propaganda, Fake News, and Fake Trends: The Weaponization of Twitter Bots in the Gulf Crisis”, International Journal of Communication, 13 (2019), 27.

[9] The author communicated with Twitter about several thousand Twitter accounts spreading sectarian hate speech. Twitter suspended around 1,600 accounts, all of which showed the hallmarks of temporally limited account creation.

[10] Marc Owen Jones, “Automated sectarianism and pro-Saudi propaganda on Twitter”. Tactical Tech, (2016)

[11] Marc Owen Jones, “Anatomy of a disinformation campaign: The coup that never was”, Al Jazeera, 19 May 2020.

[12] Marc Owen Jones, Twitter thread, 16 May 2020.

[13] Alexei Abrahams and Andrew Leber, Social Media Manipulation in the MENA: Inauthenticity, Inequality, and Insecurity (this volume)

[14] Emilio Ferrara, Onur Varol, Clayton Davis, Filippo Menczer and Alessandro Flammini, “The Rise of Social Bots”, Communications of the ACM, 59(7) (July 2016), 96-104.


[15] See Jones, “The Gulf Information War| Propaganda, Fake News, and Fake Trends…”

[16] Twitter Comms, Twitter status update, August 2020.


[18] Jones, “Automated sectarianism…”

[19] Jones, Digital Authoritarianism, 2021 (forthcoming)

[20] Andrew Leber and Alexei Abrahams “A Storm of Tweets: Social Media Manipulation During the Gulf Crisis”. Review of Middle East Studies, 53(2), 241-258. doi:10.1017/rms.2019.45

[21] “Facebook appoints Yemeni Nobel laureate Tawakkol Karman as oversight board member”, The New Arab, 2020.

[22] Joel Schectman and Christopher Bing, “Exclusive: UAE used cyber super-weapon to spy on iPhones of foes”, Reuters, 30 January 2019.

[23] Marc Owen Jones, “Profit for Propaganda: Twitter Still Complicit in Whitewashing the Murder of Jamal Khashoggi”, DAWN (Democracy for the Arab World Now), 8 March 2021.

[24] See for example

[25] It is possible that slightly misspelled hashtags are also used by activists to keep a topic trending. This is because Twitter’s algorithm rewards novelty, and hashtags cease to trend as easily once they become commonplace.

[26] Marc Owen Jones, Twitter update, 23 March 2021.

[27] BBC Trending, “How much to fake a trend on Twitter? In one country, about £150”, BBC News, 2 March 2018.

[28] Smaat Co. LLC, website, 2018.

[29] Marc Owen Jones, Twitter thread, 9 October 2019.

[30] Yarno Ritzen, “How armies of fake accounts ‘ruined’ Twitter in the Middle East”, Al Jazeera, 15 July 2019.

[31] Jones, Digital Authoritarianism….

[32] Marc Owen Jones, Twitter status update, November 2019.

[33] Andrew Leber and Alexei Abrahams, “Saudi Twitter blew up with support for the crown prince. How much of it is genuine?”, The Washington Post, 9 March 2021.

[34] Jones, “Anatomy of a disinformation campaign…”

[35] Jones, “Profit for Propaganda…”