Digital Deception and Climate Security: The Influence of Bots, Generative AI, and Big Tech on Climate Narratives
- Andjelija Kedzic
- Jun 6
- 7 min read
Updated: Jun 19

Rising temperatures, sea levels, extreme weather events, and climate-induced displacement, together with conflicts fueled by food shortages, water scarcity, and resource competition, reflect the environmental degradation impacting our planet and pose significant challenges to global security. Yet while just climate action needs to happen now, our climate discussions are influenced by different actors who seek to reshape and frame the narratives to serve their own incentives.
With nearly a quarter of the global population reporting encounters with false narratives concerning climate change or the environment, it becomes critical to address one of the most prominent tools of automated influence: bots. These hidden agents, weaving through digital spaces, are often deployed by public or private actors with vested interests in sowing distrust in climate science, discrediting energy alternatives, and amplifying climate disinformation, malinformation, and conspiracy theories.
In the simplest definition, as researchers suggest, bots are automated social media accounts that like, comment on, or spread content initially generated by a user, performing repetitive tasks and often posting up to 50 times per day. They do not sleep, have unlimited time, and do not feel, yet they can affect your thought processes, even your feelings, because they can appear legitimate to you as a human user. They are cheap yet powerful tools in the hands of actors who seek to reframe climate discussions in alignment with their incentives. While their influence can seem trivial at first glance, it is anything but: citizens struggle to distinguish between human- and bot-generated text, allowing for influence that is subtle yet significant (Edwards et al., 2014; Everett et al., 2016; Wang et al., 2018).
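To make the repetitive-task pattern concrete, here is a minimal sketch of a naive, frequency-based heuristic of the kind early bot detectors relied on. The threshold echoes the roughly 50-posts-per-day figure above; the account fields, handle, and numbers are purely illustrative, not drawn from any real detection system.

```python
from dataclasses import dataclass

# Illustrative cutoff echoing the ~50-posts-per-day figure above.
POSTS_PER_DAY_THRESHOLD = 50

@dataclass
class Account:
    handle: str
    posts_last_24h: int
    follower_count: int
    following_count: int

def looks_automated(account: Account) -> bool:
    """Naive heuristic: flag accounts that post at bot-like rates
    while following far more accounts than follow them back."""
    high_volume = account.posts_last_24h >= POSTS_PER_DAY_THRESHOLD
    lopsided = account.following_count > 10 * max(account.follower_count, 1)
    return high_volume and lopsided

# Hypothetical account for illustration only.
suspect = Account("eco_truther_8821", posts_last_24h=73,
                  follower_count=12, following_count=950)
print(looks_automated(suspect))  # True
```

Real detection systems weigh dozens of behavioral and network signals; the point here is simply that raw posting volume, which no human sustains around the clock, is the most basic tell.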
The lack of adequate institutional checks at the national and transnational levels, where social media platforms operate through semi-regulated or self-regulated mechanisms, puts citizens' right to informed decision-making at risk, exposing them to coordinated automated influence (e.g., bots) on both local and global levels. This poses critical risks amid a climate crisis that threatens the basic security of citizens worldwide (e.g., forced displacement, food shortages, water scarcity), with climate disinformation emerging as a profound threat.
With their potential to affect opinions and decision-making processes, these virtual efforts can have very real consequences. Private and public actors can create a false consensus around climate denial by deploying an army of hundreds of bots to spread denialist or misleading messages, as bots appear legitimate to real users. This, in turn, can convince you that climate change is not a real threat, or divert your attention away from the actors most responsible for the climate crisis; and as outright denial becomes harder to sustain, these actors increasingly use bots to reframe climate narratives in ways that align with their incentives. Ultimately, this increases the risk that citizens make choices that prioritize fossil fuel interests over their own health, their future, and the well-being of the planet.
Building on these tactics of digital manipulation and the use of bots to spread misleading and false narratives about climate change, the rapid rise of generative AI tools, which create synthetic content nearly indistinguishable from real human output, takes digital climate deception to new levels when fused with bots. These tools make it cheaper, faster, and easier to produce misleading climate narratives targeted at specific audiences. For instance, AI-generated videos can feature fake “experts” denying climate science or mimic reputable public figures doing the same; fabricated news articles can sow doubt about climate policies; AI chatbots can engage users in conversations designed to persuade them to trust the narratives and climate “solutions” promoted by the actors driving the influence campaigns; and AI tools can produce more persuasive messaging for bots to disseminate further.
An important strength of generative AI models lies in their ability to automate varied and personalized messages, which can easily go undetected by current bot-detection tools. This allows small groups to appear much larger online than they are. Studies also show that AI-generated messages can be as persuasive as human ones, and in some cases are perceived as even more persuasive (i.e., more factual and logical), particularly on polarized policy issues, while individuals often fail to distinguish between AI- and human-generated text (Bai et al., 2023; Salvi et al., 2024).
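One reason such campaigns evade detection is that classic countermeasures look for near-duplicate text. The sketch below, with made-up example messages, illustrates the gap: exact copies of a template score as identical under a simple word-overlap measure, while paraphrased variants of the same claim score low and slip past such a filter.

```python
def jaccard(a: str, b: str) -> float:
    """Word-level Jaccard similarity between two messages."""
    sa, sb = set(a.lower().split()), set(b.lower().split())
    return len(sa & sb) / len(sa | sb)

# Copy-paste amplification: trivially caught by a similarity check.
template = "Climate alarmism is a hoax designed to raise your taxes"
exact_copy = template

# Hypothetical LLM-style paraphrases carrying the same false claim.
variants = [
    "The panic over the climate is just a scheme to tax you more",
    "Don't fall for eco-doom headlines, it's all about new taxes",
]

print(jaccard(template, exact_copy))  # 1.0 -> flagged as a duplicate
for v in variants:
    print(jaccard(template, v))       # low scores -> slip past the filter
```

When every bot in a network posts a freshly generated paraphrase, the duplicate signal disappears entirely, which is exactly what makes generative AI such a force multiplier for these operations.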
Big Tech’s Role
Given the growing power of automation-driven climate disinformation, Big Tech platforms play a crucial role amid one of humanity’s most pressing challenges: the climate crisis. Platforms like X (formerly Twitter), Facebook, and others have become agoras where battles for truth and influence play out. Yet while these companies should curb disinformation, the very systems that allow disinformation to spread are also engines of profit. Research suggests that fake news spreads faster than true news, driving a surge in user engagement; increased engagement means more ads are seen, which in turn boosts profits for Big Tech companies. Logical as this chain may be, it raises concerns about whether Big Tech has strong incentives to effectively deter the spread of disinformation, as these platforms remain largely institutionally unchecked.
This concern deepens further when recalling that Big Tech companies such as Meta have profited from ads run by networks of fake accounts. Between July 2018 and April 2022, Meta made at least $30.3 million in ad revenue from networks of fake accounts it later removed for engaging in coordinated inauthentic behavior (CIB). Although these accounts were taken down, Meta never returned the ad revenue they generated, as confirmed by its head of security communications, Margarita Franklin. Likewise, a Climate Action Against Disinformation (CAAD) investigation uncovered that fossil fuel-linked groups spent around $4 million on Facebook and Instagram ads spreading false climate claims around the COP27 summit, while Big Oil companies Shell, Chevron, and Exxon all ran ads in the run-up to the summit.
Despite these dynamics, content moderation remains primarily in the hands of self-regulating Big Tech mechanisms that nurture a culture of market-based solutions, while the rule of law, under which citizens have more options to participate, remains at risk. The EU introduced the Digital Services Act (DSA) in 2023, which mandates compliance for 17 Very Large Online Platforms (VLOPs) such as X, TikTok, Instagram, Facebook, Amazon, and the Apple App Store, as well as two Very Large Online Search Engines (VLOSEs), Google Search and Bing, all of which reach at least 45 million monthly active users; even so, most major online platforms have not yet fully complied. The DSA grants the EU authority to impose fines of up to 6% of a company’s global revenue, with repeated offenses potentially leading to a ban on doing business within the EU. Meanwhile, the U.S. Senate Committee on Finance criticized the DSA, calling the EU’s regulatory focus on American companies “discriminatory” and urging President Biden to “fight” against EU digital trade policies, clearly overlooking the fact that all major tech companies are US-based.
For the time being, until Big Tech faces proper regulation worldwide, our digital arenas seem poised to keep showcasing how private entities can operate across borders, with the potential to influence the very political realms within our societies, all while remaining largely unregulated by institutions. These dynamics further increase the vulnerability of citizens' rights, obstructing their ability to make informed decisions as various actors spread misleading climate information and disinformation. A case in point is the influence of bots in shaping public opinion on climate issues related to COP28, COP29, and the Paris Agreement.
The Influence of Bots in Climate Discourses: COP28 & COP29
The influence of bots in shaping public opinion on climate issues has been well documented. In 2017, research revealed that bots accounted for 9% to 15% of active Twitter (now X) users. By 2020, their influence had grown further: analysis by Brown University found that 25% of all tweets about the climate crisis posted around the time Donald Trump announced the U.S. withdrawal from the Paris Agreement were generated by bots. These bots overwhelmingly supported anti-climate policies, spreading disinformation and cheering Trump’s decision. Bot activity was especially high on topics such as “fake science” (38%) and discussions of petroleum giant Exxon (28%). Similarly, investigations during COP28 and COP29 exposed coordinated bot networks amplifying pro-petrostate narratives and suppressing criticism.
In 2024, a Global Witness investigation identified 71 suspicious, coordinated accounts on X (formerly Twitter). Most were created after May 2024 and featured similar nature-related banners or profile images (e.g., flowers or trees), with some even using identical images. These accounts worked to promote Azerbaijan’s hosting of the COP29 climate summit while suppressing criticism of the country’s poor climate record and human rights abuses. Over half of their September posts used the hashtags #COP29 or #COP29Azerbaijan, and 70% of their retweets amplified official Azerbaijan COP29 or government-related content. Another 111 accounts posted similar narratives but lacked the nature-themed imagery.
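The indicators in such investigations (recent creation dates, heavy campaign-hashtag use, retweets concentrated on official accounts) translate naturally into simple per-account signals. Below is a minimal sketch of that idea; the field names, handles, and thresholds are hypothetical stand-ins loosely echoing the reported figures, not the investigators' actual methodology.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class Profile:
    created: date
    posts: list[str]            # post texts
    retweet_sources: list[str]  # handles this account retweeted

CAMPAIGN_TAGS = ("#COP29", "#COP29Azerbaijan")
OFFICIAL_HANDLES = {"COP29_official", "AzerbaijanGov"}  # hypothetical handles

def looks_coordinated(p: Profile) -> bool:
    """Flag an account when all three report-style indicators align:
    recent creation, majority campaign-hashtag posts, and retweets
    mostly amplifying official accounts. Thresholds are illustrative."""
    tagged = sum(any(t in post for t in CAMPAIGN_TAGS) for post in p.posts)
    official = sum(src in OFFICIAL_HANDLES for src in p.retweet_sources)
    return (
        p.created >= date(2024, 5, 1)                          # created after May
        and tagged / max(len(p.posts), 1) > 0.5                # >50% tagged posts
        and official / max(len(p.retweet_sources), 1) >= 0.7   # ~70% official RTs
    )

# Hypothetical account matching the report's profile.
suspect = Profile(
    created=date(2024, 6, 10),
    posts=["Baku is ready! #COP29"] * 6 + ["Lovely trees today"] * 2,
    retweet_sources=["COP29_official"] * 7 + ["random_user"] * 3,
)
print(looks_coordinated(suspect))  # True
```

No single signal is conclusive on its own; it is the convergence of several of them across dozens of accounts that points to coordination.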
Similar bot activity was observed during COP28 in 2023, hosted by another petrostate, the UAE. Marc Owen Jones, author of Digital Authoritarianism in the Middle East, uncovered a network of 100 fake accounts and 30,000 tweets that aimed to defend the UAE’s hosting of COP28 and the appointment of UAE oil company CEO Sultan al-Jaber as its president, per CBC. These accounts displayed enhanced sophistication by tagging legitimate, credible, and well-respected organizations, particularly humanitarian ones such as Amnesty International and Human Rights Watch, in their social media bios to create an impression of legitimacy for real human users.
These examples reflect how bots can amplify the interests of fossil fuel entities, subtly instilling a false public consensus around certain narratives, distorting public perception, and potentially obstructing meaningful climate action. By shaping public perception worldwide and polluting information spaces, social media bots make it increasingly difficult for human users to differentiate between credible sources, malinformation, and disinformation.
Promoting Digital and Media Literacy
Given that many citizens struggle to identify bots and often lack the context or understanding of how extensively they are used by various public and private actors, the first step in combating climate disinformation is to equip yourself with digital and media literacy tools and learn about the documented use of bots in these efforts. From there, policymakers must advocate for and demand greater transparency from platforms, while holding them accountable for illegal practices. Banning social media platforms, protectionist measures, and quick fixes should be avoided, as they could amount to direct censorship. Given the global interconnectivity that social media platforms offer, there is a need to foster global cooperation against the misuse of AI and bot networks, to protect the integrity of public discourse around climate issues, which are, in essence, global issues that call for a globally shared understanding and a shared vision for a way forward.