
Fake Media Outlets in the Age of AI: Emergence, Impact, and Implications

  • Writer: Raluca Sabau
  • Jun 6
  • 5 min read

Updated: Jun 19


The boom in generative AI has divided people: some hate it, saying that generated images of celebrities or animals evoke the uncanny valley, while others love it and have built careers out of generating images and videos. But the boom in AI has created bigger issues in the realm of journalism, as generative AI deployed by various private and public actors has significantly increased the volume of disinformation, serving as a tool to produce text for fake media outlets that spread false or misleading narratives. In 2023, NewsGuard identified 49 websites that appeared to be mostly or entirely generated by AI language models while presenting themselves as legitimate publishers. The identified sites were published in seven languages and covered topics such as politics, health and finance. However, this only scratches the surface of what AI-driven fake media outlets can do and the damage they can cause. The implications of generative AI for disinformation are profound: it is becoming harder to distinguish what is generated from what is real, yet the same technologies also open up ways to mitigate the damage caused by the spread of disinformation. So how can we identify fake media outlets, and how can we understand how they function and what their purpose is?

 

Firstly, what defines a fake media outlet, and how can we identify one? A fake media outlet can be defined as a platform or website that spreads intentionally misleading information, i.e., fake news, often imitating real news, providing distorted versions of actual events and posing as a legitimate news source (Idiongo, 2024). Fake media outlets are closely associated with misleading information, as the purpose of fake news is to create misleading statements or online disinformation in order to manipulate the public. This is done to attract readers' attention and generate clicks and shares, yielding revenue or ideological gain (Baptista and Gradim, 2022). However, because the Internet is so vast and ever-expanding, it is becoming harder and harder to keep track of which platforms publish factual information and which publish disinformation. One of the main ways fake media outlets generate clicks and views is by relying on sensationalism and emotional appeal. The motivations behind starting a fake media outlet vary, but most are driven by financial incentives such as ad revenue from exaggerated and sensational news: in the lead-up to the 2016 US presidential election, some publishers of hoax political news made between 10,000 and 30,000 USD per month (Braun & Eklund, 2019). A report by the ISD (Institute for Strategic Dialogue) found that a network of Facebook groups and pages based out of Vietnam, which shared pro-Trump and right-wing messages, was earning around 1,800 USD through advertising alone (Robins-Early, 2022). Of course, many outlets have more ideological purposes, such as promoting certain agendas or dismantling political opponents (Farhall, Carson, Wright, Gibbons and Lukamto, 2019).

 

While the existence of fake media outlets is harmful enough, it could be argued that the rise of generative AI technologies has shifted the landscape of disinformation. Fake media outlets can use these technologies to produce convincing images, videos and even people's voices. Furthermore, generative AI technologies are becoming more accessible: some offer a less advanced version for free, while more advanced models require a subscription and limit the number of prompts per day. Examples include Midjourney and DALL-E 3, which both create images, and ElevenLabs, which has a library of voices, including those of real people, and can generate audio of people saying certain phrases or words. For example, Menz et al. (2023) note the implications of AI-generated news concerning public health during the pandemic, which "fostered confusion, panic, and mistrust". Kreps et al. (2020) found that individuals can find it difficult to distinguish between AI- and human-generated news stories, but that partisanship affects how credible a story appears to a given individual. Respondents in that study did note that the AI-generated news articles contained contradictions or grammatical errors, yet most agreed that the generated news content appeared authentic and reliable. Research like Kreps et al. (2020) highlights how easily AI-generated content can be passed off as reliable information and how it can affect certain political groups more than others.

 

The use of bots has also played a part in the spread of disinformation online. Bots are automated social media accounts programmed to interact with content and other users, but they can also be used to spread disinformation (Ferrara, Varol, Davis, Menczer and Flammini, 2016). More importantly, bots have been shown to create spikes of activity around specific topics, which can polarize public opinion (Ferrara, Chang, Chen, Muric & Patel, 2020). A study by Stella et al. (2018) of the 2017 Catalan referendum analysed nearly 4 million Twitter posts generated by 1 million users and showed that bots can shape exposure on social media by "accentuating exposure to negative, hatred-inspiring, inflammatory content, thus exacerbating social conflict online". Lanius et al. (2021) discuss ways to combat bots, such as building detection algorithms and flagging potential bot-generated content, to help limit the spread of disinformation.
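To give a sense of how such detection might work in practice, here is a minimal, hypothetical sketch in Python of rule-based bot flagging. The behavioural signals (posting rate, reply ratio, duplicate content) and the thresholds are illustrative assumptions for this example, not the actual method of Lanius et al. or of any platform; real detection systems combine many more signals, often with machine learning.

```python
from dataclasses import dataclass

@dataclass
class Account:
    name: str
    posts_per_day: float     # average posting rate (assumed signal)
    reply_ratio: float       # fraction of posts that reply to other users
    duplicate_ratio: float   # fraction of posts that are near-duplicates

def bot_score(acc: Account) -> float:
    """Combine simple behavioural signals into a 0-1 suspicion score.
    Weights and cut-offs here are purely illustrative."""
    score = 0.0
    if acc.posts_per_day > 50:     # sustained high-volume posting
        score += 0.4
    if acc.reply_ratio < 0.05:     # broadcasts content but rarely converses
        score += 0.3
    if acc.duplicate_ratio > 0.5:  # repeats the same content over and over
        score += 0.3
    return score

def flag_suspected_bots(accounts: list[Account], threshold: float = 0.6) -> list[str]:
    """Return the names of accounts whose score meets the flagging threshold."""
    return [a.name for a in accounts if bot_score(a) >= threshold]

accounts = [
    Account("newsbot42", posts_per_day=120.0, reply_ratio=0.0, duplicate_ratio=0.8),
    Account("jane_doe", posts_per_day=5.0, reply_ratio=0.4, duplicate_ratio=0.1),
]
print(flag_suspected_bots(accounts))  # → ['newsbot42']
```

Flagged accounts would then typically be reviewed or labelled rather than removed automatically, since simple heuristics like these also misfire on highly active human users.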

 

As discussed in this article, fake media outlets are becoming more sophisticated, using generative AI technologies and bots to increase the volume and spread of disinformation. Creating bots or using AI tools like ChatGPT is incredibly accessible to anyone, and this ease is dangerous when it comes to differentiating between what is real and what is fake on the Internet. However, there are ways to navigate this sea of disinformation, including our Verify section, which provides tools and resources for engaging with the media more carefully. Another way to combat AI-generated content is to use another model, such as Grover: because AI models are familiar with each other's habits and traits, Grover can spot AI-generated news articles with up to 92% accuracy, according to its developers. However, the general public might be wary of using more AI to find out whether content is generated and may be more inclined to use other detection methods.

 

While the current state of affairs in relation to AI seems like something out of a techno-horror movie, there are options to mitigate the damage and help people navigate the Internet better. As mentioned before, our Verify section provides media literacy tools and information on how to spot AI-generated content and fake media outlets. To fully combat the use of bots and AI for disinformation, politicians and international lawmakers would have to pass laws restricting generative AI developers and placing limitations and safeguards on what their technologies can and cannot generate. The European Commission has integrated its voluntary Code of Practice on Disinformation into the Digital Services Act (DSA) to mitigate and identify risks associated with disinformation and to reduce its spread and impact (Jahangir, 2025). Acts like the DSA can hold big tech platforms more accountable, combating disinformation and regulating content without infringing on free expression. Providing tools and information resources to improve media literacy would also benefit the public and help them recognise disinformation.
