
Liar’s Dividend: Framing Truth as False to evade accountability

  • Writer: Andjelija Kedzic
  • Jun 6
  • 2 min read


Generative AI tools, bots, trolls, and citizens who fall prey to disinformation campaigns flood our digital spaces with disinformation, conspiracy theories, and narratives designed to influence your decision-making, confuse you, sow doubt, and cloud your thinking so that you cannot make informed decisions. All of this can amplify your doubt and leave you more skeptical about whether you are encountering reliable information, a deepfake, a meme generated by a troll factory, or a post from a bot.


This trend of citizens becoming increasingly skeptical of the information they find online is emerging in parallel with societies growing more polarized, divided, and doubtful, as trust in democratic institutions, the media, and science declines. This skepticism, which spans social media spaces, gaming environments, and legacy media, and which in many instances may be justified, is often exploited by private and public actors seeking to steer public opinion into alignment with their own incentives and goals.

Within these dynamics, the rise of disinformation and synthetic material online (e.g., fake video and image content) enables politicians and others to dismiss genuine content as misinformation in order to evade accountability, a phenomenon referred to as "misinformation about misinformation" (Schiff et al., 2024). Actors, particularly politicians, may simply lie about the authenticity of real content, thereby amplifying uncertainty. As Northeastern University professor Fallis (2020) writes, it may become "epistemically irresponsible to simply believe that what is depicted in a video actually occurred." The "liar's dividend," then, refers to the advantage actors can gain by spreading false information, or by framing authentic content as inauthentic to evade accountability, capitalizing on an environment in which an overwhelming flood of fake content makes it increasingly difficult to distinguish what is real from what is fake. This phenomenon, coupled with growing public skepticism toward the media, is further intensified by the fact that a significant majority (86%) of online global citizens report having been exposed to misinformation (Ipsos, 2019).

Growing uncertainty about whether content is authentic can be particularly damaging in times of crisis. For instance, recent reporting on the Israel-Hamas conflict highlights how the mere possibility of AI-generated content leads people to dismiss genuine images, videos, and audio as inauthentic (Brennan Center for Justice, 2024). In global crises such as the climate crisis, which demand informed decision-making and a collectively shared understanding of the consequences of inaction, escalating uncertainty about the authenticity of content further obstructs citizens' right to make informed decisions. It also underscores the dangers posed by the unchecked use of AI tools to spread disinformation, flood online spaces, and sow the confusion and doubt that cloud informed decision-making.
