Microtargeting: Disabling public consensus
- Andjelija Kedzic
- May 25
The harvesting of personal data, the traces of the digital footprints we leave behind when using online services, has emerged not only as a prominent advertising and marketing strategy but also as a tool for strategic and political influence. By collecting citizens' personal data and sorting it into clusters of distinct audiences (a process known as datafication), microtargeting enables highly personalized advertising, with messages crafted so precisely that they may feel like whispers meant only for us.
Microtargeting allows different actors to deliver different content and messaging to different groups, which in turn can make it harder to sustain a common discussion and reach consensus on certain issues within society. This approach not only carries profound data privacy implications but also directly affects societal discourse and even democracy itself.
The practice of microtargeting operates in three key stages: data collection, profiling, and personalization. The process begins with obtaining vast amounts of personal data via a web of hidden trackers, social media platforms, voter registries, and even data brokers (also known as information brokers: entities that collect personal data and sell it to third parties). Once obtained, this data undergoes profiling, a process of segmentation in which individuals are clustered into niche groups based on traits, backgrounds, and interests. The process is finalized with the creation of personalized, tailored messaging.
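The three stages above can be sketched in miniature. Everything here is invented for illustration: the profiles, trait names, and messages are hypothetical, and real systems operate on thousands of behavioral attributes rather than a handful of labels.

```python
from collections import defaultdict

# Stage 1 (data collection): hypothetical user profiles assembled from tracking data.
profiles = [
    {"id": 1, "age_band": "18-24", "interest": "climate", "region": "urban"},
    {"id": 2, "age_band": "18-24", "interest": "climate", "region": "rural"},
    {"id": 3, "age_band": "45-60", "interest": "economy", "region": "rural"},
    {"id": 4, "age_band": "45-60", "interest": "economy", "region": "rural"},
]

def segment(profiles, keys):
    """Stage 2 (profiling): cluster profiles into audience segments by trait keys."""
    segments = defaultdict(list)
    for p in profiles:
        segments[tuple(p[k] for k in keys)].append(p["id"])
    return dict(segments)

# Stage 3 (personalization): one tailored message per segment (invented copy).
messages = {
    ("18-24", "climate"): "Young voters: demand climate action now.",
    ("45-60", "economy"): "Protect your savings: vote for stability.",
}

audiences = segment(profiles, ["age_band", "interest"])
for traits, ids in audiences.items():
    print(traits, ids, "->", messages.get(traits, "generic message"))
```

The point of the sketch is the asymmetry it creates: each segment sees only its own message and never the ones shown to other groups.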
The ongoing romance of Microtargeting and Surveillance Capitalism
We live in the age of what Shoshana Zuboff calls "surveillance capitalism": the widespread collection and commodification of personal data by corporations (e.g., Google, Meta, X, and data brokers) for profit. Microtargeting relies on an even wider hidden data ecosystem, consisting of hundreds of companies that most of us have never heard of or interacted with. Targeted advertising matters because it is more effective at influencing citizens' opinions and decision-making. An MIT study found that tailoring political ads to a single attribute of their intended audience (for instance, party affiliation) can be 70% more effective at swaying policy support than showing one advertisement to everyone. These findings indicate that microtargeting could certainly be a dream tool for influencing public opinion.
Microtargeting was, and still is, enabled by legally tolerated "surveillance capitalism." Surveillance capitalism emerged in the 2000s, when Google pioneered an advertising business model that grew its revenue by 3,590% and, given its profitability, led to the wide adoption of the "surveillance dividend" across the U.S. market. The model is characterized by the corporate-driven commodification of free human experience: the "digital exhaust" we leave behind when interacting with digital services, data not needed to deliver the service itself, is translated into profitable products. This broad expansion of surveillance capitalism went largely unchallenged at the institutional level. Although the Federal Trade Commission (FTC) issued data privacy recommendations in the 2000s following Google's advertising breakthrough, Zuboff questions whether surveillance capitalism would have been legal at all had the FTC pursued its originally proposed policy actions.
Zuboff argues that the failure of U.S. institutions to act stemmed from the heightened securitization and protectionist measures that followed the 9/11 attacks, recalling the words of Peter Swire (formerly Chief Counselor for Privacy in the Clinton administration) that Congress at the time "lost interest in regulating information usage in the private sector" as the focus shifted to "security than on privacy." Today, under both the Biden and Trump administrations, the U.S. continues to fail to regulate and protect its citizens' data privacy rights, having swapped the security justification for an innovation one. It still lacks a federal data privacy law comparable to the EU's GDPR, while AI developers are allowed to access and use a wide range of data to train their models without sufficient protections for citizens.
At the core of this all-seeing ecosystem lie trackers, which follow the digital footprints we endlessly leave online. One well-known culprit is the cookie. Another method, called fingerprinting, does not rely on cookies at all: it collects details about your device and browser (e.g., what kind of computer you are using, your screen size, your browser version) in order to recognize you. And no, the data points collected about you and marketed as anonymized are not particularly anonymous. The illusion of anonymity was shattered by researchers from Imperial College London and the Université Catholique de Louvain in Belgium, who reported in the journal Nature Communications that they had devised a computer algorithm capable of identifying 99.98 percent of Americans in almost any available data set using as few as 15 data points, such as gender, ZIP code, or marital status.
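A minimal sketch of the fingerprinting idea, with invented attribute names and values: a handful of device and browser details are combined and hashed into an identifier that can be re-derived on every visit, so no cookie ever needs to be stored. Real fingerprinting scripts read these attributes from browser APIs and use many more signals.

```python
import hashlib

def fingerprint(attrs: dict) -> str:
    """Combine device/browser attributes into a stable identifier.
    Nothing is stored on the device: the ID is recomputed each visit."""
    canonical = "|".join(f"{k}={attrs[k]}" for k in sorted(attrs))
    return hashlib.sha256(canonical.encode()).hexdigest()[:16]

# Hypothetical visitor attributes of the kind trackers collect.
visitor = {
    "user_agent": "Mozilla/5.0 ... Chrome/124.0",
    "screen": "2560x1440",
    "timezone": "Europe/Belgrade",
    "language": "en-US",
    "fonts": "Arial,Calibri,Helvetica",
}
print(fingerprint(visitor))  # same attributes -> same ID on every visit
```

The combination is what identifies you: each attribute alone is common, but together they are often unique, which is the same logic behind the 15-data-point re-identification result above.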
Fragmenting the discourse
By hyper-personalizing messages to sway opinions and segmenting audiences into echo chambers, microtargeting makes common discourse ever harder to sustain, at a moment when we desperately need public consensus on issues such as climate change. The consequences could be significant: different groups receive tailored, divisive narratives, fostering polarized realities that make consensus on climate change increasingly difficult to achieve.
How to evade ever-present observers?
To evade these ever-present silent observers and reduce your chances of being affected by microtargeted advertising, here are effective ways to minimize online data tracking:
Clear Cookies Regularly: Delete browsing data to minimize trackers.
Use Privacy-Oriented Browsers: Opt for platforms like DuckDuckGo or Brave.
Install Ad Blockers: Tools like uBlock Origin and NordVPN Threat Protection can thwart invasive ads.
Use VPNs: Mask your online footprint by routing internet traffic through virtual private networks.
Adjust Social Media Settings: Limit data-sharing permissions on platforms like Facebook, which collects everything from deleted messages to device metadata.
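The ad-blocking step above boils down to a simple mechanism: outgoing requests are matched against a filter list before they leave the browser. This sketch uses invented domains and a deliberately tiny rule set; real blockers such as uBlock Origin use a far richer filter syntax.

```python
from urllib.parse import urlparse

# Hypothetical blocklist of tracking domains (real lists contain thousands).
BLOCKLIST = {"tracker.example", "ads.example.net"}

def allow(url: str) -> bool:
    """Permit a request unless its host is a blocked domain or a subdomain of one."""
    host = urlparse(url).hostname or ""
    return not any(host == d or host.endswith("." + d) for d in BLOCKLIST)

print(allow("https://news.example.org/article"))   # True: not on the list
print(allow("https://ads.example.net/pixel.gif"))  # False: blocked tracker
```

Blocking the request entirely, rather than merely hiding the ad, is what stops the tracker from ever receiving your data.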
To understand what social media platforms actually collect about you, here is an example of the type of data Facebook collects:
Messages: Messages you send and receive (even when you delete them)
Metadata: Metadata about content and messages. (Metadata provides context, with details such as source, type, and owner; for example, a digital image's metadata includes its resolution, size, color depth, and time of creation.)
Types of content: What types of content, apps, and features you view or interact with
Camera roll content: Content you provide through Meta’s camera feature or your camera roll
Your location
Interaction: Information about the people, accounts, hashtags, and Facebook groups and Pages you are connected to and how you interact with them across Meta’s products
Synced information: Information that you choose to upload, sync, or import from a device (e.g., pictures, videos, address book, call log, or SMS log history)
Device information: device attributes (operating system, hardware and software versions, battery level, signal strength, available storage space, browser type, app and file names and types, plugins); device operations (behaviors on the device, such as whether a window is foregrounded or backgrounded, or mouse movements, which can help distinguish humans from bots); identifiers (unique identifiers, IP address, connection speed, information about nearby Wi-Fi access points, beacons, and cell towers, and in some cases information about other devices nearby or on your network)
Data collection extends far beyond social media: smartphones track app usage, GPS locations, and ad interactions; wearables monitor heart rate and sleep; and smart home devices record voices, behaviors, and preferences. This pervasive surveillance highlights the urgent need for transparency and accountability in data practices, especially when that data can be used to influence public opinion.