
Prebunking, Cos Debunking Is Not Enough Lying

Given all the talk—particularly in the past few years—regarding “misinformation”, “debunking” and AI, this may be an appropriate time to mention “prebunking”. It is not a new concept, and it is more or less what the term suggests.


A Practical Guide to Prebunking Misinformation

Google, not surprisingly, has a webpage dedicated to it. It is a collaboration between the University of Cambridge, BBC Media Action and Jigsaw (Google). There is even a 36-page guide, “A Practical Guide to Prebunking Misinformation”, first published in 2022.


The so-called guide is basically one of those self-indulgent, corporate-styled documents. The text is organized into two parts:

  • Part 1: “Why Prebunking”

  • Part 2: “How to Prebunk”


The motivation is to combat “misinformation” (whatever that is) because debunking apparently doesn’t always work.

…fact checks are challenging as they have not historically received much engagement: research on over 50,000 debunking posts on Facebook found that very few audiences exposed to misinformation actually interacted with fact-checking posts.

I don’t know the stats, but I wonder how many fact-checks are actually correct. Perhaps that has something to do with their lack of effectiveness, although that is admittedly conjecture on my part. Anyway, this is where prebunking comes in.

Pre-emptive approaches occur before people are exposed to misinformation and are commonly referred to as pre-emptive debunking or “prebunking.” While there are many different types of prebunking interventions, they are often based on inoculation theory. Prebunking messages build mental defenses for misinformation by providing a warning and counterarguments before people encounter it.

Note that “misinformation” is considered a disease and people can be infected with it.


As already mentioned, this idea is not new. This “inoculation theory” was developed in the 1960s by William McGuire, an American social psychologist. Whilst one does not want to be hypersensitive to analogies, sticking to the “inoculation” image in the context of today’s world is a little suspect.


The guide lists two parts of “inoculation”:

  1. Forewarning.

  2. Preemptive refutation—the exposure to predicted misinformation and its counter-arguments in advance.


There are two forms of prebunking:

  1. Misinformation narratives—addressing the information.

  2. Misinformation techniques—addressing the communication techniques rather than the information.


Regarding techniques, seven are listed with examples: impersonation, emotional manipulation, polarization, conspiratorial ideation, ad hominem attack, false dichotomy and false balance.


These techniques are commonly seen, but some of the examples used in this section (and the whole document) are conveniently mainstream. Reproduced below from this section are the examples for “impersonation” and “conspiratorial ideation”. I pick these two because they are the most blatant.


[Reproduced examples from the guide: “Impersonation” and “Conspiratorial ideation”]

How convenient that one is on climate change and the other is on vaccines. It is common for vaccines and the plandemic to be used as examples in this document.


And by the way, notice that I just used the word “plandemic”, which is simplistic labelling, a common technique known as “slanting” that this guide conveniently does not explicitly mention or explain. The guide also uses “conspiracy theories” or “conspiratorial” in the same way, as if a conspiracy theory were automatically wrong.


There are two general formats of prebunking:

  1. Passive—these include short and simple messages which are not interactive.

  2. Active—these include interactive games.


Whilst it does not go into detail, it does mention as examples the UNESCO infographics and the game Go Viral!, both of which address, again, the COVID pandemic.


Regarding prebunking messages, these include short animated videos, typically about two minutes each. Not surprisingly, the guide mentions the need for “shorter length digital content (e.g. 30 seconds or less)” on platforms such as TikTok. No doubt this is to engage more individuals, particularly those who are glued to their phones and/or have a shorter attention span.


Not that this document would discuss it, but the increasing use of AI is an obvious concern for both passive and active prebunking, in terms of content generation as well as tracking online activity. So far, this assumes open platforms such as social media. But what about “closed messaging platforms” such as WhatsApp?


Don’t worry, the authors have thought about it.

When the technology is specifically designed to be private, it is inherently difficult to understand trends and habits.

That’s the point, isn’t it?

It would be valuable to test what types of prebunking content best engage users of closed chat apps, what formats they may choose to share with others (to multiply the impact of the intervention), and what effect this has on the impact and spread of misinformation in closed messaging spaces (e.g. can inoculation theory content reduce user belief in misleading or false information shared by friends or family, or reduce the likelihood of users sharing such content with their own contacts?).

Right. Well, it is not a surprise to see the increasing number of “channels” available on WhatsApp. And one has to wonder to what extent our messages are truly private.


To finish part 1, the BBC also wants to get into long-form content such as TV and radio dramas or reality shows.

BBC Media Action’s experience has demonstrated that storytelling formats can be very useful in raising sensitive issues in a non-confrontational way, which is critical in societies where key power holders may be directly contributing to the spread of misinformation. However, to date, there has been no attempt to integrate prebunking approaches into such content.

It may be true that no contemporary research on applying this format specifically to prebunking has been done, but stories and parables (including epic poems) have been told for a few thousand years. It’s nice to know today’s BBC has caught up.


Part 2 outlines the method. Although the presentation looks very pretty, the content is in essence a repeat of part 1 in an instructional format, so I will limit myself to a few comments.

The guide does emphasize the use of experts and the like.

The information space is oversaturated with advice and disputes over accuracy. Before embarking on prebunking, ensure that you have the necessary and sufficient expertise to credibly address the misinformation in question. If needed, partnering with respected experts, scholars, and authoritative bodies can be a great way to demonstrate expertise.

Well, quoting from or even partnering with experts is very sensible… but how convenient that it does not mention providing a balanced view, that is, having a sufficient number of experts who represent the spectrum of views. To be consistent with the rest of the guide, this would be a good place to use the plandemic as an example: there are instances when only “experts” who push one side of the argument are used. How convenient that this example is missing.


Section 2.3, about measuring the success of a given prebunking effort, is practically pointless. First, one has to define “key metrics”, and the guide provides some general examples such as “Consumption of misinformation (e.g. time spent on misinformation sources)”. These are fine in principle, but then there is the issue of data collection.


The guide admits that the only way to collect such data is self-reported surveys (unless one resorts to invasive means). It also admits that data is difficult to obtain and that, without a control group, there is no real comparison.


The other problem, not mentioned, is that behavior in everyday life, as opposed to a controlled environment, is more than “metrics”. I am not denying that such metrics can be defined and quantified in a measurable way, but that view of human behavior is reductionist. Related to this, the guide does admit that any measured changes may not be due to one’s prebunking effort.


Ultimately, and this is perhaps stating the obvious, the document is mostly about method. There is no mention of “objective truth” or “truth”. There is one instance of “true and false information”, whereas “facts” is used mostly in the context of fact-checking.


Reading the guide in isolation, the content isn’t totally wrong. Methods of communication and argument are just that, methods. There is nothing wrong with learning them.


However, it is not just what one does but why. Given recent and current events, there is enough in the guide to betray its intention to merely maintain the mainstream narrative. Seriously, can Google be trusted? Why should they care what and how the rest of us think? They obviously have way too much time on their hands. This is one step away from the combination of “thought crime” and “pre-crime”.


If one truly wants to learn about debunking or prebunking or arguing, then one is better off studying a good Socratic logic textbook. I recommend one by Peter Kreeft.

 

