
Incorrect Knowledge or Miseducated Guesses?

Evaluating Survey Measures of Political Misperceptions

Matthew H. Graham

July 15, 2020

PRELIMINARY DRAFT

Are survey respondents who provide the incorrect answer to factual questions misinformed? Across a wide range of political topics, I show that the average respondent who answers incorrectly is substantially less certain about their answer than the average respondent who answers correctly. “Don’t know” response options do little to close this gap. Even among those who claim to be absolutely certain, a latent belief difference exists. Using panel data and a revealed preference measure, I show that when beliefs are elicited a second time, the average respondent who initially claims to be absolutely certain of the correct answer often indicates 90 percent certainty on a 50 to 100 scale. For those who claim to be certain of the incorrect answer, the second measure rarely tops 75 percent certainty. Claims to be certain of the wrong answer to factual questions about politicized controversies are no more robust than claims to be certain that the incorrect party controls the U.S. House of Representatives. Incorrect answers represent a mix of blind and “miseducated” guesses, but even those who claim to be certain of the wrong answer are not generally misinformed about the facts.


Political misinformation is on the rise. Political leaders appear to feel increasingly untethered to any

standard of truth, and malicious actors frequently promote outright false information, especially in online

media. For research in this area, a basic descriptive question concerns the prevalence of misperceptions

or the frequency with which members of the public are misinformed. Researchers frequently use survey

data to examine this question, but rarely attempt to distinguish respondents who have been misinformed

from those who hazard an uncertain guess. For the minority of studies that draw such a distinction

(e.g., Kuklinski et al. 2000; Pasek, Sood and Krosnick 2015), survey methodological research provides

little insight as to whether claims to be absolutely certain about the wrong answer represent “incorrect

knowledge” (Hochschild and Einstein 2015, 10), “miseducated guesses,” or mental coin flips.

This paper evaluates the degree to which respondents who offer incorrect answers to factual ques-

tions about politics are misinformed. The evaluation begins by providing new evidence on the predomi-

nant analytic posture in research on the prevalence of political misperceptions, which does not distinguish

highly certain beliefs from educated guesses or random responding. The results show that the typical

incorrect response is stated with substantially less confidence than the typical correct response, and is

closer to a coin flip than to incorrect knowledge. This is true even in categories in which misinformation

and misperceptions are widely expected — politicized controversies (e.g., Barack Obama’s birthplace),

social and economic conditions (e.g., the unemployment rate), and the government’s fiscal situation

(e.g., foreign aid spending). Neither splitting the results by party nor providing a “don’t know” option

meaningfully alters these results.

In light of concerns that “respondents frequently answer factual questions even when they do not

know the answers,” researchers on misperceptions and misinformation have shown increasing interest in

“distinguishing the genuinely misinformed from the guessing uninformed” by “ask[ing] about people’s

confidence in their answers to factual questions” (Kuklinski et al. 2000, 793). To check that certainty

scales capture meaningful variation in certainty, the paper presents a three-part validation exercise that

shows that certainty scales reliably predict accurate responses, response stability, and a measure that

reveals respondents’ probabilistic beliefs through costly choices.

Despite the general promise of certainty scales, the validation exercise also highlights a previously

unrecognized feature: they are much better at identifying respondents who are informed than they are

at identifying respondents who have been misinformed. When beliefs are measured a second time, the

average respondent who claimed to be absolutely certain of the correct answer often assigns a probability

of 0.90 or more to their original choice (on a 0.5 to 1 scale). But for the average respondent who claimed

to be absolutely certain of the incorrect answer, this figure rarely rises above 0.75. The average respondent

who claims to be certain that the wrong party controls the House of Representatives displays a stronger

tendency to stick with their original response than the average respondent who claims to be certain that

Barack Obama has not released his birth certificate. Even those who claim to be certain of incorrect


answers are largely making spontaneous, miseducated guesses, not revealing incorrect knowledge.

These results highlight a fundamental gap between misinformation in the information environment

and research that classifies respondents who answer incorrectly as “misinformed.” Anecdotal evidence

makes clear that some people genuinely believe outright false, apparently-factual claims. Yet as a tool

for generalizing from these anecdotes, surveys are a remarkably unreliable instrument, inflating both the

apparent prevalence of misperceptions and the apparent depth at which respondents hold them. These

inconvenient features of survey technology have broad implications not just for research that attempts

to document the degree to which the public has been misinformed, but for other research that leans on

strong interpretations of incorrect answers, including research on the predictors of misperceptions and

conspiracy beliefs, efforts to correct pre-existing misperceptions, and studies of the downstream effects

of corrective information on attitudes.

Survey Measures of Misinformation

Survey-based studies of political misinformation often treat all correct and incorrect answers to

survey questions as evidence that the respondent has been informed or misinformed (e.g., Delli Carpini

and Keeter 1996; Hochschild and Einstein 2015). A critique of this approach holds that it fails to

distinguish the misinformed from the uninformed. This perspective originates with Kuklinski et al.

(1998, 2000) and Hofstetter et al. (1999), who wrote at a time when misinformation was only beginning

to be recognized as a problem in American politics. To distinguish the “guessing uninformed” (who “do

not hold factual beliefs at all”) from the “genuinely misinformed” (who “firmly hold beliefs that happen

to be wrong”), Kuklinski and colleagues recommend asking respondents how certain they are about their

answers (Kuklinski et al. 2000, 793). As concern over political misperceptions has grown, scholars have

increasingly taken interest in respondents’ confidence in their answers to factual questions (e.g., Flynn

2016; Graham 2020a; Jerit et al. 2020; Lee and Matsuo 2018; Marietta and Barker 2019; Pasek et al.

2015; Peterson and Iyengar 2020). Luskin et al. (2018) call methods that focus on those who claim to

be certain a “24 carat gold standard” for identifying misinformed respondents.

In what sense have respondents who claim to be confident about the wrong answer been misin-

formed? One possibility is that they have been misinformed about the facts themselves. This is consistent

with the bulk of research on misinformation in the information environment, which defines misinforma-

tion in terms of outright false, purportedly-factual claims and showcases such claims in its motivating

examples (e.g., Lewandowsky et al. 2005, 2012; Thorson 2015a; Swire et al. 2017).1 Consistent with

research on misinformation in the information environment, respondents who are certain and wrong are

said to “firmly hold the wrong information” (Kuklinski et al. 2000, 792) or “hold a false or unsupported

belief about the answer” (Flynn et al. 2017, 127). At face value, these accounts — as well as the respon-

1These definitions often encompass claims that are presented as true, but later turn out to be false.


dents’ own claims to be absolutely certain about their answer — would seem to indicate that certainty

scales can bridge the fundamental divide in survey research between respondents’ real-world beliefs and

their answers to survey questions. Under this interpretation, respondents who answer incorrectly “hold

misinformation” as a sort of “incorrect knowledge” (Hochschild and Einstein 2015, 10-11) that they later

reveal in response to a survey question.

Notwithstanding the substantial degree of focus on outright false claims, authoritative definitions

of misinformation also encompass information that is misleading (e.g., Lazer et al. 2018). Accounts

that describe respondents who claim to be certain about incorrect answers as “misinformed” leave a

role for “inferential reasoning” (Hofstetter et al. 1999, 353), “mistaken inferences” (Flynn et al. 2017,

128) and “translat[ing] ... general notions into more specific ones” (Kuklinski et al. 2000, 795). This

raises the possibility that respondents who claim to be certain of the incorrect answer may be making

“miseducated guesses” that are not actually a product of false information about the fact in question. No

prior research provides an evidence-based evaluation of the difference between these two interpretations,

either for incorrect answers in general or for incorrect answers about which the respondent claims to be

certain.

How could mistaken inferential reasoning lead respondents to claim absolute certainty about factual

beliefs that they did not hold before entering the survey environment? It is widely recognized that

surveys do not generally elicit “decided,” “fixed,” or “true” beliefs that existed before the survey (Flynn,

Nyhan and Reifler 2017, 140; Bullock and Lenz 2019, 326; Berinsky 2017, 317). Instead, surveys induce

respondents to construct beliefs on the spot by “sampling” their perceptions and integrating them into

a summary judgment (Nadeau and Niemi 1995; Tourangeau et al. 2000; Strack and Martin 1987). A

little-recognized implication of such accounts is that the process of “sampling” considerations can affect

the extent to which respondents appear to be certain about their answers. Zaller and Feldman (1992)

describe such a case:

Consider respondent A. His first reaction to a guaranteed standard of living was that it was inconsistent with American ideals; he was also bothered by the unfairness of supporting those who refuse to work. Yet he worried about letting individuals get ahead on their own, saying that some people need special help and that society has an obligation to help the needy. In the second interview, however, there was no sign of this ambivalence. Respondent A gave six reasons why individuals ought to get ahead on their own, including a restatement of his feeling that job guarantees are un-American, without raising any opposing considerations. Respondent A ... went from being an ambivalent conservative on this issue to being a confident conservative. (594-596)

A similar phenomenon may affect the extent to which respondents appear certain of the wrong

answers to factual questions. Alongside a set of questions concerning matters on which researchers expect

partisan-biased misperceptions, Graham (2020a) measures respondents’ confidence in their answers to

general knowledge questions of the type that have traditionally been used to measure political awareness.

Despite the face implausibility that many respondents would be exposed to outright false information

about such facts, about one in every ten respondents selected an incorrect answer and indicated that they


were “very” or “absolutely” certain about it. Graham (2020a) attributes this to respondents falling into

“traps” set by the response options, e.g., “Nancy Pelosi as the Senate Minority Leader (instead of Chuck

Schumer)” and “the filibuster as the Senate procedure to make budget changes via a simple majority

(instead of reconciliation)” (318). Even as this provides some evidence that claims to be certain can reflect

mistaken inferences, this interpretation ultimately relies on the face implausibility that respondents are

misinformed about these facts. It could be that in other cases, respondents hold false beliefs about the

facts themselves. Examining questions about facts on more contested issues, e.g. the economy, social

conditions like health care and gun violence, and the fight against international terrorism, Flynn (2016)

and Graham (2020a) each find that claims to be certain of the incorrect answer are more prevalent.

The lack of concrete evidence as to whether respondents who answer incorrectly have been mis-

informed leaves a potentially fundamental gap between research on misinformation itself and research

that identifies members of the public as having been misinformed. Accordingly, this paper tackles three

under-studied empirical questions. First, do certainty scales actually measure certainty? Second, does

the average incorrect answer reflect misperceptions or misinformation to the same degree that correct

answers reflect accurate perceptions or knowledge? Today, assumptions about whether respondents who

answer incorrectly are ignorant or misinformed are largely made ad hoc. Third, how successful is the

approach of focusing on respondents who claim to be certain of the incorrect answer? To the extent it

succeeds, does it identify respondents who actually feel they possess knowledge, or does it simply identify

respondents who are a bit more confident in their miseducated guesses?

All of these questions are distinct from another, more-studied sense in which respondents may not

believe their answer: that respondents may choose a response other than their sincere best guess as a way

of expressing partisan sentiments (Berinsky 2018; Bullock et al. 2015; Prior et al. 2015). Evidence that

respondents are, or are not, sincerely reporting their best guess does not provide any direct information

as to their level of certainty.2

Research Design

To examine these questions, I draw upon nine online surveys conducted between 2017 and 2020.

Respondents were recruited through Amazon Mechanical Turk (MTurk), Lucid, and YouGov. Each

survey included a series of factual questions about politics. Questions were drawn from five categories:

• General knowledge. These questions cover facts that have traditionally been used to measure political awareness, including knowledge about institutional rules (e.g., the length of a Senate term), individual officeholders (e.g., what is John Roberts’ job), and party control of political institutions (e.g., which party controls the U.S. House of Representatives). As no extant scholarship argues in favor of misperceptions or misinformation about these topics, they serve as a useful benchmark.

2Although it is sometimes speculated that expressive responding is concentrated among respondents who are uncertain (e.g., Bullock et al. 2015; Bullock and Lenz 2019), the results in Graham (2020b) and Peterson and Iyengar (2020) cast doubt on this conjecture, at least in its extreme form.


Table 1: Survey information.

Date        N      Vendor   G   E   S   F   C   Response options   Certainty scale   Follow-up measure   Anti-cheating strategy
Sep. 2017   898    Lucid    8   8   6   2   -   3                  Subjective        None                Pledge
May 2018    619    MTurk    -   -   -   -   1   2                  Probabilistic     None                Pledge, catch
Nov. 2018   988    YouGov   -   6   5   2   1   3                  Subjective        None                Pledge
June 2019   1244   MTurk    -   3   1   -   4   2                  Probabilistic     Stability           Pledge, catch
July 2019   2438   Lucid    9   3   1   -   4   2                  Probabilistic     None                Pledge
Oct. 2019   631    MTurk    3   -   -   1   -   2                  Probabilistic     Stability           Pledge, detect
Dec. 2019   4125   Lucid    -   4   1   1   1   2                  Probabilistic     None                Pledge, detect
Feb. 2020   532    Lucid    4   4   1   2   -   2                  Probabilistic     Revealed            Pledge, detect
Mar. 2020   939    MTurk    4   3   -   1   4   2                  Probabilistic     Revealed            Pledge, detect
Total       12414           28  31  15  9   15

Question counts by category: G = general knowledge, E = economic conditions, S = social conditions, F = fiscal situation, C = controversies.

• Economic conditions. These questions cover facts that are used to measure economic perceptions, with a focus on statistics that make frequent appearances in the literature (e.g., unemployment and inflation).

• Social conditions. These questions cover facts about social conditions that are not focused on economic conditions (e.g., the crime rate, the abortion rate, and levels of immigration). This includes topic areas that have a clear economic component, but also involve social programs or social identities (e.g., the health insurance rate, food stamp use, and gender- and race-based income gaps).3

• Fiscal situation. These questions cover facts about the federal budget. All of the questions are about three topics: the federal budget deficit, the national debt, and federal spending on foreign aid.

• Controversies. These questions cover facts about politicized controversies. In most cases, claims have been made in the public sphere about the specific fact in question that were demonstrably false when they were made (e.g., that Barack Obama never released his birth certificate) or were presented as true but later turned out to be false (e.g., that Robert Mueller had evidence that President Trump personally colluded with Russian agents during the 2016 election). In a few cases, they concern controversies whose factual basis is not disputed (e.g., Hillary Clinton’s use of a private email server when she was Secretary of State).

A breakdown of the question categories represented in each survey appears in Table 1. Although re-

search on these various categories of questions often proceeds separately, presenting them side-by-side

aids interpretation by showcasing important similarities and differences between them. The individual

questions are introduced in more detail as the results are presented, and the appendix provides the full

text of each question.

After respondents chose their best guess on each question, a certainty scale appeared just below

the question on the same screen. In seven of the surveys, each question had two response options; for

these surveys, the certainty scales were given a probabilistic interpretation using both numerical labels

(e.g., 50 to 100 percent certain) and subjective statements (e.g., “pure guess” to “absolutely certain”).

In the other two surveys, each question had three response options, and the certainty scales were labelled

only in terms of subjective statements.

3Classifying the questions this way helps keep the economic category more clearly comparable to previous research. In practice, the two categories are quite similar, and will often be combined below.


As the results are intended to be applicable to surveys that do not include explicit measures of

certainty, it was important to provide some assurance that the act of stating one’s certainty level did not

systematically alter respondents’ best guesses. Consequently, in two of the surveys, a group of randomly-

selected respondents was assigned to answer the questions without a certainty scale, as they would on

a standard survey. In neither case did the certainty scale appear to systematically affect responses (see

the appendix).

To further probe the trustworthiness of respondents’ claims to be certain or uncertain, four of the

surveys featured second measurements of the respondent’s beliefs about the same questions. Two, the

MTurk June 2019 and MTurk October 2019 surveys, were panel surveys that recontacted respondents and

asked the same questions at a later date. The second measurement permits the computation of measures

of response stability, the canonical metric for the degree to which survey responses measure attitudes or

beliefs (e.g., Converse 1964, 1970; Ansolabehere et al. 2008). To the extent that respondents’ claims to

be certain or uncertain reflect their underlying perceptions, they should, on average, make a similarly

certain inference when asked to do so at some later point in time. To the extent that respondents’ true

underlying perceptions are less dispositive than they claim, they should display a tendency to “back

off” their initial claims of certainty, much as the largest outliers in a probability distribution have a

well-known tendency to regress to their mean.

The other two surveys with second measurements, both conducted in 2020, use a different validation

strategy with the same basic interpretation. In these surveys, respondents completed a discrete choice

task that revealed their beliefs through a series of costly choices. To make this concrete, consider a

question about whether unemployment increased or decreased over the past year. Respondents were

first asked to choose whether they would rather win a bonus if unemployment increased or win the same

bonus if unemployment decreased. Next, respondents chose between winning $100 if their answer was

correct and a 6 in 10, 7 in 10, 8 in 10, 9 in 10, and 99 in 100 chance of winning the same bonus. The

point at which the respondent “crosses over” from preferring payment for a correct answer to preferring

the lottery is an indicator of how likely they think their answer is to be correct (Hill 2017; Holt and

Smith 2016). As the reward is held constant, tasks of this type have an important theoretical advantage:

the only difference in the expected payoff is the respondent’s self-assessed probability that their answer

is correct, making it robust to risk aversion. At the end of each survey, one respondent was randomly

selected to be eligible for the bonus based on their choices.
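To make the mapping from choices to beliefs concrete, the sketch below shows one way a crossover point could be converted into an implied probability. The lottery probabilities follow the text, but the function name, the bracket-midpoint rule, and the data layout are illustrative assumptions rather than the survey's actual scoring procedure.

```python
# A minimal sketch (not the paper's code) of turning the costly-choice task
# into an implied probability. LOTTERY_PROBS follows the lotteries in the
# text; the bracket-midpoint rule is an illustrative assumption.

LOTTERY_PROBS = [0.6, 0.7, 0.8, 0.9, 0.99]

def implied_belief(chose_lottery):
    """chose_lottery[k] is True if the respondent preferred the lottery with
    win probability LOTTERY_PROBS[k] over being paid for a correct answer.

    A respondent who thinks their answer is correct with probability p should
    accept a lottery exactly when its win probability exceeds p, so the first
    accepted lottery brackets p from above."""
    for k, took_lottery in enumerate(chose_lottery):
        if took_lottery:
            lower = 0.5 if k == 0 else LOTTERY_PROBS[k - 1]
            return (lower + LOTTERY_PROBS[k]) / 2  # midpoint of the bracket
    return 1.0  # never switched to the lottery: treat as near-certainty

# Example: rejects the 6/10 and 7/10 lotteries, accepts from 8/10 onward,
# implying a belief of roughly 0.75 that the original answer is correct.
print(implied_belief([False, False, True, True, True]))
```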

The revealed belief task offers two advantages over traditional measures of response stability. First,

as a distinct task that involves costly decisions, it has a great deal of validity as a measure of beliefs.

Second, because it is elicited in the same survey, the revealed belief task mitigates the concern that

response instability could be caused by learning between the two surveys.

Even in a single-shot survey, respondents could still learn the answer by looking it up. To mitigate


this concern, the four most recent surveys included a snippet of Javascript that monitored whether the

survey remained visible on the respondent’s screen. All cases of possible information search are dropped

from the analysis. The remaining surveys also took steps to discourage and detect information search.

Prior to the beginning of the factual questions, every survey included a pledge not to look up the answers.

Two of the surveys also included “catch” questions that respondents would be unlikely to answer correctly

without looking up the answer. Clifford and Jerit (2016) show that these methods successfuly deter and

detect information search.

Do Certainty Scales Measure Certainty?

To be useful in research on misperceptions, certainty scales must capture meaningful variation in

the degree to which respondents have been exposed to information, as well as the degree to which they

are genuinely certain or uncertain about their answer. This section presents the results of a three-part

validation exercise that confirms that certainty scales accomplish both goals. All of the analysis in this

section pools across the question categories listed above.

Does Certainty Predict Accuracy?

To examine the degree to which certainty scales capture meaningful variation in information expo-

sure, I examine the relationship between certainty and accuracy on general knowledge questions, defined

above as the trivia-style questions about institutions and political figures that have traditionally been

used to measure political awareness. The advantage of these facts is that respondents are unlikely to have

encountered false information about them during the course of their daily lives. Consequently, to the

extent that certainty scales capture meaningful variation in information exposure, it should be the case

that respondents who are absolutely certain almost always report the correct answer, while respondents

who are completely uncertain are no more accurate than chance.

The results of this test show that certainty is a good predictor of information about general knowl-

edge questions. For each of the five surveys that included general knowledge questions, Figure 1 displays

the conditional relationship between certainty and accuracy. The September 2017 Lucid survey used a

five-point subjective scale similar to that used in most extant misinformation research (e.g., Kuklinski

et al. 2000; Pasek et al. 2015). Respondents stating the minimum level of certainty were not much more

accurate than chance, while respondents who claimed to be absolutely certain were correct 85 percent

of the time.

The other four surveys used labelled 50 to 100 scales that are interpretable in probability units.

In all four surveys, respondents who stated probabilities between 0.5 and 0.59 chose the correct answer

between 49 and 54 percent of the time, while respondents who stated a probability of 1 chose the correct

answer between 89 and 96 percent of the time. MTurk respondents seem to report their certainty


Figure 1: Probability of correct answer by certainty level, general knowledge questions.

[Figure: five panels, one per survey (Lucid, Sep. 2017; Lucid, July 2019; MTurk, Oct. 2019; Lucid, Feb. 2020; MTurk, Mar. 2020), plotting Pr(correct) on the y-axis against certainty on the x-axis.]

Note: For each of the five surveys that included general knowledge questions, this figure displays the probability of answering correctly (y-axis) conditional on the respondent’s stated certainty level (x-axis). The September 2017 and July 2019 surveys used discrete certainty scales; respondents could only choose the values displayed on the x-axis. The other surveys used quasi-continuous scales that allowed respondents to select any integer between 50 and 100. These measures are binned using the following groups: 0.5; [0.51, 0.59]; [0.6, 0.69]; [0.7, 0.79]; [0.8, 0.89]; [0.9, 0.99]; 1. The appendix presents all estimates in tabular form.

level slightly more reliably than Lucid respondents: while the October 2019 and March 2020 surveys

saw respondents who claimed to be absolutely certain answer correctly 96 and 95 percent of the time,

respectively, equally certain respondents in the July 2019 and February 2020 surveys were correct 92 and

89 percent of the time.

To summarize the relationship displayed in each figure, I used OLS to estimate

$$\text{Correct}_{ij} = \alpha + \beta c_{ij} + \varepsilon_{ij}, \qquad (1)$$

where i indexes respondents, j indexes questions, Correctij is an indicator variable for choosing the

correct answer, and cij is the respondent’s certainty level. c is scaled to range [1/k, 1], where k is the

number of response options on the question. Estimates of the slope parameter, β, check the visual

impression that the certainty scale measures the extent to which the respondent is informed about the

fact. The closer β is to 1, the better the measure. The slopes for the three Lucid surveys were 0.72, 0.79,

and 0.68, while the slopes for the two MTurk surveys were 0.99 and 0.92 (for regression tables, see the

appendix).
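As a rough illustration of how the slope in Equation (1) can be computed, the sketch below regresses correctness on rescaled certainty with respondent-clustered standard errors. The file name and column names (certainty_pct, n_options, correct, respondent_id) are hypothetical placeholders, not the study's actual variables.

```python
# Illustrative sketch of the slope check in Equation (1); file and column
# names are hypothetical placeholders.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("responses.csv")  # one row per respondent-question answer

# Map certainty (assumed stored as percent of the scale, 0-100) onto
# [1/k, 1], where k is the number of response options on the question.
df["c"] = 1 / df["n_options"] + (df["certainty_pct"] / 100) * (1 - 1 / df["n_options"])

# Correct_ij = alpha + beta * c_ij + e_ij, clustering by respondent.
fit = smf.ols("correct ~ c", data=df).fit(
    cov_type="cluster", cov_kwds={"groups": df["respondent_id"]}
)
print(fit.params["c"])  # beta-hat closer to 1 means certainty tracks accuracy more tightly
```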

For this set of questions, the strong relationship between certainty and accuracy suggests that

certainty scales capture meaningful variation in the respondent’s level of information. However, because

the expectation that accuracy should correspond depends on an assumption that respondents were not

exposed to false information, this strategy cannot be used to evaluate certainty scales on questions where

misinformation or misperceptions are expected. To evaluate the scales on more politically controversial

topics, I turn to two strategies that tap certainty more directly.

Does Certainty Predict Response Stability?

The idea that survey respondents who really believe something should choose the same response

in a second survey has a long history in public opinion research (e.g., Converse 1964, 1970). In this


context, we care not just about whether respondents make the same best guess when asked a second

time, but whether they indicate an equal level of certainty. To the extent that respondents genuinely

assign probability p to their answer, they should do so again when asked a second time, allowing for

some attenuation due to measurement error.

Formally, let $a_{it} \in \{0, 1\}$ be respondent $i$'s answer choice at time $t$, and $c_{it} \in [0.5, 1]$ be the probabilistic certainty level that they assigned to this response at time $t$. In the initial measurement, taken at $t = 1$, the respondent's certainty level is equivalent to the probability they assign to their answer: $c_{i1} = p^{\text{initial}}_{i1}$. In the second measurement, taken at $t = 2$, the probability the respondent assigns to their initial $t = 1$ response, $p^{\text{initial}}_{i2}$, is

$$p^{\text{initial}}_{i2} = \begin{cases} c_{i2} & \text{if } a_{i1} = a_{i2} \\ 1 - c_{i2} & \text{if } a_{i1} \neq a_{i2}. \end{cases} \qquad (2)$$

If beliefs were measured without error, it would always be the case that $c_{i1} = p^{\text{initial}}_{i2}$ (see the appendix).

To summarize the relationship between certainty and stability, I compute a similar slope statistic

by using OLS to estimate

$$p^{\text{initial}}_{i2k} = \alpha + \beta c_{i1k} + \varepsilon_{ik}, \qquad (3)$$

where values of β closer to 1 indicate greater correspondence, on average, between the beliefs stated in

wave 1 and wave 2.
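For concreteness, a small sketch of how Equations (2) and (3) could be computed from a two-wave panel appears below; the column names (answer_w1, answer_w2, certainty_w1, certainty_w2, respondent_id) are assumed for illustration.

```python
# Sketch of Equations (2) and (3); column names are assumptions.
import pandas as pd
import statsmodels.formula.api as smf

panel = pd.read_csv("panel.csv")  # certainty columns assumed to lie in [0.5, 1]

# Eq. (2): probability assigned in wave 2 to the answer chosen in wave 1.
same_answer = panel["answer_w1"] == panel["answer_w2"]
panel["p_initial_w2"] = panel["certainty_w2"].where(same_answer, 1 - panel["certainty_w2"])

# Eq. (3): a slope near 1 means wave-1 certainty is reproduced in wave 2.
fit = smf.ols("p_initial_w2 ~ certainty_w1", data=panel).fit(
    cov_type="cluster", cov_kwds={"groups": panel["respondent_id"]}
)
print(fit.params["certainty_w1"])
```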

Two of the surveys included follow-up surveys with identical measures of the respondent’s beliefs.

Respondents to the October 2019 survey were recontacted after one month to answer the same four

questions: three general knowledge items (party control of the Senate, the country Angela Merkel leads,

and the country Viktor Orban leads) and one fiscal item whose correct answer has been the same for

the past five-plus decades (whether more money is spent on Medicare or Foreign Aid). Respondents to

the June 2019 survey were recontacted after one year to provide a second response to four questions

about political controversies (Barack Obama’s release of his birth certificate, Donald Trump’s ‘grab

them’ comment, Hillary Clinton’s use of a private email server while Secretary of State, and Robert

Mueller’s conclusions about Trump’s personal collusion with Russia). Public figures have made outright

false claims about three of these facts, while Clinton’s use of a private email server is not disputed.

Figure 2 displays the results separately for these three categories of questions. The x-axis displays

the certainty level stated in wave 1, while the y-axis displays the average probability assigned to the initial

(wave 1) response when asked again in wave 2. On the general knowledge questions, certainty was highly

predictive of response stability. Among those who assigned a probability of 0.50 to 0.59 to their best

guess in the wave 1 survey, the average respondent assigned a probability of 0.54 to this same response

when recontacted a month later. Among those who had initially stated a probability of 1, the average


Figure 2: Response stability by certainty level.

[Figure: three panels plotting belief in the initial response at wave 2 (y-axis) against certainty at wave 1 (x-axis). General knowledge (MTurk, Oct.-Nov. 2019): slope = 0.92 (0.04). Fiscal (MTurk, Oct.-Nov. 2019): slope = 0.60 (0.09). Controversies (MTurk, June 2019-2020): slope = 0.53 (0.06).]

Note: Conditional on the certainty level the respondent stated in wave 1 (x-axis), this figure displays the average probability respondents assigned to their wave 1 response when asked again in wave 2 (y-axis). The dashed grey line indicates the relationship that would be realized in the absence of measurement error. These measures are binned using the following groups: 0.5; [0.51, 0.59]; [0.6, 0.69]; [0.7, 0.79]; [0.8, 0.89]; [0.9, 0.99]; 1.

probability in the second wave was 0.97, suggesting minimal exaggeration of the degree to which these

respondents really thought they knew the answer. The slope estimate, β̂ = 0.92 (s.e. = 0.03), indicates

a tight correspondence between certainty and response stability on general knowledge questions.

Certainty was also a reliable predictor of stability in the other two categories, though to a lesser

degree. On the fiscal question, respondents who claim to be more certain in the wave 1 survey consis-

tently assigned greater probability to that same response when asked again in wave 2. Relative to the

general knowledge questions, one distinctly different result emerges: the average respondent who stated

a probability of 1 in the first wave only assigned a probability of 0.8 to this same response when asked

again in wave 2. The slope, β̂ = 0.60 (s.e. = 0.09), similarly captures the solid but weaker correspondence

between certainty and stability on these questions.

A similar pattern emerges on the questions about political controversies. Certainty is a strong

predictor of response stability, but the average respondent who stated probabilities of 1 in wave 1 regressed

to 0.87 in wave 2. Further, when contacted a year later, respondents who initially said that they were

almost completely uncertain (probabilities between 0.5 and 0.59) also display a modest tendency to

put more stock in their answer than they initially claimed. This pattern is driven by the questions

about Clinton’s emails and Trump’s “grab them” comments, which respondents overwhelmingly answer

correctly. The average initially-uncertain respondent assigned probabilities of 0.76 and 0.66 to their

initial response on these questions, compared with 0.60 and 0.57 on the two questions that required

respondents to reject the false claims that Obama refused to release his birth certificate and that Mueller

reported personal collusion between Trump and Russia. The slope statistic for the controversy questions,

β̂ = 0.52 (s.e. = 0.06), is somewhat attenuated by this pattern.

These results provide assurance that respondents’ claims to be certain capture a substantial amount

of genuine variation in certainty. Even so, response stability has two key shortcomings. First, because


response stability takes the same measure twice, one must take for granted that the construct that is

stable is, in fact, certainty. To the degree that a measure of certainty also captures systematic variation

in some other construct, stability may not be evidence that the scales measure certainty itself. Second,

because the surveys are conducted at two different points in time, instability can occur due to genuine

changes in beliefs. This is especially worrisome for questions about social and economic conditions, whose

correct answers can change frequently and on which respondents make frequent use of their day-to-day

experiences in formulating their responses (Graham 2020a).

Does Certainty Predict Revealed Belief?

To address the shortcomings of response stability, I now turn to an alternative strategy for exam-

ining the extent to which certainty scales measure certainty. In the February and March 2020 surveys,

all respondents’ probabilistic beliefs were measured a second time using a series of costly choices. This

strategy adds evidence of construct validity: while talk is cheap, costly choices give respondents an in-

centives to accurately reveal their beliefs. Moreover, because both measures were collected in surveys

that included a technology that detects likely cases of information search (see above), there is minimal

concern that genuine changes in underlying beliefs, due either to learning or information search, distort

the results.

The February and March 2020 surveys had considerable, but not perfect, overlap in terms of

their questions in three categories: general knowledge, economic conditions, and the fiscal situation.

To facilitate more direct comparison between them, analysis of these categories restricts attention to

questions that were included in both surveys: in the general knowledge category, the job or political

office held by two political figures (John Roberts and Jerome Powell), party control of the House of

Representatives, and the difference between reconciliation and the filibuster; in the economic conditions

category, the change in unemployment, inflation, and GDP in the year prior to the survey; and in the

fiscal category, foreign aid spending and change in the federal budget deficit. The March 2020 survey

also featured four questions about politicized controversies: Trump’s “grab them” comment, Robert

Mueller’s findings on Trump’s collusion with Russia, Trump’s claim of unlimited power under Article II,

and Obama’s preemptive admission that the Deferred Action for Parents of Americans (DAPA) program

ignored the law. Figures for each individual question appear later (Figures 6-9).

Figure 3 displays the results by question category and survey. Its interpretation is the same as

the interpretation of Figure 2, with the exception that revealed belief has been substituted for response

stability. The results are also broadly similar. The correspondence between stated certainty and revealed

beliefs is strongest on the general knowledge questions, and is modestly stronger for the MTurk sample

than in the Lucid sample: MTurk respondents who stated a probability of 1 revealed a belief of 0.94 in

this same response (top, middle-left), compared with 0.85 for Lucid responses (top, far-left). The slope


Figure 3: Revealed belief by certainty level.

[Figure: seven panels plotting revealed belief in the initial response (y-axis) against stated certainty (x-axis): General knowledge (Lucid, Feb. 2020), General knowledge (MTurk, Mar. 2020), Economic conditions (Lucid, Feb. 2020), Economic conditions (MTurk, Mar. 2020), Fiscal (Lucid, Feb. 2020), Fiscal (MTurk, Mar. 2020), and Controversies (MTurk, Mar. 2020). Fitted slopes across the panels range from 0.42 to 0.75.]

Note: Conditional on the certainty level the respondent stated (x-axis), this figure displays the average probability respondents assigned to their response using the measure of revealed beliefs (y-axis). The dashed grey line indicates the relationship that would be realized in the absence of measurement error. These measures are binned using the following groups: 0.5; [0.51, 0.59]; [0.6, 0.69]; [0.7, 0.79]; [0.8, 0.89]; [0.9, 0.99]; 1.

statistics are printed above each panel. The existence of a reasonably strong correspondence between

respondents’ stated levels of certainty and the beliefs they revealed through costly choices suggests that

certainty scales capture meaningful variation in certainty. Relative to the certainty-accuracy relationship

depicted in the two rightmost panels of Figure 1, the certainty-revealed belief relationship has a steadier upward

slope. The combination of these findings indicates that middling levels of certainty capture meaningful

variation in respondents’ confidence in misleading heuristics.

On the fiscal questions, stated certainty also predicts a substantial amount of variation in revealed

belief (bottom left). The results are again similar to general knowledge questions, with the notable

exception that respondents who claimed to be absolutely certain display a greater tendency to “back

off.” The average MTurk respondent who initially stated a probability of 1 revealed a belief of 0.85

in this same response, about 0.09 lower than the corresponding figure for general knowledge questions.

For the average Lucid respondent, the corresponding figure was 0.76, also about 0.09 lower than the

figure for general knowledge questions. Similar patterns are observed for the other two categories, with

approximately the same difference between the Lucid and MTurk samples.


How Certain Are Respondents Who Answer Incorrectly?

Having established that certainty scales measure certainty, the analysis now interrogates the stan-

dard analytic posture in survey-based research on misperceptions or misinformation, which treats correct

and incorrect answers as equally reflective of the respondents’ perceptions. This portion of the analysis

pools all 100,668 unique responses provided by the 12,414 respondents who participated in the nine

surveys,4 permitting statistically precise comparisons between respondents who answer correctly and

incorrectly within each of the five categories of questions considered in the analysis.5 To accommodate

differences between the certainty scales, all measures of certainty are rescaled to range between 0 and 100,

where 0 is the lowest point on the scale and 100 is the highest point. This measure can be interpreted

as a percentage of the width of the certainty scale.
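As a worked example of the rescaling, consistent with the conversions reported below, a certainty of 0.85 on a 0.5 to 1 probability scale corresponds to a scale score of 70:

$$\text{scale score} = 100 \times \frac{c - c_{\min}}{c_{\max} - c_{\min}}, \qquad 100 \times \frac{0.85 - 0.5}{1 - 0.5} = 70.$$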

For each of the five question categories, Figure 4 presents the average level of certainty for correct

and incorrect answers, as well as the difference between the two types of responses. General knowledge

questions serve as a useful benchmark for the other four categories. On average, respondents who

answered correctly had a confidence scale score of 70, equivalent to 0.85 on a 0.5 to 1 probability scale.

This means that on average, these respondents were about two-thirds of the way between a pure guess

and absolute certainty. By contrast, respondents who answered incorrectly had an average score of 40,

equivalent to 0.7 on a 0.5 to 1 scale. Incorrect answers were closer to pure guesses than to “incorrect

knowledge.” As a proportion of the scale, incorrect answers are stated with 30 percent greater uncertainty

than correct answers. Relative to one another, respondents who answered correctly were about 1.75

times as certain. In a rough sense, this validates researchers’ traditional assumption that correct answers

represent a mix of knowledge and ignorance, while incorrect answers tend to reflect uncertain guesses.

At the same time, it suggests that mental coin flips are a poor analogy for incorrect responses.6

As noted above, researchers who suspect misperceptions or misinformation treat incorrect answers

as being fundamentally different than incorrect answers to general knowledge questions. To what extent

is this justified? On questions about social and economic statistics, respondents who answer incorrectly

had an average score of 45, about 5 percent of the scale more confident than incorrect answers to general

knowledge questions. Respondents who answered these questions correctly had an average score of

about 56 — 11 percent more confident than respondents who answer incorrectly, but about 14 percent

of the scale less confident than correct answers to general knowledge questions. Relative to general

knowledge questions, the most distinguishing feature of questions about economic and social statistics

is that respondents who answer correctly are relying more heavily on educated guesses, and less heavily

4These totals exclude respondents in the Sep. 2017 and July 2019 surveys who either did not have a certainty scale or were allowed to say “don’t know.”

5The appendix shows that the same patterns obtain when the results are split by survey.
6This is consistent with Schuman and Presser (1981), who find that even respondents who “provide an opinion on a proposed law they know nothing about” are not “merely flipping mental coins . . . Respondents make an educated (though often wrong) guess as to what the obscure acts represent, then answer reasonably in their own terms about the constructed object” (158-59).


Figure 4: Average certainty by correct/incorrect answer and partisan valence.

All respondents:

Category              Incorrect   Correct   Difference (s.e.)
Controversies         47.5        70.2      −22.7 (0.7)
Fiscal situation      49.1        63.9      −14.7 (0.7)
Social conditions     45.4        55.7      −10.3 (0.5)
Economic conditions   45.1        56.6      −11.6 (0.4)
General knowledge     39.2        69.9      −30.8 (0.5)

By partisan valence of the correct answer:

Category              Group                           Incorrect   Correct   Difference (s.e.)
Controversies         Correct answer is congenial     46.8        75.1      −28.3 (1.0)
                      Incorrect answer is congenial   50.3        66.3      −16.0 (0.9)
Fiscal situation      Correct answer is congenial     45.8        68.7      −22.8 (1.3)
                      Incorrect answer is congenial   50.2        67.2      −17.0 (1.3)
Social conditions     Correct answer is congenial     45.9        58.3      −12.4 (0.8)
                      Incorrect answer is congenial   47.0        53.6      −6.7 (0.8)
Economic conditions   Correct answer is congenial     45.7        60.3      −14.6 (0.7)
                      Incorrect answer is congenial   46.8        53.8      −7.0 (0.6)

Note: Pooling across all surveys, this table displays average certainty among respondents who answered correctly and incorrectly. To standardize differences between the surveys, all of the certainty scales are rescaled to range from 0 to 100, where 0 is the lowest point and 100 is the highest point (see text). Triangles are the average for all respondents; squares, for correct answers; circles, for incorrect answers. Hollow shapes are incorrect answers; solid shapes are correct answers. Differences calculated using OLS with standard errors clustered by respondent.

on knowledge.

Responses to questions about the government’s fiscal situation follow a similar pattern, but with

modestly higher confidence in both categories. The higher level of confidence is driven by the debt and

deficit questions, on which respondents’ open-ended descriptions of their answers suggest the availability

of fairly stable heuristics. The foreign aid questions perform more similarly to the questions about social

and economic conditions.

When it comes to questions about politicized controversies, the average respondent who answered

correctly had a certainty scale score of 70 — statistically indistinguishable from general knowledge ques-

tions but substantially higher than economic, social, and fiscal questions. Relative to these question

types, correct answers to question about politicized controversies are closer to knowledge. By contrast,

incorrect answers to questions about politicized controversies are offered with only slightly more confi-

dence than incorrect answers to other types of questions. The average scale score of 47.5 indicates 2 to

3 percent greater certainty than respondents who incorrectly answer social and economic questions, and

7 percent greater certainty than respondents who incorrectly answer general knowledge questions. As a

rule, there is nothing all that special about incorrect answers to questions about politicized controversies.

They are delivered with slightly higher confidence than incorrect answers to other question types, but

come nowhere close to representing “incorrect knowledge.”


Accounting for partisanship.

Researchers generally expect the strongest misperceptions among those who have a partisan moti-

vation to believe false or misleading claims. It could be that there is something systematically different

about incorrect answers that are congenial to the respondent’s partisanship. To examine this possibility,

the bottom portion of Figure 4 splits respondents according to the partisan valence of the correct answer.

For example, during this period unemployment was steadily declining under a Republican president. This is congenial

to Republicans and uncongenial to Democrats.

To the extent that partisan congeniality predicts differences in certainty about one’s answer, those

differences are more driven by convenient knowledge than by belief in convenient falsehoods. On questions

about politicized controversies, respondents who have a partisan incentive to answer incorrectly are about

3.5 scale points more confident in their incorrect answers than respondents who have an incentive to

answer incorrectly (50.3 versus 46.8). Among respondents who answered correctly, the corresponding

difference is 8.8 points (75.1 versus 66.3). For the social and economic questions, there is only about a

1 point difference among respondents who answer incorrectly, but a 5 point difference among those who

answer correctly.

Incorporating “don’t know” response options.

The foregoing analysis examined questions without a “don’t know” (DK) response option. Though

this is the standard practice in research on political misinformation (Luskin et al. 2018), it is worth

evaluating whether the usual approach to handling respondent uncertainty would change this picture.

To examine this question, the July 2019 survey included a split-ballot experiment designed to examine

the properties of DK response options.

The results of this experiment demonstrate that providing a DK option does little to narrow the

certainty gap between respondents who answer correctly and those who answer incorrectly (Figure 5).

Without a DK option, the difference in certainty between respondents who answer correctly and incor-

rectly is about 25 percent of the confidence scale. This shrinks to 22 percent of the scale when a DK

option is provided, a reduction of about one eighth of the difference without a DK option.

The failure of DK response options to close much of the certainty gap between correct and in-

correct answers owes to DK options’ tendency to discourage only those respondents who are very

uncertain from answering the question. This has long been apparent to survey researchers, but in

this context, it aids interpretation to have a more precise sense of how confident DK respondents

would have been if they had provided an answer. Let E[C | ·] denote the conditional average cer-

tainty level among some group of respondents. Although E[C | said DK] is unobservable,7 it can be

7In principle, it would also be possible to ask DK respondents for their best guess, then how certain they were. This was avoided for two reasons. First, it would require DK respondents to expend more effort than non-DK respondents, which could discourage DK responding. Second, the act of saying DK could affect responses to questions about confidence, just as Bishop et al. (1984) find that it affects self-reported political interest.


Figure 5: Average certainty by correct/incorrect answer and availability of DK option.

[Figure: average certainty among respondents who answered correctly and incorrectly, by question category (all questions, general knowledge, economic/social, controversies) and by whether a DK option was available, with the correct-incorrect difference and its standard error shown for each category under each condition.]

Note: Using data from the July 2019 survey, this figure displays average certainty among respondents who answered correctly and incorrectly. For comparability with Figure 4, the certainty scale was rescaled to range from 0 to 100, where 0 is the lowest point and 100 is the highest point. The original probabilistic scale ranged from 50 to 100. Hollow shapes are incorrect answers; solid shapes are correct answers. Squares represent the average among respondents who were randomly assigned not to have a DK option; triangles, the average among respondents who had a DK option and chose to answer the question. Differences calculated using OLS regression with standard errors clustered by respondent.

estimated under the assumption that people who do provide an answer are equally certain regard-

less of whether a DK option is present (that is, E[C | would not say DK if allowed, DK allowed] =

E[C | would not say DK if allowed, DK not allowed]). Then the average DK respondent’s confidence

level can be expressed as

$$\frac{E[C \mid \text{DK not allowed}] - E[C \mid \text{DK allowed}] \times \left(1 - \Pr[\text{said DK} \mid \text{DK allowed}]\right)}{\Pr[\text{said DK} \mid \text{DK allowed}]}, \qquad (4)$$

all components of which are observable (see the appendix for a proof). To estimate statistical uncertainty

about this estimator, I used the block bootstrap, which accounts for the fact that each respondent

contributed more than one observation (Hainmueller et al. 2015).
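A plug-in version of this estimator, with a by-respondent block bootstrap for the standard error, might look like the sketch below; the data layout and column names (dk_allowed, said_dk, certainty, respondent_id) are assumptions, not the paper's replication code.

```python
# Sketch of the Eq. (4) plug-in estimator with a by-respondent block
# bootstrap; column names and data layout are assumptions.
import numpy as np
import pandas as pd

def dk_certainty(df):
    """Estimated average certainty DK respondents would have reported had they answered."""
    no_dk = df[df["dk_allowed"] == 0]           # arm without a DK option
    dk_arm = df[df["dk_allowed"] == 1]          # arm where DK was offered
    p_dk = dk_arm["said_dk"].mean()             # Pr[said DK | DK allowed]
    c_no_dk = no_dk["certainty"].mean()         # E[C | DK not allowed]
    c_answered = dk_arm.loc[dk_arm["said_dk"] == 0, "certainty"].mean()  # E[C | DK allowed, answered]
    return (c_no_dk - c_answered * (1 - p_dk)) / p_dk

def block_bootstrap_se(df, reps=1000, seed=0):
    """Resample whole respondents so that repeated answers stay together."""
    rng = np.random.default_rng(seed)
    groups = dict(tuple(df.groupby("respondent_id")))
    ids = np.array(list(groups))
    draws = []
    for _ in range(reps):
        sample = rng.choice(ids, size=len(ids), replace=True)
        boot = pd.concat([groups[i] for i in sample], ignore_index=True)
        draws.append(dk_certainty(boot))
    return float(np.std(draws, ddof=1))
```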

This formula suggests that the average respondent who said DK would have been much closer

to a pure guess than knowledge, but would not have been completely uncertain. Overall, the average

respondent who said DK would have had a certainty scale score of 20.0 (s.e. = 4.1), or about 0.60 on

the original 0.5 to 1 probability scale. This includes 16.9 on general knowledge questions (s.e. = 4.4),

20.6 on controversy questions (s.e. = 5.8), and 27.6 on social and economic questions (s.e. = 4.9). These

results are consistent with research that shows that DK responses hide only a small amount of knowledge

(Luskin and Bullock 2011; Sturgis et al. 2008).

With these estimates in hand, it is possible to be more precise about why DK options only mod-

estly shrink the gap between correct and incorrect responses. Allowing this fairly uncertain group of

respondents to opt out of answering the question boosts certainty among those who answer correctly by

about 9 percent of the scale, from 65.5 to 74.3. Because respondents who answer incorrectly tend to be


less certain to begin with, allowing the most uncertain respondents to opt out boosts certainty among

this group by a larger amount: 13 percent of the scale, from 40.9 to 53.7. Permitting DK responses moves

both types of answers a little closer to absolute certainty, but plenty of uncertainty remains among those

who answer the question.

Certain and Wrong?

So far, the analysis has shown that certainty scales capture meaningful variation in certainty and

that incorrect answers do not reflect “incorrect knowledge” to the same degree that correct answers

reflect true knowledge, regardless of whether a DK option is provided. Though these particular results

are novel, scholars have long recognized that not all incorrect answers are stated with certainty, and have

consequently recommended the use of certainty scales to distinguish the misinformed from the ignorant

(e.g., Kuklinski et al. 2000; Pasek et al. 2015; Graham 2020a). Does this practice isolate respondents who

have been misinformed about the facts themselves, or merely identify a group whose mistaken inferences

were made with greater confidence?

To examine this question, I compare correct and incorrect answers using two of the same outcome

metrics — response stability and revealed belief — that were used above in the undifferentiated assess-

ment of whether certainty scales measure uncertainty. To the extent that one’s claim to be certain of

the incorrect answer reveals “incorrect knowledge,” respondents should stick with their initial response

upon a second measurement, at least to the same degree as respondents who answered correctly. But if

respondents who claim to be certain of incorrect answers display a greater tendency to “back off” their

initial response, claims to be certain and incorrect disproportionately reflect mistaken inferences.

Whereas the previous two sections pooled across questions to draw out general rules of thumb, this

section zooms in on specific questions and analyzes the heterogeneity between them. It draws primarily

on the revealed belief metric collected in the February and March 2020 surveys, which as noted above,

provides the best validity as a distinct measure of beliefs and offers the best protection against learning

and information search.
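One plausible way to produce per-question slope estimates like those reported in the figure panels below is an OLS regression of revealed belief on stated certainty, run separately for correct and incorrect initial answers, with standard errors clustered by respondent. The Python sketch below uses hypothetical column names and is an illustration of this approach, not the paper's replication code.

```python
import statsmodels.formula.api as smf

def certainty_slope(df, correct):
    """Slope of revealed belief on initial certainty among respondents whose
    initial answer was correct (True) or incorrect (False), with standard
    errors clustered by respondent."""
    sub = df[df["initial_correct"] == correct]
    fit = smf.ols("revealed_belief ~ certainty", data=sub).fit(
        cov_type="cluster", cov_kwds={"groups": sub["respondent_id"]}
    )
    return fit.params["certainty"], fit.bse["certainty"]

# e.g., certainty_slope(roberts_df, correct=True) -> (slope, clustered s.e.)
```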

Beginning with the usual general knowledge benchmark, Figure 6 examines four questions. On

each one, a striking difference between correct and incorrect answers emerges: while certainty is a strong

predictor of revealed beliefs for respondents who answer correctly, it is a weak predictor of revealed

beliefs for those who answer incorrectly. Visually, this is evident from the green circles’ tendency to sit

above the red squares, especially at high levels of certainty. The pattern is most striking on the question

about John Roberts’ job, which asked respondents to say whether he is the Chief Justice of the Supreme

Court or the Secretary of Defense. When asked to make this inference a second time, respondents who

initially claim to be certain that he is the Chief Justice reveal a highly certain belief that this is true. By

contrast, respondents who say that he is Secretary of Defense tend to reveal that their beliefs are really


Figure 6: Revealed belief in initial answers by certainty level, general knowledge questions.

[Figure: four panels (John Roberts is Chief Justice; Jerome Powell is Fed Chair; Democrats hold House majority; Reconciliation vs. filibuster) plotting revealed belief in the initial response against certainty, separately for correct and incorrect initial answers. Slope estimates (s.e.), correct vs. incorrect, in panel order: 0.70 (0.04) vs. −0.04 (0.10); 0.57 (0.04) vs. 0.20 (0.08); 0.69 (0.06) vs. 0.23 (0.09); 0.50 (0.08) vs. 0.05 (0.13).]

split about 50/50 between the two response options, regardless of how certain they claimed to be on the

initial measure.

On the other three general knowledge questions, the same asymmetry emerges, but respondents

display a modestly stronger tendency to stick with their incorrect answers. Among those who incorrectly

say that Jerome Powell is the Treasury Secretary (rather than the Chairman of the Federal Reserve),

there is a slight tendency among the more-certain to stick with their answer, suggesting that at least

some people drew on the same misleading heuristics when making the same inference a second time.

The same goes for respondents who incorrectly say that Republicans have the majority in the House of

Representatives (rather than Democrats) and respondents who incorrectly say that the filibuster allows

changes to budget bills via a simple majority vote (rather than reconciliation). Some respondents may

be consistently misled by their hazy knowledge that Republicans currently control much of the federal

government, or that the filibuster has something to do with majorities and the Senate, but those who

claim to be certain reveal themselves not to be misinformed about the facts.

Although no reader likely expects many people to be truly certain that the wrong party controls

the House of Representatives, respondents’ tendency to stick with this claim becomes a useful benchmark

for the other results. The probability that allegedly-certain respondents assign to this possibility on a

second measurement, 0.75, is close to the high water mark for respondents’ true level of attachment to

their claims to be certain of falsehoods.

Turning next to questions about economic statistics (Figure 7), respondents display a similar

tendency to back off their incorrect answers, but a weaker tendency to stick with their correct answers.

Claims to be certain seemed most meaningful on the question that asked respondents whether GDP

growth had been above or below 4 percent over the past year. This question was also the only one of the

five with clear misinformation in the media: although neither annual nor quarterly GDP growth reached 4

percent at any point during Trump’s presidency, Republican boosters of the 2017 Tax Cuts and Jobs Act

predicted that growth would exceed this threshold, and widely-covered preliminary estimates of 2018’s


Figure 7: Revealed belief in initial answers by certainty level, economic/social questions.

[Figure: three panels (Unemployment declined; Inflation below average; GDP growth below 4%) plotting revealed belief in the initial response against certainty, separately for correct and incorrect initial answers. Slope estimates (s.e.), correct vs. incorrect, in panel order: 0.51 (0.05) vs. 0.11 (0.10); 0.50 (0.05) vs. 0.03 (0.06); 0.56 (0.05) vs. 0.31 (0.06).]

second quarter growth pegged the figure at 4.1 percent (later revised to 3.5). At levels of stated certainty

between 0.5 and 0.99, respondents who answered correctly and incorrectly are about equally stable. Yet

among the most certain, a clear divergence emerges: respondents who answered correctly revealed a

belief of 0.87 that GDP growth was below 4 percent, while those who answered incorrectly revealed a

belief of 0.72 that it was above 4 percent. On this question, responses seem to be mostly inferences based

on general perceptions about the economy, with a smattering of correct knowledge.

The other two economic questions cover trends in unemployment and inflation, the same statistics

examined in Bartels’ (2002) influential study. On these questions, incorrect answers seem to consistently

reflect mistaken inferences. Among those who answered incorrectly and claimed to be absolutely certain, the

average revealed belief never exceeds 0.65 — a full 0.10, or 20 percent of the scale, less certain than

respondents who claimed to be certain that Republicans control the House of Representatives. Correct

answers show a more variable pattern. When it comes to the unemployment rate and the rate of inflation,

respondents seem to place a fair bit of trust in their heuristics. Respondents who claim to be absolutely

certain back off to a greater extent than those who say they know that Roberts is the Chief Justice or

that Democrats control the House, but they are at least fairly sure, based on what they do know about

economic trends, that unemployment declined and that inflation has been below the postwar average

in recent years. The typical claim to be certain of the incorrect answer to a question about economic

statistics is clearly reflective of a miseducated guess, not incorrect knowledge.

Turning next to fiscal questions (Figure 8), respondents who answer incorrectly display a stronger

tendency to stick with their initial level of certainty, suggesting that misleading heuristics play a larger

role on these questions than they do on the economic questions. Few respondents incorrectly answer that

the budget deficit decreased between the 2018 and 2019 fiscal years, but those who do are only modestly

less attached to this answer than respondents who answer correctly. This result notwithstanding, if one

were to hunt for an exception to the rule that correct answers are more stable than incorrect answers

conditional on certainty, repeating the deficit question in a good year would be a good place to start.


Figure 8: Belief in initial answers by certainty level, fiscal questions.

[Figure, panel (a), revealed belief (Feb. and Mar. 2020): two panels (Budget deficit increased; Foreign aid $ < Medicare) plotting revealed belief in the initial response against certainty, separately for correct and incorrect initial answers. Slope estimates (s.e.), correct vs. incorrect, in panel order: 0.49 (0.04) vs. 0.31 (0.10); 0.59 (0.05) vs. 0.28 (0.05). Panel (b), response stability (Oct. 2019-Nov. 2019), Foreign aid $ < Medicare: belief in the initial response at wave 2 against wave-1 certainty. Slope estimates (s.e.): correct 0.97 (0.10), incorrect 0.24 (0.14).]

Graham (2020a) reports a negative relationship between confidence and accuracy among both Democrats

and Republicans for the trend in the budget deficit under President Obama.

The question about foreign aid spending relative to Medicare is a two-choice version of the ANES

question on the same subject. This question was also included in the October-November 2019 panel

survey. At levels of certainty between 0.5 and 0.99, respondents who incorrectly said that spending is

higher on foreign aid stick with their answer to about the same extent as respondents who correctly said

that Medicare spending is higher. These respondents appear to largely be relying on heuristics, some of

which are misleading. Among those who said they were absolutely certain, a substantial gap emerges.

Whereas some respondents appear genuinely certain in their knowledge that foreign aid spending is lower

than Medicare spending, respondents who claim to be certain that foreign aid spending is higher do not

stick to this claim. Many respondents are clearly mistaken in their perceptions of foreign aid, but claims

to be absolutely certain that more is spent on foreign aid are largely illusory.

Among the categories of questions considered, misinformation research provides the strongest rea-

son to expect respondents to stick to their incorrect responses on questions about politicized controversies.

Across the surveys, six unique questions about politicized controversies featured a follow-up measure.

Here, five of these are examined. The question about Hillary Clinton’s private email server is excluded

because too few people got it wrong.

In the controversy concerning false rumors that Barack Obama was born outside the United States,

an oft-repeated false claim asserts that Obama has not released his birth certificate. On this question,

respondents who answer correctly provide trustworthy reports of certainty: those who claim to be totally

uncertain assigned about the same probability to their answer when asked a year later, while those who

reported being certain that Obama had released his birth certificate assigned a probability of about 0.9

to their initial response (Figure 9b). By contrast, certainty was not all that predictive of subsequently-

measured beliefs among those who answered incorrectly. Even those who claimed to be absolutely certain


Figure 9: Belief in initial answers by certainty level, controversy questions.

[Figure, panel (a), revealed belief (Mar. 2020): four panels (Mueller didn't say Trump colluded with Russia; Trump recorded saying 'grab them'; Trump claimed unlimited Article II power; Obama said DAPA would ignore the law) plotting revealed belief in the initial response against certainty, separately for correct and incorrect initial answers. Slope estimates (s.e.), correct vs. incorrect, in panel order: 0.67 (0.06) vs. 0.22 (0.09); 0.76 (0.09) vs. 0.34 (0.09); 0.43 (0.06) vs. 0.42 (0.08); 0.27 (0.08) vs. 0.22 (0.06). Panel (b), response stability (June 2019-June 2020), Obama released birth certificate: belief in the initial response at wave 2 against wave-1 certainty. Slope estimates (s.e.): correct 0.54 (0.13), incorrect 0.03 (0.22).]

that Obama had released his birth certificate backed off substantially, assigning an average probability of

0.58 to this possibility.

Among the other four controversies, the two most prominent relate to President Trump. Prior to

the release of special counsel Robert Mueller’s report on Russian interference in the 2016 presidential

election, left-leaning pundits confidently claimed that Mueller would surely find evidence of personal

collusion between Trump and Russia. Mueller’s report contained no such evidence. Among respondents

who correctly chose this answer, a high degree of correspondence emerged between stated certainty and

revealed beliefs; those who initially claimed to be absolutely certain revealed that they assign a probability

of 0.9 to this answer (Figure 9a, far left). Respondents who incorrectly answered that Mueller’s report

had contained such evidence displayed a weaker tendency to stick to their answer. Those who claimed

to be absolutely certain that Mueller had reported such evidence revealed a probability of 0.73. In the

June 2019-June 2020 panel survey, the corresponding figure was 0.52 (see the appendix).

The other prominent Trump controversy concerns his tape-recorded boast that he kisses women and grabs them between the legs without their consent. After first acknowledging the recording's authenticity, Trump later claimed that it was faked. This question yielded a pattern similar to the other

two (Figure 9a, middle left). At low and middling levels of certainty, respondents who answered correctly

and incorrectly display some tendency to stick with their answer, indicating that respondents who claim

to be more certain are drawing on some sort of consistent basis for their inference. Among the absolutely

certain, a substantial gap emerges: those who answered correctly reveal a belief of 0.92 in their initial

answer, while those who answered incorrectly reveal a belief of about 0.77. As a rule, respondents who

answer incorrectly and claim to be certain have either not heard or not fully bought into Trump’s claim

that the recording was faked.

The remaining two controversies are less prominent matters relating to executive aggrandizement


under Trump and Obama. In a September 2013 interview, Obama argued that extending the protections

under Deferred Action for Childhood Arrivals (DACA) to protect unauthorized immigrants with U.S.

citizen children would amount to “ignoring the law.” Before taking precisely this action in November

2014, Obama falsely claimed not to have changed his position. Although some respondents claimed

modest or high levels of certainty about their answers to this question, the revealed belief measure

made clear that few respondents knew the facts of the controversy. The relationship between certainty

and revealed beliefs crests at a revealed belief of about 0.75 among respondents who had claimed to

be absolutely certain, regardless of whether their answer was correct or incorrect. As with the other

question types, when the gap between correct and incorrect answers disappears, it is not because people

strongly believe their incorrect answers. Instead, it is because almost nobody actually knows the correct

answer.

In contrast to Obama’s uneasy embrace of executive aggrandizement, Trump has made no secret

of his view that Article II of the Constitution grants him the power to do “whatever I want.” When

asked whether Trump has made such a claim, respondents’ claims to be certain do seem to indicate some

underlying heuristics to which they give genuine credence. Those who claim to be absolutely certain

of the incorrect answer later reveal that they assign a probability of 0.78 to this possibility — a higher

point estimate than any other question considered in this paper, but still indistinguishable, statistically

and substantively, from claims to be certain that the wrong party controls the House of Representatives.

What explains respondents’ greater tendency to back off their incorrect answers? Existing research

on political misperceptions focuses squarely on one form of measurement error: respondents’ tendency

to “cheerlead” by selecting responses that are favorable to their party (Bullock et al. 2015; Peterson and

Iyengar 2020; Prior et al. 2015; Schaffner and Luks 2018). Though cheerleading could sometimes influence

the estimates presented here, it cannot explain why the same pattern would appear on questions without

any clear partisan valence. Some other explanation is needed to account for the systematic difference

between correct and incorrect answers. The next section provides one.

Interpretation

This section uses a simple parametric model to clarify how it could be that certainty scales capture

meaningful variation in certainty, but fall short of identifying genuine certainty about incorrect answers.

Two elements enable the simulations to capture the results: (1) the prevalence of genuine certainty in

respondents’ latent, “true” beliefs and (2) unsystematic error in the measure of beliefs.

Suppose that each survey question has two response options and that each respondent has some

latent, long-run response tendency p*_i, representing the average belief the respondent would form if asked the same survey question repeatedly. In any particular instance, respondent i forms the belief p_it ≡ p*_i + ε_it, where p_it is the probability assigned to the correct answer and ε_it captures measurement


Figure 10: Simulation results.

[Figure, panel (a), distribution of latent probabilistic beliefs: density of p*_i from 0 to 1 under three scenarios (knowledge & false facts; knowledge & mistaken inferences; inferences only), with regions marked where the best guess is incorrect (below 0.5) or correct (above 0.5). Panel (b), response stability by correct/incorrect answer, conditional on certainty: belief in the initial response on the follow-up measure plotted against certainty on the initial measure, for each scenario.]

error. In the spirit of Zaller (1992), the simulations assume that i’s considerations are centered at the log-

odds of their true probabilistic belief, log(p*_i / (1 − p*_i)). Each i draws from their considerations with error, then uses the inverse logit function to format their considerations into a spontaneously-formed probabilistic belief between 0 and 1.⁸ The simulations consider three population distributions of p*_i, each of which corresponds to the presence or absence of certain values of p*_i in the population.
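A minimal sketch of this belief-formation step in Python. The normal error on the log-odds scale, and its standard deviation, are illustrative assumptions rather than quantities stated in the text.

```python
import numpy as np

def spontaneous_belief(p_star, noise_sd, rng):
    """One-shot belief formation: perturb the log-odds of the latent tendency
    p_star with unsystematic error, then map back to a probability with the
    inverse logit."""
    log_odds = np.log(p_star / (1.0 - p_star))
    draw = log_odds + rng.normal(0.0, noise_sd, size=np.shape(p_star))
    return 1.0 / (1.0 + np.exp(-draw))
```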

The first scenario, knowledge and false facts, assumes that the latent distribution of beliefs covers

the full spectrum from true knowledge (p*_i = 1) to belief in outright false information (p*_i = 0). The

left panel of Figure 10a displays the latent distribution of beliefs, and the bottom-left panel displays

the results of the response stability and revealed belief tests presented above. When absolute certainty

about incorrect answers really exists, both types of respondents stick with their initial response to an

equal degree (Figure 10b, left panel). Unsystematic measurement error in the scale nonetheless produces

some regression to the mean. In the results above, no question closely resembles the knowledge and false

facts scenario.

The second scenario, knowledge and mistaken inferences, is most typical of the results presented

above. In this scenario, the population distribution of p*_i spans from 0.2 to 1.0 (Figure 10a, middle

panel). This means that while some are certain of the correct answer, nobody has an outright false

⁸ Formatting one's response is an explicit step in Tourangeau and colleagues' (2000) account of the survey response.


belief about the incorrect answer. Despite this, some respondents’ noisy draws of their considerations

lead to a fleeting sense of certainty at any point in time. In the second measurement, respondents who

initially claimed to be certain about their incorrect answer assign a lower probability to their response

than did respondents who claimed to be certain about their correct answer (Figure 10b, middle panel).

Nevertheless, because some respondents’ considerations are genuinely misleading, there is a positive

relationship between certainty and belief stability among those who answer incorrectly.

The third and final scenario, inferences only, is similar to the pattern that appeared on the more

obscure economic statistics and controversies. In this scenario, the population distribution of p∗i spans

from 0.2 to 0.8 (Figure 10a, right panel). This means that nobody’s underlying perceptions are dispositive

enough for them to be completely certain of either the correct or incorrect answer. In this scenario,

respondents again display an equal tendency to back off their initial response, regardless of whether they

answered correctly or incorrectly (Figure 10b, right panel).
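The three scenarios can be mimicked with a short simulation. In the sketch below, the uniform latent distributions, the noise scale, and the cutoff for "near-absolute" certainty are illustrative assumptions; the point is only to reproduce the qualitative logic of Figure 10b, not the paper's exact simulation.

```python
import numpy as np

def belief(p_star, sd, rng):
    # same belief-formation step as above: noisy log-odds, then inverse logit
    z = np.log(p_star / (1.0 - p_star)) + rng.normal(0.0, sd, size=p_star.shape)
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(1)
N, NOISE_SD = 100_000, 1.0                    # illustrative values

scenarios = {                                 # latent p* ranges from the text; uniform shape is assumed
    "knowledge & false facts": (0.0, 1.0),
    "knowledge & mistaken inferences": (0.2, 1.0),
    "inferences only": (0.2, 0.8),
}

for name, (lo, hi) in scenarios.items():
    # clip to keep the log-odds finite at the endpoints
    p_star = np.clip(rng.uniform(lo, hi, N), 1e-3, 1 - 1e-3)
    b1 = belief(p_star, NOISE_SD, rng)        # initial measurement
    b2 = belief(p_star, NOISE_SD, rng)        # follow-up measurement

    correct1 = b1 > 0.5                       # was the initial answer correct?
    certainty1 = np.abs(b1 - 0.5) * 2         # stated certainty, rescaled to 0-1
    belief_in_initial = np.where(correct1, b2, 1.0 - b2)

    high = certainty1 > 0.9                   # near-absolute initial certainty
    print(f"{name}: correct {belief_in_initial[high & correct1].mean():.2f}, "
          f"incorrect {belief_in_initial[high & ~correct1].mean():.2f}")
```

Under these assumptions, the first scenario produces comparable follow-up beliefs for certain-and-correct and certain-and-incorrect respondents, while the second and third reproduce the asymmetric back-off described above.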

In the results above, most of the questions fall somewhere along the spectrum between the latter

two scenarios. On many questions, respondents who answer correctly and claim to be certain stick with

their answer to the same degree as the simulated respondents in the scenarios with true knowledge. By

contrast, the average respondent who answers incorrectly and claims to be certain never sticks with their

answer to the same degree as equally-certain respondents who answer correctly. Such respondents put

more stock in their miseducated guesses than respondents who admit to complete uncertainty, but do

not possess incorrect knowledge.

Viewed in this light, the following generalizations can be drawn out with respect to the individual

question categories:

• General knowledge questions are generally consistent with the “knowledge only” scenario. However, respondents who answer incorrectly do sometimes have a consistent basis to mistakenly infer that the wrong answer is correct.

• Economic, social, and fiscal questions are generally consistent with the “inferences only” scenario, with a smattering of correct knowledge.

• Correct answers to controversy questions often represent correct knowledge, but incorrect answers generally represent mistaken inferences.

Implications

As concern about political misinformation surged over the past few decades, a new purpose was

demanded of old measurement technology. Unlike classic political knowledge questions, on which mis-

information is implausible and unlikely to have significant behavioral consequences, the degree to which

the public holds false beliefs about politicized controversies and economic performance really matters.

Unfortunately, too little research has heeded Kuklinski and colleagues’ warning to be mindful of the

difference between the misinformed and the uninformed, and what research has heeded this warning has


not gone far enough to check whether the recommended strategy — focusing on those who claim to

be certain — actually succeeds. In demonstrating systematic differences in the degree to which correct

and incorrect answers capture respondents’ perceptions, this paper provided important evidence that

survey-based studies of political misperceptions have systematically exaggerated the breadth and depth

of the public's belief in political misinformation.

The most important limitation of the evidence presented here is generalizability. Though a wide

variety of topics were considered, a fair amount of survey space was devoted to general knowledge

questions, which primarily serve as a benchmark for the other topics, and questions about economic and

social statistics that are the focus of influential work in this area (Kuklinski et al. 2000; Bartels 2002).

Moreover, the performance of the scales examined here may not be the ceiling; with different scale

features or more extensive training, perhaps a smaller subset of certain and incorrect respondents could

be isolated with greater precision. Though the evidence presented here was not broad enough to establish its results as a universal pattern, it was broad enough to clarify that absent concrete evidence to

the contrary, consumers of survey data should assume that incorrect answers are miseducated guesses.

Researchers have good reasons to be suspicious of stronger interpretations of incorrect answers, and those

who wish to justify such interpretations can use this paper’s approach as a template for shouldering the

burden of proof.

This view of incorrect answers to factual questions has broad implications for research on misin-

formation’s origins and consequences. Studies that examine the correlates of endorsing false statements

in surveys frequently attribute apparent misperceptions, or susceptibility to false claims, to larger un-

derlying dispositions like strength of partisanship or conspiracy-mindedness (). Without disputing the

observed correlations between these various types of survey responses, the results presented here suggest

an alternative interpretation: people with these dispositions may be more likely to reason that false

statements they have never thought about are more likely than not to be true. Given the current state

of the evidence, researchers who describe incorrect answers as “endorsements” rather than beliefs (e.g.,

Oliver and Wood 2014) adopt the more defensible posture. Survey data provide a reasonably sound basis

to evaluate what predispositions might lead someone to agree with a particular claim if they encountered

it in the real world, but do not provide a sound basis to conclude that this has already happened.

The results also offer lessons for research on correcting misperceptions. While many have noted

a strong theoretical expectation that more-certain misperceptions should be harder to correct, the few

studies that are equipped to test whether certainty predicts resistance to correction have either found

no heterogeneity (Thorson 2015b) or have not reported a test for such heterogeneity (Kuklinski et al.

2000). This paper showed that an important barrier to testing this prediction is identifying respondents

who are actually certain of the falsehood to begin with.

Often, studies that correct alleged misperceptions are interested in the effect of factual beliefs on


attitudes. Findings in this literature are mixed: whereas some studies find that providing respondents

with correct numerical information changes their attitudes (Ahler and Broockman 2018; Gilens 2001;

Grigorieff et al. 2018), others find no such evidence (Hopkins et al. 2019; Kuklinski et al. 2000; Lawrence

and Sides 2014). Research in this area has not rigorously examined the possibility that the responses

interpreted as baseline “misperceptions” were virtually meaningless to the respondent. It may not be

a coincidence that failed treatments often focus on numerical statistics, which respondents have trouble

connecting to their perceptions absent a concerted effort to provide context (Ansolabehere et al. 2013).

The provision of correct information may not be better-interpreted as an information treatment than

as a correction. Baseline correlations between errant guesses and attitudes may merely be evidence

that survey responses, not misperceptions, are “a consequence, rather than a cause, of attitudes toward

those groups” (Hopkins et al. 2019, abstract). Absent evidence of the degree to which respondents’

incorrect guesses actually reflect underlying perceptions — which no study in this area provides — the

interpretation of factual information treatments, as well as the implications of their failure to move

attitudes, are unclear.

Perhaps the most important implication of this paper’s findings concerns the social impact of survey

research. While scholars vary in their commitment to judicious interpretation of survey evidence, public-

facing journalists and pollsters are nearly unanimous in their interpretation that “[p]eople have decided

beliefs, and surveys elicit those beliefs” (Bullock and Lenz 2019, 326). Outside of the occasional generic

reference to Converse (1964), Zaller (1992), or Tourangeau et al. (2000), survey researchers have produced

little concrete basis for pushback against these misleading interpretations of polling data. Consequently,

even as misinformation research has done a great deal to advance public concern about misinformation,

it has overlooked its own contribution to widespread public-facing misinformation about the degree to

which members of the public are misinformed. Even researchers who do not intend “misinformed” to be

a literal description cannot assume that readers are on the same page.

Until concrete evidence emerges that some question or survey technology reliably captures false

beliefs about the facts themselves, researchers who wish to meet their obligation to provide accurate

information to their audience should strive to be judicious in their interpretation of incorrect answers to

factual questions. Political misinformation is a serious problem, and taking it seriously requires serious

attention to the alignment between measured quantities and the phenomenon of interest.


References

Ahler, Douglas J. and David E. Broockman. 2018. “The Delegate Paradox: Why Polarized Politicians Can Represent Citizens Best.” The Journal of Politics.

Ansolabehere, Stephen, Jonathan Rodden and James M. Snyder. 2008. “The strength of issues: Using multiple measures to gauge preference stability, ideological constraint, and issue voting.” American Political Science Review 102(2):215–232.

Ansolabehere, Stephen, M. Meredith and Erik Snowberg. 2013. “Asking About Numbers: Why and How.” Political Analysis 21(1):48–69.

Bartels, Larry M. 2002. “Beyond the Running Tally: Partisan Bias in Political Perceptions.” Political Behavior 24(2):117–150.

Berinsky, Adam J. 2017. “Rumors and Health Care Reform: Experiments in Political Misinformation.” British Journal of Political Science 47(2):241–262.

Berinsky, Adam J. 2018. “Telling the Truth about Believing the Lies? Evidence for the Limited Prevalence of Expressive Survey Responding.” The Journal of Politics 80(1):211–224.

Bishop, George F., Robert W. Oldendick and Alfred J. Tuchfarber. 1984. “What Must My Interest in Politics Be If I Just Told You ’I Don’t Know’?” The Public Opinion Quarterly 48(2):510–519.

Bullock, John G, Alan S Gerber, Seth J Hill and Gregory A Huber. 2015. “Partisan Bias in Factual Beliefs about Politics.” Quarterly Journal of Political Science 10:1–60.

Bullock, John G and Gabriel Lenz. 2019. “Partisan Bias in Surveys.” Annual Review of Political Science 22:325–342.

Clifford, Scott and Jennifer Jerit. 2016. “Cheating on Political Knowledge Questions in Online Surveys: An Assessment of the Problem and Solutions.” Public Opinion Quarterly 80(4):858–887.

Converse, Philip E. 1964. The nature of belief systems in mass publics. In Ideology and Discontent, edited by D. Apter. New York: Free Press.

Converse, Philip E. 1970. Attitudes and Non-Attitudes: Continuation of a Dialogue. In The Quantitative Analysis of Social Problems, edited by Edward R. Tufte. Reading, MA: Addison-Wesley pp. 168–189.

Delli Carpini, Michael X. and Scott Keeter. 1996. What Americans Know About Politics and Why it Matters. New Haven: Yale University Press.

Flynn, D. J., Brendan Nyhan and Jason Reifler. 2017. “The Nature and Origins of Misperceptions: Understanding False and Unsupported Beliefs About Politics.” Political Psychology 38(682758):127–150.

Flynn, D.J. 2016. “The Scope and Correlates of Political Misperceptions in the Mass Public.” Unpublished manuscript.

Gilens, Martin. 2001. “Political Ignorance and Collective Policy Preferences.” American Political Science Review 95(2):379–396.

Graham, Matthew H. 2020a. “Self-Awareness of Political Knowledge.” Political Behavior 42:305–326.

Graham, Matthew H. 2020b. “Two Sources of Bias in Measures of Partisan Differences in Factual Beliefs.” Unpublished Manuscript.

Grigorieff, Alexis, Christopher Roth and Diego Ubfal. 2018. “Does Information Change Attitudes Towards Immigrants? Representative Evidence from Survey Experiments.” SSRN.

Hainmueller, Jens, Daniel J Hopkins and Teppei Yamamoto. 2015. “Causal Inference in Conjoint Analysis: Understanding Multi-Dimensional Choices via Stated Preference Experiments.” Political Analysis 22(1):1–30.

Hill, Seth J. 2017. “Learning Together Slowly: Bayesian Learning about Political Facts.” The Journal of Politics 79(4):1403–1418.


Hochschild, Jennifer L. and Katherine Levine Einstein. 2015. Do Facts Matter? Norman: University of Oklahoma Press.

Hofstetter, C. Richard, David C. Barker, James T. Smith, Gina M. Zari and Thomas A. Ingrassia. 1999. “Information, Misinformation, and Political Talk Radio.” Political Research Quarterly 52(2):353–369.

Holt, Charles A. and Angela M. Smith. 2016. “Belief Elicitation with a Synchronized Lottery Choice Menu That Is Invariant to Risk Attitudes.” American Economic Journal: Microeconomics 8(1):110–39.

Hopkins, Daniel J., John Sides and Jack Citrin. 2019. “The muted consequences of correct information about immigration.” Journal of Politics 81(1):315–320.

Jerit, Jennifer, Tine Paulsen and Joshua Aaron Tucker. 2020. “Confident and Skeptical: What Science Misinformation Patterns Can Teach Us About the COVID-19 Pandemic.” SSRN Electronic Journal pp. 1–26.

Kuklinski, James H, Paul J Quirk, David W Schwieder and Robert F Rich. 1998. “‘Just the Facts, Ma’am’: Political Facts and Public Opinion Source.” The Annals of the American Academy of Political and Social Science 560(November):143–154.

Kuklinski, James H, Paul J Quirk, Jennifer Jerit, David Schwieder and Robert F Rich. 2000. “Misinformation and the currency of democratic citizenship.” The Journal of Politics 62(3):790–816.

Lawrence, Eric D. and John Sides. 2014. “The consequences of political innumeracy.” Research & Politics pp. 1–8.

Lazer, David M. J., Matthew A. Baum, Yochai Benkler, Adam J. Berinsky, Kelly M. Greenhill, Filippo Menczer, Miriam J. Metzger, Brendan Nyhan, Gordon Pennycook, David Rothschild, Michael Schudson, Steven A. Sloman, Cass R. Sunstein, Emily A. Thorson, Duncan J. Watts and Jonathan L. Zittrain. 2018. “The science of fake news.” Science 359(6380):1094–1096.

Lee, Seonghui and Akitaka Matsuo. 2018. “Decomposing political knowledge: What is confidence in knowledge and why it matters.” Electoral Studies 51(1):1–13.

Lewandowsky, Stephan, Ullrich K. H. Ecker, Colleen M. Seifert, Norbert Schwarz and John Cook. 2012. “Misinformation and Its Correction.” Psychological Science in the Public Interest 13(3):106–131.

Lewandowsky, Stephan, Werner G.K. Stritzke, Klaus Oberauer and Michael Morales. 2005. “Memory for Fact, Fiction, and Misinformation: The Iraq War 2003.” Psychological Science 16(3):190–195.

Luskin, Robert C., Gaurav Sood and Joshua Blank. 2018. “Misinformation about Misinformation: Of Headlines and Survey Design.” Unpublished Manuscript.

Luskin, Robert C and John G Bullock. 2011. “‘Don’t Know’ Means ‘Don’t Know’: DK Responses and the Public’s Level of Political Knowledge.” Journal of Politics.

Marietta, Morgan and David C. Barker. 2019. One Nation, Two Realities: Dueling Facts in American Democracy. Oxford University Press.

Nadeau, Richard and Richard G Niemi. 1995. “Knowledge Questions in Surveys.” Public Opinion Quarterly 59:323–346.

Oliver, J. Eric and Thomas J. Wood. 2014. “Conspiracy Theories and the Paranoid Style of Mass Opinion.” American Journal of Political Science 58(4):952–966.

Pasek, Josh, Gaurav Sood and Jon A. Krosnick. 2015. “Misinformed About the Affordable Care Act? Leveraging Certainty to Assess the Prevalence of Misperceptions.” Journal of Communication 65(4):660–673.

Peterson, Erik and Shanto Iyengar. 2020. “Partisan Gaps in Political Information and Information-Seeking Behavior: Motivated Reasoning or Cheerleading.” The American Journal of Political Science (Forthcoming).


Prior, Markus, Gaurav Sood and Kabir Khanna. 2015. “You Cannot be Serious. The Impact of Accuracy Incentives on Partisan Bias in Reports of Economic Perceptions.” Quarterly Journal of Political Science 10(July):489–518.

Schaffner, Brian F. and Samantha Luks. 2018. “Misinformation or Expressive Responding?” Public Opinion Quarterly 82(1):135–147.

Schuman, Howard and Stanley Presser. 1981. Questions and Answers in Attitude Surveys. San Diego: SAGE Publications.

Strack, Fritz and Leonard L Martin. 1987. Thinking, Judging, and Communicating: A Process Account of Context Effects in Attitude Surveys. In Social information processing and survey methodology, edited by Hans J. Hippler, Norbert Schwarz and Seymour Sudman. New York: Springer-Verlag pp. 123–148.

Sturgis, Patrick, Nick Allum and Patten Smith. 2008. “An experiment on the measurement of political knowledge in surveys.” Public Opinion Quarterly 72(1):90–102.

Swire, Briony, Adam J. Berinsky, Stephan Lewandowsky and Ullrich K. H. Ecker. 2017. “Processing political misinformation: comprehending the Trump phenomenon.” Royal Society Open Science 4:1–21.

Thorson, Emily. 2015a. “Belief Echoes: The Persistent Effects of Corrected Misinformation.” Political Communication 4609(November):1–21.

Thorson, Emily A. 2015b. “Identifying and Correcting Policy Misperceptions.” Unpublished Manuscript, George Washington University.

Tourangeau, Robert, Lance J. Rips and Kenneth Rasinski. 2000. The Psychology of Survey Response. Cambridge University Press.

Zaller, John. 1992. The Nature and Origins of Mass Opinion. Cambridge: Cambridge University Press.

Zaller, John and Stanley Feldman. 1992. “A Simple Theory of the Survey Response: Answering Questions versus Revealing Preferences.” American Journal of Political Science 36(3):579–616.
