
Running head: The seduction of easiness

The seduction of easiness: How science depictions influence laypeople’s

reliance on their own evaluation of scientific information 1

Lisa Scharrera*, Rainer Brommea, M. Anne Brittb & Marc Stadtlera

a Department of Psychology, University of Muenster, Fliednerstrasse 21, 48149 Muenster,

Germany, Email: [email protected]; [email protected]; stadtlm@uni-

muenster.de

b Psychology Department, Northern Illinois University, 363 Psychology-Math Building,

DeKalb, IL 60115, USA, Email: [email protected]

* Corresponding author: Westfälische Wilhelms-University, Department of Psychology,

Fliednerstrasse 21, 48149 Muenster, Germany. Email: [email protected]; Tel.:

+49 251 8339379; Fax: +49 251 8339105

1 Part of this work has been published as a contribution to the 33rd annual meeting of the Cognitive Science Society, 2011, Boston, USA.


Acknowledgments

This research was supported by the Deutsche Forschungsgemeinschaft (DFG), grant

BR1126/6-1.

Abstract

The present research investigated whether laypeople are inclined to rely on their own

evaluations of the acceptability of scientific claims despite their knowledge limitations.

Specifically, we tested whether laypeople are more prone to discount their actual dependence

on expert knowledge when they are presented with simplified science texts. In two

experiments, participants read scientific arguments that varied in comprehensibility and type

of argument support and therefore in apparent easiness. We assessed participants’ inclination

to rely on their own evaluation rather than deferring to expert advice when judging argument

persuasiveness. The results showed that laypeople were more strongly persuaded by

apparently easy arguments than by difficult ones. Furthermore, they were more confident in

their own evaluation of the information and less inclined to turn to an expert for decision-

making support after reading easy compared to difficult arguments.

Keywords: knowledge evaluation; expertise; argument comprehensibility; causal

explanations; evidence

1. Introduction

Laypeople frequently encounter situations where they must make decisions about science-

related issues. Whether making up one’s mind about undergoing a particular medical


treatment or deciding whether certain behaviors are detrimental to the environment, laypeople

must judge scientific claims and the arguments provided to support such claims. While the

Internet has reduced problems of access to scientific information, a major challenge remains

in the evaluation of this information, regarding its acceptability, usefulness and sufficiency

for solving a problem at hand (Blair & Johnson, 1987; Brand-Gruwel & Stadtler, 2011;

Bromme, Kienhues & Porsch, 2010; Goldman, 2011; Kienhues, Stadtler & Bromme, 2011;

Mason, Ariasi & Boldrin, 2011; Wiley et al., 2009). One reason why evaluation presents such

a challenge lies in the specific nature of science knowledge. Advances in science and

technology have led to enormous growth of scientific knowledge, accompanied by

specialization and differentiation. To manage this complexity, scientific knowledge is

organized into different disciplines that are represented by specialized experts (Keil, Stein,

Webb, Billings & Rozenblit, 2008), constituting a ‘division of cognitive labor’ (Keil et al.,

2008; Putnam, 1975; Stehr, 1994).

Due to such division of cognitive labor in modern societies, we remain laypeople throughout

our whole lifetime regarding most topics and domains of science. Laypeople generally lack

the high level of background knowledge and specialized training required to come to a well-

founded decision about the veracity and thus the appropriate persuasiveness of scientific

claims. This is what sets laypeople apart from ‘experts’, who are specifically trained and

experienced in a domain, and from ‘novices’, who are not yet experts, but strive to become

experts through relevant training and will already have acquired basic domain knowledge and

a certain discipline-based perspective (Bromme, Rambow & Nückles, 2001).

Hence, when laypeople are in a situation which requires them to decide about a complex

science-related issue, they are confronted with a paradox: On the one hand they need to come

to a decision but on the other hand they lack the expert knowledge necessary for making well-

founded decisions. This problem has been discussed predominately in health and medical


research focusing on patient decision-making. In this context, the ideal of ‘informed decision-

making’ has been proposed, according to which individuals make their own judgment (for

example about a certain treatment option) only if they have acquired sufficient knowledge and

information about the issue at hand to do so (Bekker et al., 1999; Charles, Gafni & Whelan,

1997). Such information might be delivered by the expert (the doctor in charge) or the

layperson might find it elsewhere, nowadays mostly on the Internet (Fox, 2005). Even if this

ideal of ‘informed decision-making’ is realized, laypeople must judge if their understanding

of the issue is sufficient, or if it is still necessary to defer to an expert. Judging which of these

two strategies is the appropriate course of action logically requires that the individual can

correctly assess the sufficiency of their own knowledge for the decision task at hand. In other

words, they have to successfully engage in metacognitive calibration by forming an accurate

awareness of what they know and what they do not know (Glenberg & Epstein, 1985; Pieschl,

2009).

Unfortunately, as Keil and his group have shown (Keil et al., 2010), laypersons tend to

overestimate their understanding of complex science- and technology-based issues. If

laypeople fail to recognize their own knowledge limitations they might falsely assume that

their understanding of the gathered information related to a certain knowledge claim enables

them to evaluate the persuasiveness of the claim. As a consequence, they might underestimate

their actual need to defer to an expert to reach a truly informed decision. Conversely, only if

individuals feel they lack sufficient understanding of the matter should they refrain from

relying on their own evaluations. Ascertaining the conditions under which laypeople become

aware of their own epistemic limitations when they process science-based text information

remains an open empirical question. In this investigation, we are interested in the

characteristics of science texts that might affect the development of such awareness.


Judging scientific claims is not a simple or straightforward act. Persuasiveness itself is a

graded concept. Persuasiveness of arguments can be indicated by intensity of claim agreement

and evaluation of argument strength, two measures that feature prominently in persuasion

research (O’Keefe, 2002). Both can be expected to be closely related but are nevertheless not

equivalent. While lay recipients may perceive the evaluation of argument strength as a more

abstract, comparably ‘safe’ task without any major consequences, expressing agreement with

a claim must be approached more cautiously, since claim agreement might be regarded as

closely connected to actual behavioral consequences.

To summarize, laypeople’s reliance on their own evaluations should manifest itself in their

inclination to be easily persuaded by provided information. In contrast, if recipients do not

feel competent to decide about information quality, they should be more hesitant in their

judgments and refrain from indicating a strong opinion. Furthermore, an inclination to rely on

their own evaluations should be reflected even more directly by laypersons’ confidence in

their own decision about a claim. Strong confidence in a decision should manifest itself in

high levels of trust in the own judgment and conversely in a weak desire to turn to an expert

for decision support. This pattern is consistent with the concept of informed decision-making

(Bekker et al., 1999), which implies that only if recipients feel that they are sufficiently

informed and competent to form an opinion about the subject matter should they let

themselves be persuaded by a given argument.

1.1 Characteristics of science information

It may be particularly difficult for laypeople to recognize that they should not rely on their

own judgment when they encounter scientific information written in a way that makes the

subject matter appear less complex and easier to evaluate than it actually is. Such is the case

when laypeople read scientific information especially prepared for their consumption, i.e.


presented in a simplified way in order to make it superficially comprehensible for the lay

public (Kajanne & Pirttilä-Backman, 1999; Wagner, Elejabarrieta & Lahnsteiner, 1995;

Zimmerman, Bisanz, Bisanz, Klein & Klein, 2001). When laypeople encounter simplified

texts, their sense of understanding may mislead them to consider the subject matter easy and

uncomplicated and to judge their mental representations of the described phenomena as more

complete and accurate than they actually are (cf. Goldman & Bisanz, 2002). Such an

impression may manifest itself in the conviction that their knowledge and skills do not differ

meaningfully from that of an expert in the field. In this case, laypeople would fail to

differentiate between the background knowledge required to comprehend presented

information and the background knowledge required to evaluate the acceptability of the

information for justifying a given claim. Comprehending simplified information might thus

lead laypeople to regard the described phenomena and mechanisms to be so transparent and

self-evident that they themselves are able to evaluate the information (cf. Keehner & Fischer,

2011) and that deferring to an expert is an unnecessary waste of time and energy. Therefore, if

presented with a simple account, laypeople should be inclined to rely on their own content

evaluation. If, however, the information is presented in a way that does not conceal the

inherent difficulty of the subject matter, then laypeople should refrain from relying solely on

their own judgment about information persuasiveness (cf. Reber & Schwarz, 1999;

Schwartz, 2004).

1.2 Influences on perceived easiness of scientific contents

The assumption that easy information causes recipients to rely more strongly on their own

evaluation of scientific claims raises the question of what characteristics make scientific

information appear easy to laypeople. We presume that perceived easiness is influenced by at

least two message characteristics: (1) comprehensibility of the depicted information and (2)


type of argument support which is presented to back a claim. Our assumption is based on

theoretical considerations from previous literature and empirical findings which show both

characteristics to have an impact on the persuasiveness of arguments (e.g. Brem & Rips,

2000; Eagly, 1974; Slusher & Anderson, 1996).

1.2.1 Information comprehensibility

Previous findings have shown that people agree more strongly with claims presented as part

of comprehensible arguments than with those that are difficult or impossible to comprehend

(e.g. Bradley & Meeds, 2004; Murphy, Long, Holleran & Esterly, 2003). However, research

on this issue has mainly dealt with arguments supporting moral claims and arguments

advertising the usefulness of consumer products. Due to this focus on moral claims and

advertisement, it remains unclear whether the observed comprehensibility effect also applies

to scientific arguments. In fact, the question of whether it generalizes is not trivial. In the case

of moral arguments or arguments intended to advertise everyday products, recipients may

perceive their own values and personal experiences as a sufficient basis for argument

evaluation. However, in the case of scientific information laypeople might be aware that its

evaluation is usually beyond their capabilities—regardless of whether they feel that they

understand a piece of information. If this is the case, laypeople should not be inclined to agree

more strongly with a scientific claim supported by comprehensible than by incomprehensible

information.

Two studies did in fact investigate the persuasive effect of comprehensibility for scientific

arguments. While Miller et al. (1976) detected no difference in persuasiveness between

comprehensibility conditions, Eagly (1974) found incomprehensible arguments to cause

weaker claim agreement than comprehensible arguments. However, neither study adequately

answers the question of whether laypeople are more persuaded by arguments they perceive as


easy. In Miller et al.’s (1976) study, it remains unclear whether the manipulation of

comprehensibility was indeed successful, as no manipulation check was conducted.

Furthermore, Eagly’s (1974) study investigated a situation of expert–novice communication

rather than of expert–layperson communication. Participants (psychology students) were

required to evaluate contents from their own field of study and, due to their prior training,

may have perceived themselves as principally able to evaluate the given message.

In sum, previous findings on moral issues and advertisement support the notion that

comprehensibility has an influence on argument persuasiveness. However, it is not clear

whether comprehensibility also affects the persuasiveness of scientific arguments and whether

any persuasiveness effects of comprehensibility extend to laypeople’s confidence in their own

judgments of the provided information.

1.2.2 Type of argument support

Previous research has differentiated between two types of argument support that can back a

causal claim: causal support and empirical support. Causal support explains the mechanism

underlying the claimed causal connection (e.g. ‘Beta blockers decrease high blood pressure,

because they inhibit the activity of hormones which cause blood pressure-raising stress

reactions in the body.’). In contrast, empirical support backs the claim by referring to relevant

statistical data (e.g. ‘Beta blockers decrease high blood pressure, because blood pressure was

decreased by 20-28% among patients who regularly took beta blockers.’) (for comparable

distinctions see Brem & Rips, 2000; Koslowski, 1996; Sandoval & Cam, 2011; Slusher &

Anderson, 1996). In other words, whereas causal support answers the question ‘Why are two

concepts causally related?’, empirical support answers the question ‘How do we know that

two concepts are related?’ (Glassner, Weinstock & Neuman, 2005). Kuhn (1991) considers

only evidence (i.e. empirical support) as ‘genuine evidence’ while discounting explanations


(i.e. causal support) which are not complemented by additional data as ‘pseudoevidence’.

According to Kuhn, explanations serve to establish the plausibility of a claim, which is neither

necessary nor sufficient to support its believability. Rather, it is only evidence that

provides direct support for the correctness of a causal claim. This notion is also prevalent in

the empirical sciences, where the most highly regarded justification for claims is empirical

evidence.

However, despite the prominent role that evidence plays in empirical science, previous

literature regards explanations as a particularly persuasive form of argument support (Gopnik,

2000; Lombrozo, 2006; Trout, 2002) and suggests that laypeople prefer causal support as

claim justifications, possibly because its evaluation is perceived as easier. According to Keil

(2006, 2010), individuals have a sophisticated sense of causal relations and structure and

seek out explanations. These activities form the essence of individuals’ folk science. Thus,

laypeople may consider causal support as easier to evaluate than empirical support, since

causal support more closely reflects the kinds of claim justifications they consider in everyday

life. This assumption is also in line with Brem and Rips’ (2000) reasoning that explanations

provide a method for vetting claims by allowing individuals to judge the described

mechanisms for internal coherence and consistency with prior knowledge. Because they allow

this kind of vetting, laypeople may perceive explanations as adequate subjects for their own

evaluation. Laypeople should therefore be more easily and confidently persuaded by

arguments containing causal support than by arguments containing empirical support.

The persuasive effect of causal support compared to empirical support has been investigated

in several studies. However, while findings by Slusher and Anderson (1996) indeed confirm

that arguments containing causal support cause greater claim agreement than arguments

containing empirical support, Brem and Rips (2000) conversely found recipients to prefer

evidence over explanations. Similarly, in a study by Sandoval and Cam (2011), 8- to 10-year-


old children were shown to slightly prefer empirical over causal support. Overall, findings on

the persuasive impact of scientific causal and empirical support indicate that there is indeed a

difference in persuasiveness between the two support types. Although theoretical

considerations suggest a persuasive advantage of causal over empirical support from a

layperson’s point of view, results are mixed as to the direction of the persuasive difference. A

possible explanation for the inconsistency of findings is that type of argument support and

comprehensibility might have been confounded in at least some studies. In cases where

empirical support had been more comprehensible than its causal counterpart, the perceived

easiness of comprehensible support might have outweighed the easiness ascribed to causal

support. Moreover, and similar to the state of affairs regarding comprehensibility, the effect of

support type on recipients’ confidence in their decision about information persuasiveness has

not been investigated directly. Thus there is a need to assess whether and how support type

influences laypeople’s readiness to rely on their own evaluations.

1.3 The present research

In two experiments we investigated how features of science depictions influence lay

recipients’ inclination to rely on their own evaluations of scientific information. To this end, we

examined how scientific argumentative texts varying in comprehensibility and type of argument

support influence argument persuasiveness and laypeople’s confidence in their

claim agreement decision.

We expect that if scientific information is presented in a way that makes it difficult for

laypeople to process, they will be more likely to realize that as non-experts they are in fact

unable to decide whether the information poses a sound argument to support a given claim.

Specifically, we hypothesized that recipients would be more intensely persuaded by

comprehensible than incomprehensible arguments. Therefore, reading comprehensible


arguments should lead to greater agreement with the claim than reading incomprehensible

arguments (hypothesis 1a) and laypeople should evaluate comprehensible arguments as

stronger (i.e. more supportive of the claim) than incomprehensible arguments (hypothesis 1b).

Analogously, based on the assumption that causal support is perceived as easier by laypeople,

arguments containing causal support should be more persuasive than arguments containing

empirical support. Therefore, we expect that reading causal support will lead to stronger claim

agreement than empirical support (hypothesis 2a) and moreover, that laypeople will evaluate

arguments containing causal support as stronger than those containing empirical support

(hypothesis 2b).

We also expect that laypeople would be more confident in their own decision about the

acceptability of provided scientific information when information processing is easy than

when it is difficult. Therefore, comprehensible arguments should cause laypeople to have

greater trust in their own decision about the claim (hypothesis 3a) and conversely a weaker

desire to consult an expert for further decision-making support than they would have with

incomprehensible arguments (hypothesis 3b). Similarly, receiving arguments with causal

support should lead lay recipients to trust more strongly in their own decision about the claim

(hypothesis 4a) and to have a weaker desire to consult an expert than when they receive

arguments with empirical support (hypothesis 4b).

2. Experiment I

To investigate our hypotheses, we conducted a first experiment in which participants read

texts about scientific topics which provided support for a claim. The extent to which

laypeople were persuaded by the provided information and their confidence in the claim

agreement decision were assessed.


2.1 Method

2.1.1 Design and Participants

The experiment was conducted with a 2x2 repeated measures design, the independent

variables being argument comprehensibility (comprehensible vs. incomprehensible) and type

of argument support (causal vs. empirical). Each participant was assigned to all four

experimental conditions. In each condition, participants were asked to read an argument about

a medical topic. Thus, every recipient read four arguments in total: one comprehensible

argument containing causal support, one comprehensible argument containing empirical

support, one incomprehensible argument containing causal support and one incomprehensible

argument containing empirical support. The order of exposure to experimental conditions was

counterbalanced among participants by using a Latin square design. Hence, across the sample,

arguments from each condition were equally often presented in each position.
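
As an informal illustration of this counterbalancing scheme, the sketch below builds a cyclic 4x4 Latin square so that each condition occupies each serial position equally often across participants. The condition labels and the assignment rule are our own assumptions for illustration, not the study's actual materials.

```python
# Hypothetical sketch of the counterbalancing logic described above.
conditions = ["comprehensible-causal", "comprehensible-empirical",
              "incomprehensible-causal", "incomprehensible-empirical"]

def latin_square(items):
    """Cyclic Latin square: row i presents the items rotated by i positions,
    so every item appears exactly once in every serial position."""
    k = len(items)
    return [[items[(i + j) % k] for j in range(k)] for i in range(k)]

orders = latin_square(conditions)

# Assign the four orders to participants in rotation (88 participants -> 22 per order).
assignments = {participant: orders[participant % len(orders)] for participant in range(88)}
print(assignments[0])
print(assignments[1])
```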

Eighty-eight undergraduates (52 female, 36 male, mean age = 25.66 years, SD = 5.13) of

various majors at a German university took part in the study and received 8 Euros for their

participation. Thus, the sample consisted of adult participants who had undergone at least

some academic training and who should therefore have developed a generally sophisticated

understanding of the nature of knowledge, making them principally able to appropriately

handle scientific knowledge questions (e.g. King & Kitchener, 1994). To ensure participants’

lay status, students of medicine, biology or related subjects and students of empirical sciences

(including psychology), who can be assumed to be particularly familiar with empirical

argument support, were excluded from participation.

2.1.2 Materials

Expository texts about four medical issues were created (mean text length = 80.5 words, SD =

16.46). The texts contained concepts and relations that were derived from real-world concepts


but were themselves imaginary. The use of imaginary content was intended to keep readers’

prior knowledge as low as possible. Furthermore, it should reduce the likelihood that

participants had already formed a strong prior attitude about the specific issues, as prior

attitude has been shown to have an influence on individuals’ agreement with an argument

(Erb, Bohner, Rank & Einwiller, 2002). All texts were written by the first author of this paper,

with the help of authentic online texts of different types and from various sources that dealt

with related medical issues, for example texts from scientific journals such as European

Journal of Lipid Science and Technology, popularized science texts intended for lay

audiences such as information from www.netdoctor.de (a German website offering

information on various health topics), and websites offering a mixture of simplified contents

and more specialized in-depth information such as Wikipedia.

Each text consisted of an argument that supported an issue-related causal claim (claim 1: ‘The

intake of Eratomin leads to a decrease of the blood cholesterol level.’, claim 2: ‘The Burkitt-

Virus causes the development of the Devic Syndrome in the body.’, claim 3: ‘A side-effect of

Rethoxat is that it brings about asthma attacks.’, claim 4: ‘The intake of Lisinorase helps

against high blood pressure.’). The claim was always stated at the beginning of the argument,

followed by information that acted as claim support. For every text, four variations were

created, analogous to the experimental conditions: In the causal support conditions, the claim

was supported by an explanation of the underlying mechanism and in the empirical support

conditions by statistical data. For example, presented below are the English translations of the

four argument versions that support the claim ‘The intake of Eratomin leads to a

decrease of the blood cholesterol level’:

(1) Comprehensible argument containing causal support:

‘The intake of Eratomin leads to a decrease of the blood cholesterol level.


This is because the body is provided with cholesterol through absorbing it from food.

During the absorption process in the small intestine, specialized proteins split fats that are

contained in the food into cholesterol. Eratomin binds these proteins and by this means,

the proteins lose their ability to split fat molecules. As consequentially the amount of

cholesterol that is absorbed from food is decreased, the amount of cholesterol in the body

is lowered.’

(2) Incomprehensible argument containing causal support:

‘The intake of Eratomin leads to a decrease of the blood cholesterol level.

This is because cholesterol absorption in the intestinum tenue is controlled by PKA-

regulated lipases. In this process, triacylglycerol from food is split into mono- and

diglycerid as well as cholesterol. Eratomin acts as ligand of carboxylesterase and thus as

competitive inhibitor, because the carboxylesterase-ligand-complex has no hydrolytic

effect. This leads to a reduction of the cholesterol absorption rate and consequentially to a

decrease of the amount of high- and low-density lipoprotein in the sanguis.’

(3) Comprehensible argument containing empirical support:

‘The intake of Eratomin leads to a decrease of the blood cholesterol level.

This is because studies have shown that after a daily intake of 30mg Eratomin over a

period of six months, blood cholesterol levels are decreased by 20-28%. In contrast,

among a comparison group of patients who in the same period of time did not take

Eratomin, there was no lowering of cholesterol levels. This was attested by measurements

of the amount of cholesterol in the blood of both patient groups at the beginning and at

the end of the six months period. For this purpose, cholesterol particles were chemically


separated from the other component parts of the blood. The amount of cholesterol in the

blood could by this means be determined at both times of measurement.’

(4) Incomprehensible argument containing empirical support:

‘The intake of Eratomin leads to a decrease of the blood cholesterol level.

This is because studies among patients have shown a reduction of the amount of large-

buoyant and intermediate-dense as well as triglyceride-rich lipoproteins in the haima after

daily application of 30mg Eratomin over a period of six months by 20-28%. In a verum

free group there was c.p. no decrescence. This resulted from pre and post interventionally

applied capillary electrophoresis on all patients’ sanguis during which the sample

injection of CE (48 nl) was executed hydrodynamically.’

Comprehensibility of both argument support types was manipulated in three ways. First,

incomprehensible arguments contained unexplained technical terms, whereas in the

comprehensible condition these terms were translated into more familiar words (Nagy, 1988)

(e.g. ‘lipase’ in the incomprehensible condition was replaced by ‘specialized proteins’ in the

comprehensible condition). Second, incomprehensible support comprised unnecessary,

distracting details, which were not required to support the argument claim (Trabasso, Secco &

van den Broek, 1984) (e.g. the fact that lipase is PKA-regulated). Third, in comprehensible

argument support, important information was repeated. According to Tyler (1994), repetition

contributes to discourse comprehensibility by signaling recipients how to incorporate new

information into the progressing discourse, thus facilitating perceptions of text coherence (e.g.

‘This is because the body is provided with cholesterol through absorbing it from food. During

the absorption process [...]’; NB repeated information was not underlined in the original


material). However, comprehensibility manipulations were only applied to the argument

support while the claim was stated in the same wording in all conditions.

Participants were provided with one argument from each experimental condition, whereby

each argument addressed a different medical topic. Before reading each argument,

participants were presented a framing story to provide a context for reading (Stadtler &

Bromme, 2008). In this framing story, the participant was asked to imagine that a good friend

who had a particular medical problem was unsure whether a certain problem-related claim

was true or false and asked the participant about their opinion. After reading the framing

scenario, participants were asked to imagine that they had researched the Internet for

information to find an answer to the friend’s question. Participants were then presented with

an argument text which dealt with the medical topic described in the scenario, and which they

had allegedly found during their search.

In each experimental condition, participants were presented with a separate framing scenario,

fitting to the respective medical topic of the argument. In order to keep the social role ascribed

to the online source from which the argument stemmed constant between conditions, each

argument was described as being authored by a medical expert but without providing a name

or any further information about the author. No indication was given to participants that the

texts they read in the different conditions were authored by the same source. Rather,

participants should have been deterred from assuming the same author behind every text,

since they were required to imagine themselves in completely different scenarios in every

condition.

2.1.3 Dependent measures

Manipulation check. To assess whether comprehensibility had been manipulated as intended

between the argument versions, participants evaluated each argument for perceived


comprehensibility on a scale from 1 (‘very incomprehensible’) to 7 (‘very comprehensible’).

Since comprehensibility might be interpreted differently by different readers (Wiley, Griffin

& Thiede, 2005), participants were provided with a short definition of what the experimenters

meant by comprehensibility to ensure that each participant judged the arguments by

comparable standards. This definition was formulated as follows, by drawing on components

of Jucks’ (2001) questionnaire to assess laypeople’s perspective on text comprehensibility

(translated from German): ‘By comprehensible we mean that you perceive the text as vivid,

that you feel you can distinguish essential from rather unimportant information, that you think

you can judge whether the single statements are consistent and not in conflict with one

another, and that you feel you can clearly and comprehensively understand and connect the

single statements made by the author.’

Argument persuasiveness. The extent to which participants were persuaded by the information

contained in the argument texts was measured by two variables:

(1) Claim agreement. Participants’ agreement with each claim was assessed prior and

subsequent to reading the claim-supporting argument. Participants were asked to indicate their

agreement on a 1 (‘I don’t agree at all’) to 7 (‘I totally agree’) scale; the scale did not

comprise a separate ‘I don’t know’ option. The pre-measure of claim agreement was

administered to ensure that laypeople’s prior opinion about the claims did not coincidentally

differ between experimental conditions.

(2) Argument strength. Participants were asked to evaluate the strength of each argument on a

1 to 7 scale, where 1 meant ‘the argument provides no support for the claim’ and 7 meant ‘the

argument provides strong support for the claim’.


Confidence in the claim agreement decision. Participants’ confidence in their own capability

to make a claim agreement decision was indicated by two measures:

(1) Trust in one’s own judgment of the claim correctness. After reading each argument,

participants indicated on a 1 to 7 scale how strongly they agreed with the statement ‘I am

confident in my own decision about whether it is true that [claim statement inserted]’ (1:

‘don’t agree’, 7: ‘strongly agree’).

(2) Desire to consult an expert for additional decision support. After reading the argument,

participants were asked how strongly they agreed with the statement ‘Before I decide about

whether it is true that [claim statement inserted], I would like to seek further advice from an

expert’ on a 1 to 7 scale (1: ‘don’t agree’, 7: ‘strongly agree’).

2.1.4 Procedure

Data collection took place in group sessions with a maximum of eight participants who

worked individually on a booklet containing the arguments and scales for collecting the

dependent measures. The booklet first presented participants with a scenario in which the

fictitious friend’s problem was described, thereby providing the frame story for the participants’

task of reading the argument. The pre-measure of participants’ claim agreement was collected

and then participants read the argument, which they had allegedly found while doing Internet

research to find information regarding their friend’s problem. Afterwards, they provided their

post-measure of claim agreement as well as their trust in their own judgment and desire to

consult an expert. This was repeated four times, so that each participant read four arguments,

one of each experimental condition. After the described measures were collected for all four

arguments, readers were presented again with each argument and were asked to evaluate its

strength and comprehensibility. Participants then completed a demographic questionnaire and


were finally debriefed about the fictitious nature of the presented arguments. On average, the

session lasted about 45 minutes.

2.2 Results

Means and standard deviations of the dependent measures as a function of comprehensibility

and argument support type are shown in Table 1; in the text we will additionally refer to

pooled data (estimated marginal means and standard errors) that was used for statistical

analyses. To determine the statistical power achieved when analyzing the data with repeated

measures F-tests, a post hoc power analysis using G*Power (Faul, Erdfelder, Lang &

Buchner, 2007) was conducted. The analysis with alpha error probability set at .05 and

correlations between levels of the repeated measures factors set at zero showed that the design

and sample size of the experiment yielded a power of .91 for detecting medium sized effects (f

= .25 according to Cohen, 1988).
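
The reported power value can be approximated from the noncentral F distribution. The sketch below is not the authors' script; it assumes the G*Power parameterization for within-subject factors (noncentrality λ = f²·n·m/(1−ρ), numerator df = m−1, denominator df = (n−1)(m−1)).

```python
# Rough reproduction of the post hoc power figure under the assumed
# 'repeated measures, within factors' parameterization.
from scipy import stats

f, n, m, rho, alpha = 0.25, 88, 2, 0.0, 0.05   # effect size, N, factor levels, correlation, alpha

lam = f**2 * n * m / (1 - rho)                 # noncentrality parameter
df1, df2 = m - 1, (n - 1) * (m - 1)            # numerator / denominator df
f_crit = stats.f.ppf(1 - alpha, df1, df2)      # critical F under the null hypothesis
power = stats.ncf.sf(f_crit, df1, df2, lam)    # P(F > f_crit) under the noncentral F
print(round(power, 2))                         # approximately .91
```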

--- Insert Table 1 about here ---

2.2.1 Test for order effects

First, we tested whether order of argument presentation had any effects on the dependent

measures. By this means we intended to rule out the possibility that the reading of earlier

presented arguments supported readers’ understanding of later presented arguments due to

their structural similarity, which might have interfered with our manipulation of argument

comprehensibility. For this purpose, we conducted repeated measures ANOVAs with

argument order (position 1 to 4) as within-participant-factor on scores of comprehensibility,

argument strength, pre- and post-measures of claim agreement, trust in own agreement


decision and desire to consult an expert. The results revealed no significant effects of

argument order on any of the dependent variables (all F < 1.96, ns).

2.2.2 Manipulation check: Perceived argument comprehensibility

A repeated measures ANOVA on comprehensibility ratings with the within-participant-

factors comprehensibility (comprehensible vs. incomprehensible) and type of argument

support (causal vs. empirical) yielded a significant main effect of argument comprehensibility,

F(1,87) = 744.053, p < .001, part. η2 = .895. An inspection of group means revealed that as

intended, arguments designed as comprehensible were considered as more comprehensible

(estimated marginal means (EMM) = 6.102, SE = .097) than arguments designed to be

incomprehensible (EMM = 2.028, SE = .112). Neither the main effect of support type nor the

support type*comprehensibility interaction was significant (both F(1,87) < 1.9, ns). Thus, the

manipulation check confirmed comprehensibility to vary orthogonally to type of argument

support.
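
For readers who wish to see the analysis structure in concrete form, the following sketch shows how such a 2x2 repeated measures ANOVA could be run in Python with statsmodels. The file and column names are hypothetical, and this is an illustration rather than the original analysis code.

```python
# Illustrative only: 2x2 within-subjects ANOVA on the comprehensibility ratings.
import pandas as pd
from statsmodels.stats.anova import AnovaRM

# Assumed long format: one row per participant x condition, with columns
# subject, comprehensibility ('comp'/'incomp'), support ('causal'/'empirical'), rating (1-7).
ratings = pd.read_csv("comprehensibility_ratings.csv")  # hypothetical file

result = AnovaRM(
    ratings,
    depvar="rating",
    subject="subject",
    within=["comprehensibility", "support"],
).fit()
print(result)  # F, df, and p for the two main effects and their interaction

# Partial eta squared for each effect can be recovered as F*df1 / (F*df1 + df2).
```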

2.2.3 Argument persuasiveness

We had expected laypeople to be more intensely persuaded after they had read

comprehensible compared to incomprehensible arguments and after they had read causal

argument support compared to empirical support. This should be indicated by greater claim

agreement (hypothesis 1a) and higher ratings of argument strength (hypothesis 1b). Moreover,

we had hypothesized that causal argument support leads to greater claim agreement

(hypothesis 2a) and higher ratings of argument strength (hypothesis 2b) compared to

empirical support.

(1) Claim agreement. Our hypotheses with regard to participants’ claim agreement were

tested by subjecting pre-and post-measures of claim agreement to a repeated measures


ANOVA. Comprehensibility, argument support type and time of measurement (prior vs.

subsequent to reading the argument) acted as within-participant-factors. Results showed a

significant main effect of time of measurement, F(1,87) = 119.048, p < .001, part. η2 = .578,

which was due to participants agreeing more with the claim after (EMM = 4.969, SE = .092)

than prior to reading the argument (EMM = 3.989, SE = .075). With regard to the influence

of comprehensibility on claim agreement, no significant main effect of comprehensibility was

found, F(1,87) = 1.338, ns, but a significant comprehensibility*time of measurement interaction,

F(1,87) = 7.482, p < .01, part. η2 = .079. As a follow-up, separate repeated measures ANOVAs

on the pre- and post-measures of claim agreement were conducted. Results showed that

whereas claim agreement did not differ between comprehensibility conditions prior to

argument reception, F(1,87) =.269, ns, there was a significant difference between

comprehensible and incomprehensible arguments after participants had read the argument,

F(1,87) = 5.659, p < .05, part. η2 = .061. In line with our expectations (hypothesis 1a), group

means revealed that agreement with the claim was higher after reading comprehensible

(EMM = 5.08, SE = .101) compared to incomprehensible arguments (EMM = 4.858, SE

= .105). In contrast to our expectations regarding the influence of support type (hypothesis

2a), we found neither a significant main effect of support type, nor a significant interaction of

support type and time of measurement, both F(1, 87) < 2.625, ns. The three-way interaction of

comprehensibility, support type and time of measurement was also not significant, F(1, 87)

= .795, ns.

(2) Perceived argument strength. To test our hypotheses regarding argument strength, we

conducted a repeated measures ANOVA on argument strength ratings with comprehensibility

and support type as within-participant-factors. Results revealed a significant main effect of

comprehensibility, F(1,87) = 111.41, p < .001, part. η2 = .562. Descriptive statistics showed that

as expected (hypothesis 1b), lay recipients judged comprehensible arguments as stronger


(EMM = 5.506, SE = .107) than incomprehensible arguments (EMM = 3.733, SE = .147).

Furthermore, we found a main effect of support type, F(1,87) = 13.067, p < .001, part. η2 = .131,

which confirmed hypothesis 2b: Arguments containing causal support were rated as stronger

(EMM = 4.892, SE = .125) than arguments containing empirical support (EMM = 4.347, SE =

.121). The interaction of comprehensibility and support type did not reach significance, F(1,87)

= 1.232, ns.

2.2.4 Confidence in the claim agreement decision

We had furthermore hypothesized that laypeople would be more confident in their own decision about

the claim after they had read comprehensible compared to incomprehensible arguments,

indicated by a higher trust in their own decision (hypothesis 3a) and a weaker desire to

consult an expert (hypothesis 3b). In addition, we expected causal argument support to lead to

higher levels of trust in one’s own decision (hypothesis 4a) and to a weaker desire to consult

an expert (hypothesis 4b) compared to empirical support.

Pearson’s correlation of both confidence measures showed that trust in the own agreement

decision and desire to consult an expert were negatively associated in all experimental

conditions (comprehensible causal: r = -.529, incomprehensible causal: r = -.406,

comprehensible empirical: r = -.418 and incomprehensible empirical: r = -.421; all

correlations were significant at a .01 level).
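
The per-condition correlations of this kind can be computed as in the short sketch below; the file and column names are assumptions for illustration only.

```python
# Sketch of the per-condition Pearson correlation between the two confidence measures.
import pandas as pd
from scipy.stats import pearsonr

# Assumed columns: subject, condition, trust, desire_expert.
confidence = pd.read_csv("confidence_measures.csv")  # hypothetical file
for condition, block in confidence.groupby("condition"):
    r, p = pearsonr(block["trust"], block["desire_expert"])
    print(f"{condition}: r = {r:.3f}, p = {p:.3f}")
```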

(1) Trust in own agreement decision. To test our hypotheses about the influence of

comprehensibility and support type, we conducted a repeated measures ANOVA on trust

scores with comprehensibility and support type as within-participant-factors.

As to the influence of argument comprehensibility on trust scores, we found that after

argument reception, trust was higher in the comprehensible (EMM = 4.125, SE = .162) than in

the incomprehensible conditions (EMM = 3.483, SE = .177), F(1,87) = 19.424, p < .001, part. η2


= .183, which supports hypothesis 3a. With regard to the influence of support type on trust in

one’s own decision, no significant main effect of support type was found, F(1,87) < 1.682, ns, which

is in contrast to our expectations (hypothesis 4a). Finally, the interaction of comprehensibility,

and support type was not significant, F(1, 87) = .127, ns.

(2) Desire to consult an expert. Our hypotheses regarding the desire to consult an expert were

tested by conducting a repeated measures ANOVA on desire ratings with comprehensibility

and support type as within-participant-factors. Concerning our expectations for argument

comprehensibility, results showed that, in line with hypothesis 3b, desire to seek out expert

advice was significantly higher after reading incomprehensible (EMM = 6.205, SE = .116)

than comprehensible arguments (EMM = 5.858, SE = .141), F(1, 87) = 7.527, p < .01, part. η2

= .080. However, results revealed no significant differences in desire scores between both

support types, F(1,87) < 3.379, ns. Hypothesis 4b was therefore not confirmed. Finally, no

significant interaction of comprehensibility and support type was found, F(1, 87) = .841, ns.

2.3 Discussion

The results obtained in Experiment I show that comprehensibility of scientific texts clearly

influences laypeople’s reliance on their own evaluations of scientific claims. In line with

hypotheses 1a/b, participants were more intensely persuaded by comprehensible than

incomprehensible arguments, which was indicated by a greater inclination to agree with the

argument claim and higher ratings of perceived argument strength. Moreover, as we had

expected, laypeople were more confident in their agreement decision after reading

comprehensible arguments. In this case, they showed higher levels of trust in their own

decision about the claim and conversely perceived themselves less in need of additional

expert advice than after reading incomprehensible arguments, thus supporting hypotheses

3a/b.


Results obtained for type of argument support partly differ from the pattern observed for

comprehensibility. Although, in line with hypothesis 2b, lay recipients evaluated arguments

containing causal and empirical support differently regarding argument strength, with

arguments containing causal support being slightly preferred, this did not translate to their

actual agreement with the claim or to their confidence in their claim agreement decision.

Hypotheses 2a and 4a/b were therefore not supported.

A possible explanation for type of argument support yielding only a small effect on evaluation

of argument strength and no effect at all on claim agreement and confidence in the agreement

decision might be that our participants had difficulties recognizing the difference between

both support types and thus did not perceive one type as easier to evaluate than the other.

Previous research indicates that laypeople frequently fail to distinguish between explanations

and evidence (Kuhn, 1991; but see also Brem & Rips, 2000). It therefore remains unclear

whether the observed limited influence of support type is due to laypeople perceiving causal

and empirical support as equally easy to judge or due to laypeople’s failure to recognize the

difference between causal and empirical support.

It is important to note that the participants in Experiment I may have already had some

experience in science. On average they were in their 7th semester (M = 7.71, SD = 4.91),

corresponding to the later phase of undergraduate studies, and might have been more familiar

with scientific research methods and empirical evidence than the ‘average layperson’.

Consequently, participants may have had greater awareness of the controversial and tentative

nature of scientific findings and attributed a greater value to empirical argument support than

laypersons would who had less or no science exposure. This might have made them generally

resistant to strongly and confidently agreeing with a claim based on only one argument,

regardless of support type and comprehensibility, and this may have reduced variability of

responses between conditions (across conditions, the increase in claim agreement after


compared to before reading an argument was below one scale point on average, from M =

3.988 in the pre measure to M = 4.968 in the post measure, as was the decrease in

participants’ desire to ask an expert, from M = 6.63 in the pre measure to M = 6.033 in the

post measure). It remains an open question whether the detected effects might be more

pronounced in a less experienced population. Finally, only medical topics were used as

argument contents. Thus we cannot conclude with certainty whether the yielded results only

apply to medical arguments or can be generalized to other scientific domains as well.

3. Experiment II

We conducted a second experiment aimed at replicating the findings of

Experiment I and extending them in three respects. First, in order to clarify whether our

participants were in fact able to recognize causal and empirical support as different support

types, we asked readers to categorize the arguments accordingly. Second, we sought to

investigate whether the effects yielded in Experiment I might be more pronounced in a

population that is scientifically less experienced than the students included in the first sample.

Finally, to extend the applicability of our findings beyond a medical context, we included

arguments from a second scientific field, climate research, in our set of materials.

3.1 Method

3.1.1 Design and Participants

The same 2x2 repeated measures design was used as in Experiment I, and again all

participants were presented with all four experimental conditions. Each participant read two

medical arguments and two arguments related to the topic of climate-change. The order of

exposure to experimental conditions was counterbalanced between participants by using a

Latin square design. Furthermore, the allocation of arguments to experimental conditions was


balanced so that across participants every condition was represented equally often by

arguments of the medical and climate domain.

Eighty-eight undergraduate students (66 female, 22 male, mean age = 21.36 years, SD = 2.84)

participated and received 8 Euros to compensate for their time. Forty participants were first-

year students with various majors at a German university who had begun their studies no

more than one to two months prior to the experiment. The remaining 48 participants were

students of varying subjects at a University of Applied Sciences, an educational institution

whose curricula focus primarily on practical education for a profession and less on

preparation for research. By including university students who had only just begun their

studies as well as students from an educational institution which is less research-focused than

general universities, participants should represent a population that is scientifically less

experienced than participants in Experiment I.

3.1.2 Materials

Similar to Experiment I, the materials were written arguments about four imaginary scientific

issues (mean text length = 95.88 words, SD = 23.22). Two arguments supported a causal

claim on a medical topic (claim 1: ‘The intake of Periphenol leads to a decrease of the blood

cholesterol level.’, claim 2: ‘The formation of kidney stones is caused by an overfunction of

the adrenal medulla.’), and two arguments supported a causal claim on a topic related to

climate-change (claim 3: ‘High amounts of Aeropine in the environment contribute to the

causation of thunderstorms.’, claim 4: ‘The utilization of Soymethanol as fuel contributes to

global warming.’). As in Experiment I, four versions of each argument were created, varying

in comprehensibility and support type. Again, all texts were written by the first author of this

paper based on different types of online texts from various sources that dealt with related

medical and climate issues.


Participants were provided with four arguments, one of each experimental condition, and

every argument addressed a different medical or climate-related issue. Each argument text

was again embedded in a framing story. The framing scenarios of climate arguments

described a friend wondering about the environmental friendliness of certain behaviors (e.g.

whether or not using fuel that contains Soymethanol contributes to global warming). After

reading the framing scenario, participants were presented with the argument which was

allegedly found by the participant while doing Internet research to find an answer to the

friend’s problem. As in Experiment I, the texts were described as being written by an expert

without providing any further information about the author. Again, participants were given no

indication that the arguments they read in the different experimental conditions originated

from the same source.

3.1.3 Dependent measures

The same measures were collected as in Experiment I. Additionally, participants were asked

to judge whether the argument contained (1) causal or (2) empirical support or whether (3)

they could not decide between both options. Arguments containing causal support were

defined as describing ‘the processes and mechanisms that lead to the phenomenon mentioned

in the claim’. Arguments containing empirical support were defined as presenting

‘measurement data which were obtained through scientific studies and which show that the

phenomenon mentioned in the claim occurs.’

3.1.4 Procedure

The procedure followed that of Experiment I, with the addition that, at the end of the

session, participants categorized each text they had read according to the type of argument support it

comprised. The session lasted about 45 minutes on average.


3.2 Results

Means and standard deviations of the dependent measures as a function of comprehensibility

and argument support type are presented in Table 2. To assess the statistical power of our data

analysis, a post hoc power analysis using G*Power (Faul et al., 2007) was conducted with an

alpha error probability of .05 and correlations between levels of the repeated measures factors

set at zero. Results showed that the design and sample size yielded a power of .91 for

detecting medium sized effects (f = .25 according to Cohen, 1988) when calculating repeated

measures F-tests.

--- Insert Table 2 about here ---

3.2.1 Test for order effects

To check for possible effects of argument presentation on the dependent measures, we

conducted repeated measures ANOVAs with argument order (position 1 to 4) as within-

participant-factor on scores of comprehensibility, argument strength, pre- and post-measures

of claim agreement, trust in own agreement decision and desire to consult an expert.

Analogous to Experiment I, results showed no significant effects of argument order on the

dependent variables (all F < 2.18, ns).

3.2.2 Manipulation check: Perceived argument comprehensibility

A repeated measures ANOVA on comprehensibility ratings with the two within-participant-

factors comprehensibility (comprehensible vs. incomprehensible) and type of argument

support (causal vs. empirical) revealed a significant main effect of argument

comprehensibility, F(1,87) = 511.888, p < .001, part. η2 = .855. Group means indicate that

arguments intended to be comprehensible were considered as more comprehensible (EMM =


5.71, SE = .105) than arguments written to be incomprehensible (EMM = 2.108, SE = .123).

There was neither a significant main effect of support type, F(1,87) = 1.465, ns, nor a significant

support type*comprehensibility interaction, F(1,87) = 1.517, ns, which confirmed

comprehensibility to vary orthogonally to support type.

3.2.3 Judgment of argument support type

Regarding the categorization of arguments as containing causal or empirical support,

participants performed well above chance level in all four conditions. Comprehensible

arguments containing causal support were correctly classified in 92.05% of the cases,

incomprehensible arguments containing causal support in 73.86% of the cases,

comprehensible arguments containing empirical support in 76.14% of the cases and

incomprehensible arguments containing empirical support in 72.72% of the cases. In the

comprehensible causal and incomprehensible empirical conditions, most errors were due to

readers picking the ‘undecided’ option (comprehensible causal: 4.5%, incomprehensible

empirical: 15.9%). In contrast, in both other conditions, the majority of errors were due to

participants misclassifying the support as the wrong type (incomprehensible causal:

18.2%, comprehensible empirical: 19.3%).
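
One way to formalize ‘well above chance level’ is a binomial test against the one-in-three guessing rate implied by the three response options. The paper does not report such a test; the sketch below is illustrative and assumes 88 independent judgments per condition.

```python
# Illustrative binomial check of classification accuracy against the 1/3 chance level.
from scipy.stats import binomtest

n = 88
for label, accuracy in [("comprehensible causal", 0.9205),
                        ("incomprehensible causal", 0.7386),
                        ("comprehensible empirical", 0.7614),
                        ("incomprehensible empirical", 0.7272)]:
    k = round(accuracy * n)  # number of correct classifications
    result = binomtest(k, n, p=1/3, alternative="greater")
    print(f"{label}: {k}/{n} correct, p = {result.pvalue:.2g}")
```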

3.2.4 Argument persuasiveness

We had expected that laypeople would be more persuaded after reading comprehensible than

incomprehensible information, reflected by greater claim agreement (hypothesis 1a) and

higher ratings of argument strength (hypothesis 1b). Similarly, we expected that after reading

an argument containing causal support, laypeople would agree more with the claim

(hypothesis 2a) and rate the arguments as stronger (hypothesis 2b) than after reading an

argument containing empirical support.


(1) Claim agreement. Our hypotheses regarding participants’ claim agreement were tested

using a repeated measures ANOVA on pre-and post-measures of participants’ claim

agreement. Comprehensibility, support type and time of measurement (prior vs. subsequent to

argument reception) acted as within-participant factors. A significant main effect of time of

measurement was found, F(1,87) = 76.673, p < .001, part. η2 = .468, due to participants agreeing

more strongly with the claim after having read the argument (EMM = 5.026, SE = .114) than before

reading (EMM = 4.116, SE = .088). The analysis yielded no significant main effect of

comprehensibility, F(1, 87) = .758, ns, but a significant comprehensibility*time of measurement

interaction, F(1,87) = 14.926, p < .001, part. η2 = .146. Whereas claim agreement did not differ

between comprehensibility conditions prior to argument reception, F(1,87) = 2.222, ns, there

was a significant difference between comprehensible and incomprehensible arguments after

participants had read the argument, F(1,87) = 11.199, p < .001, part. η2 = .114. In line with

hypothesis 1a, agreement with the claim was higher after reading comprehensible (EMM =

5.210, SE = .128) than incomprehensible arguments (EMM = 4.841, SE = .125). However,

there was no significant main effect of support type and no significant interaction of support

type and time of measurement, both F(1, 87) < .496, ns. Hypothesis 2a was therefore not

confirmed by our results. Furthermore, results yielded no significant three-way interaction of

comprehensibility, support type and time of measurement, F(1, 87) = .154, ns.
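
To illustrate the type of analysis reported in this section, the following sketch shows how a 2 (comprehensibility) x 2 (support type) x 2 (time of measurement) repeated measures ANOVA on claim agreement could be computed in Python with statsmodels. The file and column names are hypothetical placeholders rather than the original data; the sketch only illustrates the analysis structure.

    # Hedged sketch with hypothetical data layout: 2 x 2 x 2 repeated measures
    # ANOVA on claim agreement. Expected long format: one row per participant x
    # comprehensibility x support type x time of measurement.
    import pandas as pd
    from statsmodels.stats.anova import AnovaRM

    data = pd.read_csv("claim_agreement_long.csv")   # hypothetical file
    # columns: participant, comprehensibility ("compr"/"incompr"),
    # support ("causal"/"empirical"), time ("pre"/"post"), agreement (1-7)
    model = AnovaRM(data, depvar="agreement", subject="participant",
                    within=["comprehensibility", "support", "time"])
    print(model.fit())   # F values, degrees of freedom and p values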

(2) Perceived argument strength. In order to test the hypotheses regarding argument strength,

a repeated measures ANOVA on strength ratings with comprehensibility and support type as

within-participant factors was conducted. Results confirmed hypothesis 1b by revealing a

significant main effect of comprehensibility, F(1,87) = 139.020, p < .001, part. η2 = .615: Across

support types, comprehensible arguments were regarded as stronger (EMM = 5.574, SE

= .095) than incomprehensible arguments (EMM = 3.614, SE = .162). As expected in

hypothesis 2b, we also found a significant main effect of support type, F(1,87) = 13.505, p

< .001, part. η2 = .134, due to arguments with causal support being rated as stronger (EMM =

4.835, SE = .121) than those with empirical support (EMM = 4.352, SE = .125). The

interaction of comprehensibility and support type was not significant, F(1,87) = .002, ns.

3.2.5 Confidence in the claim agreement decision

We had hypothesized that laypeople would be more confident in their own decision about the

claim after reading comprehensible than incomprehensible arguments, which should lead to a

higher trust in their own decision (hypothesis 3a) and a weaker desire for expert advice

(hypothesis 3b). Moreover, we had hypothesized higher levels of trust in one’s own decision

(hypothesis 4a) and a weaker desire to consult an expert (hypothesis 4b) after reading causal

compared to empirical support.

As in Experiment I, Pearson’s correlation showed that trust in one’s own agreement decision

and desire to consult an expert were negatively related in all conditions (comprehensible

causal: r = -.383, incomprehensible causal: r = -.476, comprehensible empirical: r = -.420 and

incomprehensible empirical: r = -.577; all correlations were significant at a .01 level).
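
A minimal sketch of how such per-condition correlations could be obtained (assuming a hypothetical data layout, not the original files) is given below.

    # Hedged sketch with hypothetical column names: per-condition Pearson
    # correlation between trust in one's own decision and the desire to
    # consult an expert.
    import pandas as pd
    from scipy.stats import pearsonr

    ratings = pd.read_csv("experiment2_ratings.csv")      # hypothetical file
    for condition, sub in ratings.groupby("condition"):   # e.g. "compr_causal"
        r, p = pearsonr(sub["trust_own_decision"], sub["desire_expert"])
        print(f"{condition}: r = {r:.3f}, p = {p:.3f}")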

(1) Trust in own agreement decision. To test our hypotheses regarding participants’ trust in

their own decision, a repeated measures ANOVA on trust scores with comprehensibility and

support type as within-participant factors was applied. Regarding the expected influence of

argument comprehensibility, results showed that trust was higher in the comprehensible

(EMM = 4.54, SE = .167) than in the incomprehensible conditions (EMM = 3.483, SE

= .165), F(1,87) = 41.465, p < .001, part. η2 = .323, providing support for hypothesis 3a. As to

the influence of type of argument support, results revealed that reading causal and empirical

support led to different levels of trust, F(1,87) = 9.204, p < .01, part. η2 = .096. In line with

hypothesis 4a, participants’ trust in their own decision was higher after reading causal support

(EMM = 4.21, SE = .157) than after reading empirical support (EMM = 3.813, SE = .159).

Finally, there was no significant interaction of comprehensibility and support type, F(1, 87) =

1.668, ns.

(2) Desire to consult an expert. Our hypotheses regarding laypeople’s desire to seek out

expert advice were tested using a repeated measures ANOVA with comprehensibility and

support type as within-participant factors. As to the influence of argument comprehensibility,

the results were consistent with hypothesis 3b; laypeople’s desire to consult an expert was

significantly higher after reading incomprehensible (EMM = 5.636, SE = .133) than

comprehensible arguments (EMM = 5.114, SE = .162), F(1, 87) = 21.802, p < .001, part. η2

= .200. With regard to the hypothesized influence of support type, we found no significant

main effect of support type, F(1, 87) < 1.807, ns. Hypothesis 4b was therefore not supported by

our data. Finally, the interaction of comprehensibility and support type did not reach

significance, F(1, 87) = 1.375, ns.

3.3 Discussion

The results of Experiment II demonstrate that participants were able to identify the type of

argument support presented. Readers were most successful in correctly classifying

comprehensible arguments containing causal support but were also clearly above chance level

in the three other conditions. Consistent with the results of Experiment I, Experiment II

showed comprehensibility to be an important influence on the persuasiveness of provided

information and confidence in the agreement decision. Comprehensible arguments led again

to more pronounced claim agreement and were rated as stronger than incomprehensible

arguments (as expected in hypotheses 1a/b). Moreover, in accordance with hypotheses 3a/b,

laypeople showed higher levels of trust in their own decision about the claim and a weaker

desire for expert advice after reading comprehensible arguments than after reading

incomprehensible ones. The results also revealed slight but inconsistent influences of support

type on persuasiveness of the provided information and confidence in the agreement decision.

Although, in line with hypothesis 2b, laypeople evaluated arguments providing causal support

as slightly stronger than those providing empirical support, this did not translate to their actual

agreement with the claim, thus contradicting hypothesis 2a. Moreover, although participants'

trust in their own decision was higher after reading causal support than after reading empirical

support (as predicted in hypothesis 4a), this was not reflected in a differential desire to

consult an expert depending on what type of argument support they had read (hypothesis 4b).

Despite the similarity between the findings of the two experiments, there are also several

differences, particularly with regards to effect sizes. Compared to the first experiment, the

second experiment yielded stronger effects of comprehensibility on claim agreement (Exp. I:

part. η2 = .061 vs. Exp. II: part. η2 = .114), trust in own decision about the claim (Exp. I: part.

η2 = .183 vs. Exp. II: part. η2 = .323) and desire to consult an expert (Exp. I: part. η2 = .080,

Exp. II: part. η2 = .200). Thus, the presumably less scientifically experienced participants in

Experiment II agreed more strongly with the claim, showed greater trust in their own

decisions and perceived themselves less in need of expert advice after reading a

comprehensible compared to an incomprehensible argument. Moreover, in contrast to the

results of Experiment I, Experiment II showed that participants trusted more strongly in their

own agreement decision after reading causal compared to empirical support.
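
Because this comparison rests on partial eta squared, it may help to note that these values follow directly from the reported F statistics via part. η2 = (F x df1) / (F x df1 + df2). The short check below (not part of the original analysis) reproduces, for example, the value of .114 reported above for the effect of comprehensibility on post-reading claim agreement in Experiment II.

    # Hedged check: partial eta squared recovered from a reported F statistic,
    # eta_p^2 = (F * df1) / (F * df1 + df2).
    def partial_eta_squared(F, df1, df2):
        return (F * df1) / (F * df1 + df2)

    print(round(partial_eta_squared(11.199, 1, 87), 3))   # 0.114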

4. General Discussion

By presenting participants with texts of varying easiness, the present experiments investigated

whether laypeople are inclined to rely on their own evaluation of scientific claims. We had

expected that laypeople would more clearly and confidently accept information they consider

as easy compared to information that was difficult and complex. Thus, we predicted claim

agreement to be greater and arguments rated stronger when recipients read comprehensible

compared to incomprehensible arguments (hypotheses 1a/b), and causally supported

arguments compared to empirically supported ones (hypotheses 2a/b). Moreover, we had

expected higher levels of trust in laypeople’s own decisions and conversely a weaker desire to

consult an expert after reading comprehensible compared to incomprehensible arguments

(hypotheses 3a/b) and causal compared to empirical support (hypotheses 4a/b).

4.1 Summary of present findings

Both experiments revealed that, as expected in hypotheses 1a/b, laypeople were more

persuaded by information about scientific contents when the information they had received

was comprehensible than when it was incomprehensible. Moreover, in line with hypotheses

3a/b, after reading comprehensible arguments, laypeople showed higher levels of trust in their

own agreement decision and a decreased desire to ask an expert for decision support.

Findings with regards to type of argument support are less conclusive and only partly confirm

our expectations. We found that recipients were indeed able to differentiate between argument

support types, and they evaluated arguments containing causal support as stronger than

arguments containing empirical support (in line with hypothesis 2b). However, the influence

of support type did not extend to recipients’ claim agreement or to their desire to consult an

expert (contrary to hypotheses 2a and 4b). Finally, while our expectation that support type

would influence laypeople’s trust in their own decision (hypothesis 4a) was not confirmed in

Experiment I, it did receive support in Experiment II, where participants’ trust scores were

higher after reading causal than empirical support. The present finding of causal support being

perceived as stronger than empirical support is consistent with previous research showing that

laypersons do not evaluate arguments in the same way as experts, who prefer empirical

evidence and consider explanations unsubstantiated by data as generally weak or unsupported

(Kuhn, 1991; Slusher & Anderson, 1996). The results also corroborate Keil’s (2010)

proposition that reasoning in terms of causal explanation plays a central role in laypeople’s

folk science. An additional contribution of the present work was to ensure that the observed

preference for causal support is not due to a confounding with comprehensibility. A possible

confounding with argument comprehensibility might explain the mixed results obtained in

previous research where comprehensibility had not been controlled, with some studies

showing laypeople to favor causal support (e.g. Slusher & Anderson, 1996) and other research

finding a preference for empirical support (e.g. Brem & Rips, 2000). By controlling for

comprehensibility of both support types, the present findings show that laypeople have an

actual preference for causal over empirical support.

Furthermore, the results obtained regarding laypeople’s desire to ask an expert for support

suggest that laypeople are generally inclined to make use of the division of cognitive labor

when having to come to decisions about science-related issues. Even when receiving easy

texts, participants’ ratings of their desire to ask an expert did not average below 5 on a scale

from 1 to 7 (with 7 indicating a strong desire). Nevertheless, the finding that greater

comprehensibility reduced the perceived need for expert advice suggests that an over-

simplification of scientific contents might mislead lay recipients into underestimating the

importance of relying on experts.

Overall, the results of both experiments show strong similarities, which indicates the

robustness of the obtained findings and moreover demonstrates that the observed effects are

not specific to the domain of medicine. However, compared to Experiment I, Experiment II

yielded stronger differences in laypeople’s decisions depending on the easiness of the

received information. This is in line with our expectation that the influence of perceived

easiness on laypeople’s reliance on their own content evaluations depends on their scientific

experience. The participants of Experiment II, who we assume to have been less aware of the

general complexity and tentativeness of scientific knowledge, appear to have more readily

formed the impression that being able to understand information qualified them to decide

about the persuasiveness of knowledge claims.

In sum, our results confirm the hypothesis that laypeople are more inclined to rely on their

own evaluations of scientific contents when they perceive the topic at hand as easy than when

they perceive the issue as beyond their own understanding. Moreover, it seems that though

comprehensibility has a strong influence on lay recipients’ impression of content easiness, the

impact of support type is comparatively small. Support type might have a stronger effect on

easiness perceptions among laypeople who are scientifically more naïve than the participants

we included in our experiments. The less familiar laypeople are with empirical evidence, and

thus with evidence that is not part of their naïve folk science, the more difficult they may

perceive its evaluation and the more inclined they might be to seek out the help of experts.

4.2 Limitations and future directions

It needs to be borne in mind that the present experiments were conducted on a student sample.

Thus, although our participants did not study subjects directly related to the knowledge claims

in question and had only just begun their higher education, their scientific experience was

presumably more extensive than that of the ‘average layperson’. In fact, this renders our

findings more remarkable, as it appears that even individuals who should to some degree be

aware of the general complexity and tentativeness of scientific knowledge are prone to

disregard their own epistemic limitations in light of perceived text easiness.

To ensure the lay status of our participants, we not only examined university students with

non-empirical majors (Experiment I), but restricted our selection to students who had only

just begun their higher education or were from a less research-oriented institution

(Experiment II). However, it should be noted that we assumed the participants of each

experiment to differ in their scientific experience due to their educational background.

Therefore, we cannot conclude with certainty that the differences in effect sizes between

experiments are indeed due to both samples varying in relevant experience. Future research

could further investigate the impact of scientific experience by assessing laypeople’s

epistemological beliefs, which might become more sophisticated as a function of scientific

training, and compare those individuals who have been confirmed to differ in their beliefs

about the nature of scientific knowledge (for information about the relationship between

educational experience and epistemological beliefs see Weinstock & Zviling-Beiser, 2009). In

this context, it would also be an interesting direction for future research to more generally

examine the connection between epistemological beliefs as they are traditionally

conceptualized in research (i.e. as beliefs regarding an individual’s own knowledge) and

individuals’ conceptions and utilization of the division of cognitive labor (which is related to

individuals’ beliefs about other people’s knowledge). While a comprehensive theoretical

elaboration of the relationship between epistemological beliefs in the traditional sense and

beliefs about the division of cognitive labor is beyond the scope of this article, the interested

reader is referred to Bromme et al. (2010).

Two further limitations concern the text materials applied in the present experiments. First,

comprehensibility of the arguments was varied in three ways: through technical jargon,

inclusion of unnecessary detail and repetition of important information. While this was done

to ensure a strong manipulation of comprehensibility, we cannot conclude with certainty

whether the observed effects are due to all three ways of comprehensibility manipulation or

whether they are attributable to only one or two of them. In future research, only one way of

manipulating comprehensibility at a time could be applied in order to disentangle their

respective contributions to the presently obtained effects. Second, the arguments used as

stimulus materials were written by one of the experimenters, who is not a medical or climate

expert. Although great care has been taken to write the texts in a style and wording

comparable to genuine medicine and climate-related science texts, it cannot be completely

ruled out that participants perceived the texts as less valid representations of science texts than

would have been the case had the texts been written by actual domain experts.

It also needs to be noted that we manipulated comprehensibility and type of argument

assuming that both factors influence perceived easiness of the provided information, but did

not explicitly assess laypeople’s impressions of content easiness. In order to more

conclusively establish the assumed role of perceived easiness as a mediating variable of the

observed effects, easiness perceptions should be measured directly in future experiments.

With regard to the specificity of our findings, we would like to point out that although our

experiments were only focused on laypeople, we do not propose that the observed effects are

unique to lay recipients. It is possible that experts or novices would show the same inclination

as the laypeople in our samples to accept an argument more strongly and confidently when the

information is easy. However, we suspect that laypeople may be especially prone to

overestimate their epistemic capabilities as a consequence of receiving an easy text.

Compared to laypeople, experts and novices should be more aware of the complexity of

scientific knowledge, especially regarding the topics they have studied laboriously in order to

become experts. As a result, they might be less likely to perceive a topic as simple and easy

to evaluate only because a specific depiction of the topic is easy to understand. Whether this

assumption is accurate, however, or whether experts and novices are in fact equally inclined

to more readily accept easy information remains an open empirical question warranting

further research.

4.3 Educational implications

Our results have implications for science education as well as for the communication of

science knowledge to laypeople outside of the formal learning context. The finding that

laypeople tend to rely more strongly on their own evaluations of scientific contents if the

subject matter is presented in an easy way suggests a limited awareness of the general

complexity and tentativeness of scientific knowledge. Due to this limited awareness,

laypeople appear to be susceptible to the belief that their own epistemic capabilities are sufficient

to evaluate the veracity of scientific claims, even though they actually possess little or no

relevant prior knowledge about the subject matter. This susceptibility may result from a lack

of appreciation for the need to rely on the division of cognitive labor. Even research and

literature on epistemological beliefs are guided by the idea that it is predominantly

epistemologically naïve individuals who accept knowledge from authority, whereas

sophisticated individuals attempt to construct knowledge by themselves (but see Elby &

Hammer, 2001 for a criticism of this stance). This is reflected in the dimension ‘source of

knowledge’ of epistemological beliefs theories (e.g. Schommer, 1990), which ranges from

the belief that knowledge originates from authority figures such as teachers or experts to the

belief that knowledge is the outcome of inquiry and personal construction (and this pole is

conceived as the most elaborated one).

The assumption that independence from external sources is a desirable developmental goal

might on the one hand go back to Piagetian influences on research programs on

epistemological beliefs. In the Piagetian tradition, cognitive development during childhood

has been conceived as being driven predominantly by children’s own experiences with their

environment, whereas testimony offered by adults has been dismissed as “providing mere

'verbal' knowledge, rather than genuine understanding" (Harris, 2001, p. 495), denying any

educative role of such second-hand information (see Harris, 2001 for a criticism of this view).

On the other hand, the view of authority-independence as a desirable state might be

influenced by the normative ideal of students becoming citizens who make up their own

minds instead of deferring to authorities (Perry, 1970). Such a view implicitly suggests that

the optimal way for individuals to handle scientific issues is to overcome the division of

cognitive labor by evaluating information and deciding for themselves, on par with an expert.

This criticism does not apply to all epistemological beliefs researchers (see for example

Chinn, Buckland & Samarapungavan, 2011). King and Kitchener (1994) do not argue for a

rejection of reliance on expert knowledge either, but rather for a critical reflection about

experts’ knowledge claims. Nevertheless, even the developmental trajectory underlying their

work emphasizes a development towards cognitive autonomy which implies an independence

of one’s own ideas from external authorities and which might not be realistic. If this

normative ideal, held even by several scientists, is shared by laypeople, it may explain their

readiness to make decisions in situations when they should actually ask an expert for advice.

In order to prevent individuals from sharing this belief, it is important to increase awareness

of the specific nature of scientific knowledge, the evaluation of which usually requires years

of specialized training. In this context, laypeople should be sensitized to the central role of the

division of cognitive labor in modern societies. In school, as well as in the course of higher

education, students should learn that oftentimes it is not only acceptable but even desirable to

rely on other people rather than attempting to solve a problem on one’s own.

Our findings furthermore suggest that caution should be taken whenever scientific contents

are communicated to laypeople outside of the educational context. Popularized science

reports, i.e. science depictions especially intended for public consumption, are usually

characterized by simplification in order to facilitate the target audience’s content

understanding (Goldman & Bisanz, 2002; Zimmerman et al., 2001). However, the present

findings indicate that such a simplification carries the risk of making scientific knowledge

appear less complex and easier to evaluate than it actually is. Of course we do not intend to

propose that laypeople should not be informed about scientific contents in a way that allows

them to understand the central facts. However, we consider it important that popularized

science reports not only inform laypeople about scientific contents but also make recipients

aware of the fact that the content information presented is usually not sufficient to allow

confident evaluations of related knowledge claims that may influence their decision making.

5. References

Bekker, H., Thornton, J. G., Airey, C. M., Connelly, J. B., Hewison, J., Robinson, M. B., et al.

(1999). Informed decision making: An annotated bibliography and systematic review.

Health Technology Assessment, 3, 1-156.

Blair, J.A. & Johnson, R.H. (1987). Argumentation as dialectical. Argumentation, 1, 41-56.

doi:10.1007/BF00127118

Bradley, S. D. & Meeds, R. (2004). The effects of sentence-level context, prior word

knowledge, and need for cognition on information processing of technical language in

print ads. Journal of Consumer Psychology, 14, 291-302.

doi:10.1207/s15327663jcp1403_10

Brand-Gruwel, S. & Stadtler, M. (2011). Solving information-based problems: Evaluating

sources and information. Learning and Instruction, 21, 175-179.

doi:10.1016/j.learninstruc.2010.02.008

Brem, S. K. & Rips, L. J. (2000). Explanation and evidence in informal argument. Cognitive

Science, 24, 573-604. doi:10.1016/S0364-0213(00)00033-1

Bromme, R., Kienhues, D. & Porsch, T. (2010). Who knows what and who can we believe?

Epistemological beliefs are beliefs about knowledge (mostly) attained from others. In L.

D. Bendixen & F. C. Feucht (eds.), Personal Epistemology in the Classroom: Theory,

Research, and Implications for Practice, pp. 163-193, Cambridge: Cambridge University

Press. doi:10.1017/CBO9780511691904.006

Bromme, R., Rambow, R. & Nückles, M. (2001). Expertise and estimating what other people

know: The influence of professional experience and type of knowledge. Journal of

Experimental Psychology: Applied, 7, 317-330. doi:10.1037//1076-898X.7.4.317-330

Charles, C., Gafni, A. & Whelan, T. (1997). Shared decision-making in the medical

encounter: What does it mean? Social Science and Medicine, 44, 681-692.

doi:10.1016/S0277-9536(96)00221-3

Chinn, C., Buckland, L. & Samarapungavan, A. (2011). Expanding the dimensions of

Epistemic Cognition: Arguments from philosophy and psychology, Educational

Psychologist, 46, 141-167. doi:10.1080/00461520.2011.587722

Cohen, J. (1988). Statistical power analysis for the behavioral sciences (2nd edition).

Hillsdale, NJ: Erlbaum.

Eagly, A.H. (1974). Comprehensibility of persuasive arguments as a determinant of opinion

change. Journal of Personality and Social Psychology, 29, 758-773.

doi:10.1037/h0036202

Elby, A. & Hammer, D. (2001). On the substance of a sophisticated epistemology. Science

Education, 85, 554-567. doi:10.1002/sce.1023

Erb, H.-P., Bohner, G., Rank, S. & Einwiller, S. (2002). Processing minority and majority

communications: The role of conflict with prior attitudes. Personality and Social

Psychology Bulletin, 28, 1172-1182. doi:10.1177/01461672022812003

Faul, F., Erdfelder, E., Lang, A.-G. & Buchner, A. (2007). G*Power 3: A flexible statistical

power analysis program for the social, behavioral, and biomedical sciences. Behavior

Research Methods, 39, 175-191. doi:10.3758/BF03193146

Fox, S. (2005). Health information online. Washington, DC: Pew Internet & American Life

Project.

Glassner, A., Weinstock, M. & Neuman, Y. (2005). Pupils' evaluation and generation of

evidence and explanation in argumentation. British Journal of Educational Psychology,

75, 105-118. doi:10.1348/000709904X22278

Glenberg, A.M. & Epstein, W. (1985). Calibration of comprehension. Journal of

Experimental Psychology: Learning, Memory, and Cognition, 11, 702-718.

doi:10.1037//0278-7393.11.1-4.702

Goldman, S.R. (2011). Commentary: Choosing and using multiple information sources: Some

new findings and emergent issues. Learning and Instruction, 21, 238-242.

doi:10.1016/j.learninstruc.2010.02.006

Goldman, S.R. & Bisanz, G.L. (2002). Toward functional analysis of scientific genres:

Implications for understanding and learning processes. In J. Otero, J.A. Leon, & A.C.

Graesser (eds.), The psychology of science text comprehension, pp. 19-50, Mahwah NJ:

Erlbaum.

Gopnik, A. (2000). Explanation as orgasm and the drive for causal knowledge: The function,

evolution, and phenomenology of the theory formation system. In F. C. Keil & R. A.

Wilson (eds.), Explanation and cognition, pp. 299-323, Cambridge, MA: MIT Press.

Harris, P.L. (2001). Thinking about the unknown. Trends in Cognitive Sciences, 5, 494-498.

doi:10.1016/S1364-6613(00)01789-7

Jucks, R. (2001). Was verstehen Laien? Die Verständlichkeit von Fachtexten aus der Sicht

von Computer-Experten. Münster: Waxmann.

Kajanne, A. & Pirttilä-Backman, A.-M. (1999). Laypeople's viewpoints about the reasons for

expert controversy regarding food additives. Public Understanding of Science, 8, 303-

315. doi:10.1088/0963-6625/8/4/303

Keehner, M. & Fischer, M.H. (2011). Naive realism in public perceptions of neuroimages.

Nature Reviews Neuroscience, 12, 118. doi:10.1038/nrn2773-c1

Keil, F.C. (2003). Categorisation, causation, and the limits of understanding. Language and

Cognitive Processes, 18, 663-692. doi:10.1080/01690960344000062

Keil, F. C. (2006). Explanation and understanding. Annual Review of Psychology, 57, 227-

254. doi:10.1146/annurev.psych.57.102904.190100

Keil, F.C. (2008). Getting to the truth: Grounding incomplete knowledge. Brooklyn Law

Review, 73(3), 1035-1052.

Keil, F.C. (2010). The feasibility of folk science. Cognitive Science, 34, 826-862.

doi:10.1111/j.1551-6709.2010.01108.x

Keil, F.C., Stein, C., Webb, L., Billings, V. D. & Rozenblit, L. (2008). Discerning the

division of cognitive labor: An emerging understanding of how knowledge is clustered in

other minds. Cognitive Science, 32, 259–300. doi:10.1080/03640210701863339

Kienhues, D., Stadtler, M. & Bromme, R. (2011). Dealing with conflicting or consistent

medical information on the Web: When expert information breeds laypersons’ doubts

about experts. Learning and Instruction, 21, 193-204.

doi:10.1016/j.learninstruc.2010.02.004

King, P.M. & Kitchener, K.S. (1994). Developing reflective judgment: Understanding and

promoting intellectual growth and critical thinking in adolescents and adults. San

Francisco: Jossey-Bass.

Koslowski, B. (1996). Theory and evidence: The development of scientific reasoning.

Cambridge, MA: MIT Press.

Kuhn, D. (1991). The skills of argument. Cambridge: Cambridge University Press.

Lombrozo, T. (2006). The structure and function of explanations. Trends in Cognitive

Sciences, 10, 464-470. doi:10.1016/j.tics.2006.08.004

Mason, L., Ariasi, N., & Boldrin, A. (2011). Epistemic beliefs in action: Spontaneous

reflections about knowledge and knowing during online information searching and their

influence on learning. Learning and Instruction, 21, 137-151.

doi:10.1016/j.learninstruc.2010.01.001

Miller, N., Maruyama, G., Beaber, R.J. & Valone, K. (1976). Speed of speech and persuasion.

Journal of Personality and Social Psychology, 34, 615-624. doi:10.1037//0022-

3514.34.4.615

Murphy, P.K., Long, J.F., Holleran, T.A. & Esterly, E. (2003). Persuasion online or on

paper: A new take on an old issue. Learning and Instruction, 13, 511-532.

doi:10.1016/S0959-4752(02)00041-5

Nagy, W. (1988). Teaching vocabulary to improve reading comprehension. Newark:

International Reading Association.

O’Keefe, D. J. (2002). Persuasion: Theory and research. Thousand Oaks, CA: Sage.

Perry, W.G. (1970). Forms of intellectual and ethical development in the college years: A

scheme. San Francisco: Jossey-Bass.

Pieschl, S. (2009). Metacognitive calibration - an extended conceptualization and potential

applications. Metacognition and Learning, 4, 3-31. doi:10.1007/s11409-008-9030-4

Putnam, H. (1975). The meaning of ‘meaning’. In K. Gunderson (ed.) Language, Mind and

Knowledge, Minnesota Studies in the Philosophy of Science, VII, pp. 215-271,

Cambridge: Cambridge University Press. doi:10.1017/CBO9780511625251.014

Reber, R. & Schwarz, N. (1999). Effects of perceptual fluency on judgments of truth.

Consciousness and Cognition, 8, 338-342. doi:10.1006/ccog.1999.0386

Sandoval, W.A. & Çam, A. (2011). Elementary children's judgments of causal justifications.

Science Education, 95, 383-408. doi:10.1002/sce.20426

Schommer, M. (1990). Effects of beliefs about the nature of knowledge on comprehension.

Journal of Educational Psychology, 82, 498-504. doi:10.1037//0022-0663.82.3.498

Schwarz, N. (2004). Meta-cognitive experiences in consumer judgment and decision making.

Journal of Consumer Psychology, 14, 332-348. doi:10.1207/s15327663jcp1404_2

Slusher, M.P. & Anderson, C.A. (1996). Using causal persuasive arguments to change beliefs

and teach new information: The mediating role of explanation availability and evaluation

bias in the acceptance of knowledge. Journal of Educational Psychology, 88, 110-122.

doi:10.1037//0022-0663.88.1.110

Stadtler, M. & Bromme, R. (2008). Effects of the metacognitive tool met.a.ware on the web

search of laypersons. Computers in Human Behavior, 24, 716-737.

doi:10.1016/j.chb.2007.01.023

Stehr, N. (1994). Knowledge Societies. London: Sage.

Trabasso, T., Secco, T. & van den Broek, P.W. (1984). Causal cohesion and story coherence.

In: H. Mandl, N. L. Stein & T. Trabasso (eds.), Learning and comprehension of text, pp.

83-111, Hillsdale, NJ: Erlbaum.

Trout J. D. (2002). Scientific explanation and the sense of understanding. Philosophy of

Science, 69(2), 212-233. doi:10.1086/341050

Tyler, A. (1994). The role of repetition in perceptions of discourse coherence. Journal of

Pragmatics, 21, 671-688. doi:10.1016/0378-2166(94)90103-1

Wagner, W., Elejabarrieta, F. & Lahnsteiner, I. (1995). How the sperm dominates the ovum –

Objectification by metaphor in the social representation of conception. European Journal

of Social Psychology, 25, 671-688. doi:10.1002/ejsp.2420250606

Weinstock, M. & Zviling-Beiser, H. (2009). Separating academic and social experience as

potential factors in epistemological development. Learning and Instruction, 19, 287-298.

doi:10.1016/j.learninstruc.2008.05.004

Wiley, J., Goldman, S.R., Graesser, A.C., Sanchez, C.A., Ash, I.K. & Hemmerich, J.A.

(2009). Source evaluation, comprehension, and learning in Internet science inquiry tasks.

American Educational Research Journal, 46, 1060-1106.

doi:10.3102/0002831209333183

Wiley, J., Griffin, T.D. & Thiede, K.W. (2005). Putting the comprehension in

metacomprehension. Journal of General Psychology, 132, 408-428.

doi:10.3200/GENP.132.4.408-428

Zimmerman, C., Bisanz, G.L., Bisanz, J., Klein, J.S. & Klein, P. (2001). Science at the

supermarket: A comparison of what appears in the popular press, experts’ advice to

readers, and what students want to know. Public Understanding of Science, 10, 37-58.

doi:10.1088/0963-6625/10/1/303

Table 1

Means and standard deviations (in brackets) for the dependent measures of Experiment I: perceived

comprehensibility, perceived argument strength, agreement with the claim prior and subsequent to reading the

argument, trust in own agreement decision, and desire to consult an expert, as a function of comprehensibility

and type of argument support. All scales ranged from 1 to 7, with higher values indicating more of the respective dimension.

Argument condition   Comprehensibility   Argument strength   Claim agreement (pre)   Claim agreement (post)   Trust in own decision   Desire for expert advice
Compr. causal        6.07 (1.10)         5.85 (1.28)         3.99 (1.08)             5.14 (1.22)              4.22 (1.87)             5.72 (1.63)
Incompr. causal      2.14 (1.36)         3.93 (1.59)         3.94 (1.01)             4.93 (1.16)              3.63 (2.03)             6.16 (1.29)
Compr. empirical     6.14 (1.14)         5.16 (1.56)         3.94 (0.99)             5.02 (1.15)              4.03 (1.85)             6.00 (1.36)
Incompr. empirical   1.92 (1.24)         3.50 (1.67)         4.08 (0.91)             4.78 (1.26)              3.34 (1.93)             6.25 (1.25)

Table 2

Means and standard deviations (in brackets) for the dependent measures of Experiment II: perceived

comprehensibility, perceived argument strength, agreement with the claim prior and subsequent to reading the

argument, trust in own agreement decision, and desire to consult an expert, as a function of comprehensibility

and type of argument support. All scales ranged from 1 to 7, with higher values indicating more of the respective dimension.

Argument condition   Comprehensibility   Argument strength   Claim agreement (pre)   Claim agreement (post)   Trust in own decision   Desire for expert advice
Compr. causal        5.86 (1.35)         5.82 (1.14)         4.11 (1.38)             5.38 (1.36)              4.58 (1.77)             5.11 (1.79)
Incompr. causal      2.10 (1.45)         3.85 (1.74)         4.16 (1.36)             4.80 (1.46)              3.84 (1.90)             5.45 (1.79)
Compr. empirical     5.56 (1.20)         5.33 (1.30)         3.92 (1.42)             5.05 (1.55)              4.50 (1.77)             5.11 (1.76)
Incompr. empirical   2.11 (1.43)         3.38 (1.70)         4.27 (1.40)             4.89 (1.39)              3.13 (1.84)             5.82 (1.39)
