
Policy Sciences 24: 99-119, 1991. © 1991 Kluwer Academic Publishers. Printed in the Netherlands.

Learning through conflict: a realistic strategy for risk communication

PAUL C. STERN
National Research Council, Washington, DC 20418, U.S.A.

Abstract. Technological conflicts are commonly seen as rooted in problems of risk perception and risk communication. This view is seriously deficient in that it does not fully appreciate that despite their technical content, the conflicts are at bottom political. Conflict of interests and values is evident even in differences over scientific agendas, methods, and interpretations, and especially in the inevitable cacophony of messages describing scientific knowledge to nonexperts. Efforts to produce clear, accurate, and unbiased messages about risks will not even solve communication problems, let alone reduce conflict, because 'unbiased' is undefinable. Clear and accurate messages can always be devised to support a variety of policy positions, and they will be, whenever controversy persists. The article substantiates these points and outlines some realistic approaches to risk communication that enable nonexperts to learn through conflict what they cannot learn from carefully crafted risk messages.

Over the past two decades, political debates about technological choices have frustrated many of the participants in them. Persistent acrimonious conflicts over nuclear power, hazardous chemicals, food additives, and the like have not been resolved either by accumulations of scientific evidence or expressions of public sentiment. In many quarters, it has become fashionable to define the problem in terms of perception and communication. Many veterans of technological controversies believe that if people perceived the likely costs and benefits of the alternatives more accurately or if scientists, government agencies, and the mass media communicated more effectively about the risks, conflicts about the attendant choices would be easier to resolve and the society would be better for it.2

There are at least four serious deficiencies in this formulation. The first is a failure to appreciate fully that the conflicts are embedded in a democratic system. Technological decisions must be scientifically informed because there may be options that almost all citizens would reject if enough science were known to foresee the consequences. Nevertheless, the ultimate decision power is vested in officials answerable to the electorate. Democracy presumes that the political system embodies greater wisdom than any specialized community, even that of scientists. Democratic principles are inconsistent with a view that would give scientists or any other select group the right to determine how citizens should perceive their policy choices.

The second deficiency of the fashionable view is that it treats the nature of knowledge as politically unproblematic. It assumes that risks can be assessed and the assessments explained in a value-free and politically neutral manner. But as the next section explains, political issues are at stake even in setting research agendas and selecting laboratory procedures. It is unsafe to assume that the body of scientific knowledge concerning a technological controversy will be accepted as unbiased.

The third deficiency is the mistaken assumption that net risks to human life and health are the only important issue in technological disputes (see Otway and von Winterfeldt, 1982; von Winterfeldt and Edwards, 1984). Among the other major sources of conflict over technology are differing impacts of the costs and benefits on different social groups; value differences (such as about the wisdom of risk taking; the acceptability of short-term versus long-term risks or benefits; and the relative importance of river vistas, small towns, or endangered species as against income or employment); and mistrust in expert judgment. Experts on technological risk issues agree that all these sources of conflict are of major importance, but there is no majority view on which is most central (Dietz, Stern, and Rycroft, 1989). Risk perception is not the whole problem, so risk communication, if this simply means transmitting scientific knowledge about risk to nonexperts, is unlikely to be the solution. A focus solely on risks diverts attention from other critically important sources of conflict that science cannot decide.

The fourth deficiency is an implicit misunderstanding of the nature of communication. To many writers, "risk communication" means the delivery of messages characterizing expert knowledge to a nonexpert audience. But there is much more to the communication process in technological debates than the content of these risk messages. The process also involves messages about risk between nonexperts and from nonexperts to experts, as well as messages "that express concerns, opinions, or reactions to risk messages or to legal and institutional arrangements" for making decisions about risk (National Research Council, 1989). In short, risk communication is not information transfer, but a type of political discourse. A viewpoint that implies that only the risks matter and only messages from experts to nonexperts are important is unrealistic because access to communication channels cannot be restricted to those carrying approved messages. Attempts to do so will also cause resentment among nonexperts who want to exercise their legitimate political role, and especially among those who mistrust expert judgment.

It is dangerous to misunderstand risk communication because effective decision making about risks depends on it. Nonexperts need to gain understanding of expert knowledge, and the problems of delivering this knowledge are extreme. Some of the problems are well known. It is not always obvious which facts ought to be presented because when a technological choice is controversial, knowledge about the attendant risks is typically uncertain and disputed. In addition, most citizens have difficulty understanding what scientists mean when they describe low-probability events or estimate the uncertainty of their own knowledge. But there is a deeper and less commonly acknowledged problem. Conflicting messages are inevitable in technological controversies, and would be, even if scientists agreed about what is known. This article explains why conflict between risk messages cannot be avoided and the problems this state of affairs presents for improving nonexperts' understanding. It shows why improving the design of risk messages is an inadequate and unrealistic strategy for the purpose. Finally, it outlines two approaches that harness the inevitable conflict to give nonexperts a basis for making a balanced interpretation of available knowledge. These approaches aim to enable nonexperts to learn through conflict what they cannot learn from carefully crafted risk messages.

The influence of conflict on knowledge

The scientific study of risk issues is political in its effects. That is, scientific information can affect the distribution in the society of power and material resources. A claim that a particular chemical causes cancer in laboratory animals - or a claim that no such effect is known - can affect the market shares of corporations, the jobs of industrial workers, the schedules of courts, debates in Congress, and the research and enforcement priorities of federal agencies. Because of such effects, political and economic interests and social movement groups often produce knowledge about technology-related hazards and participate in scientific debates. When opposing groups can sponsor scientific analyses, scientists will produce findings that at least appear to conflict; when opposing political actors sponsor assessments of existing knowledge, assessments will conflict. Political conflict is embodied in science through choices about research agendas, methodologies, and the assessment of knowledge.

Research agendas

Because research agendas determine which hazards are most likely to be found and measured, they can help bring about regulation or demands for it, changes in consumer behavior, and liability lawsuits. Different research agendas are likely to benefit different groups. For instance, research on genetic engineering of disease-resistant crops tends to benefit major seed and chemical companies, but research on the environmental costs of monoculture may cost them. Similarly, efforts to monitor environmental pollutants and identify their sources tend to advance the political agendas of environmental groups and obstruct the interests of major producers of those pollutants; research into the economic costs of pollution control often has the opposite effect.

Because research agendas can shape political agendas and social decisions, the shape of a developing body of science - the balance between what it emphasizes and neglects - has political effects, even if political pressures were not the cause. Similarly, technically competent research, which can be done within any of numerous research agendas, can be political by reason of its effects. Thus, good research is not the same as unbiased science. When science becomes part of a political controversy, research is criticized publicly for the questions it does not address as well as for the quality of its methods and the validity of its conclusions.

Research methods

Research methods can also help to generate or forestall pressure for regulation. Choices of equipment, laboratory procedures, and mathematical models may make the difference between a conclusion that a product or activity is hazardous and a conclusion that it is not, or that no hazard has been found.

The development of more sensitive monitoring equipment for chemicals illustrates the point for laboratory equipment. Technological advances have made possible detection of chemical concentrations of 10⁻⁹ where earlier only concentrations of 10⁻⁶ could be detected (Lowrance, 1976). One effect has been to activate more frequently those laws and procedures that take effect once a risk is shown to exist; a secondary result has been pressure to make those laws and procedures less absolute. Increasing detection ability can also bring about pressure for political action by qualitatively changing public perceptions: what was once seen as safe comes to be seen as dangerous, and social pressure for action appears where there was none before. More sensitive equipment makes it far more likely that a scientist will truthfully report, for instance, that dioxin ('the most hazardous substance known to man,' journalists often add) has been detected in a sample of ground water. Most people have difficulty discriminating between different very low probabilities (Lichtenstein et al., 1978), so what may make the greatest impression is not the probability value, but whether a hazard is present or absent.

Choices of laboratory tests can have similar effects. Quick tests of carcinogenetic potential based on mutagenicity in bacteria (Ames, 1979) make it relatively easy and inexpensive to identify some chemicals as carcinogens and thus to trigger public concern. This situation considerably raises the stakes in scientific debates about the relevance of tests on microorganisms to human carcinogenicity, bringing into the controversy any group with a stake in a substance that fails an Ames test and all the scientific resources that group can command.

Choices about mathematical models of risk also have the potential to alter policy conclusions. For instance, estimates made by U.S. government agencies of the carcinogenetic potency of tetrachlorodibenzo-p-dioxin have varied by a factor of more than 10¹³ depending on the agencies' choices of which model to use for extrapolating from high to low doses (Anderson, 1988). Other technical features of models have less dramatic effects on risk estimates, but can still produce many-fold differences in them. These include the choices of whether to extrapolate from animals to humans on the basis of body weight or surface area, what rate of metabolic conversion to assume for toxic substances, and which organ to examine for tumors in test animals (Anderson, 1988).
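
To make the stakes of model choice concrete, here is a minimal numerical sketch contrasting a linear no-threshold extrapolation with a quadratic (sublinear) one, both anchored to the same hypothetical high-dose observation. The dose and risk figures are invented for illustration and are not the agency analyses cited above.

```python
# Illustrative only: two dose-response models calibrated to the same
# hypothetical high-dose finding can diverge enormously at low doses.
# The numbers are invented; they are not the agency estimates cited above.

HIGH_DOSE = 1.0          # dose (arbitrary units) used in the animal study
HIGH_DOSE_RISK = 0.1     # excess lifetime cancer risk observed at that dose

def linear_no_threshold(dose):
    """Risk proportional to dose, anchored at the high-dose observation."""
    return HIGH_DOSE_RISK * (dose / HIGH_DOSE)

def quadratic(dose):
    """Risk proportional to dose squared, anchored at the same observation."""
    return HIGH_DOSE_RISK * (dose / HIGH_DOSE) ** 2

low_dose = 1e-4          # an environmental exposure far below the study dose
lin = linear_no_threshold(low_dose)   # 1e-5
quad = quadratic(low_dose)            # 1e-9
print(f"linear model:    {lin:.1e}")
print(f"quadratic model: {quad:.1e}")
print(f"ratio: {lin / quad:.0e}")     # four orders of magnitude apart
```

Both models fit the tested dose equally well; they diverge only on an assumption the data cannot settle, which is precisely where interested parties find room to argue.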

Technical choices can also affect who bears risks. For instance, in comparing immediate health and safety risks, such as occupational accidents, with delayed ones, such as cancers, risk analysts may discount the delayed risks to reflect, among other things, the possibility that cancer prevention and treatment will improve before the projected cancers occur. The appropriate discount rate must be guessed because it depends on the rate of future progress in medicine, the likelihood that technology can solve future problems it may create, and the value placed on technological risk-taking. Yet basing policy on a discount rate determines who bears the risk. Policies based on high discount rates protect people who face immediate hazards, whereas policies based on low discount rates protect people who are exposed to delayed hazards. The choice to quantify deaths either as lives lost or life-years lost has a similar effect because the first measure gives more weight to cancers affecting the old than does the second. Because of the political effects of technical decisions, interested groups can be expected to argue for quantifying in ways that highlight the risks to which they are exposed or obscure the risks for which they may be blamed.
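
A minimal sketch, using invented hazard figures, of how the choice of discount rate alone can reverse which hazard appears larger:

```python
# Illustrative only: how a discount rate changes the comparison between an
# immediate hazard and a delayed one. All figures are hypothetical.

def discounted_deaths(deaths, years_delay, rate):
    """Present-value weight given to deaths occurring years_delay from now."""
    return deaths / (1.0 + rate) ** years_delay

immediate = 10     # occupational accident deaths per year, felt now
delayed = 100      # cancer deaths per year, appearing about 30 years later

for rate in (0.00, 0.03, 0.10):
    weighted = discounted_deaths(delayed, years_delay=30, rate=rate)
    larger = "delayed" if weighted > immediate else "immediate"
    print(f"rate {rate:4.0%}: delayed hazard counts as {weighted:6.1f} "
          f"-> {larger} hazard dominates")
# At 0% the cancers dominate (100 > 10); at 10% they count as about 5.7 (< 10),
# so the ranking reverses even though nothing about the hazards changed.
```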

Assessment of scientific knowledge

The meaning inferred from existing knowledge has obvious political implications. For instance, a conclusion that scientific knowledge is incomplete preserves the policy status quo. For registering a drug in the United States, where a series of tests must be passed before approval, such a decision prevents registration and therefore costs the drug company, hurts potential users who might have benefited, and helps those the drug might have harmed. For regulating industrial chemicals, where the policy regime does not require prior testing, a determination of incomplete knowledge has the reverse effects. It benefits producers who can go quickly to market while it distributes unknown costs and benefits to production workers, consumers, and others. For such reasons, political actors commonly participate in the scientific debate about whether available evidence is adequate for decision. The most familiar example, of course, is the perennial dispute over the health effects of cigarette smoking, featuring arguments by the Tobacco Institute that a causal relationship has not yet been demonstrated.

Political actors also debate about the practical meaning of accepted scientific findings. An obvious example is the problem of inference from studies of laboratory animals' responses to high doses of chemicals or radiation to human responses to the much lower doses received under conditions of living and working. As already noted, differences between reasonable scientists about the shape of the curve to use for extrapolation can lead to risk estimates varying by many orders of magnitude (also see Whittemore, 1983); some researchers have concluded that the uncertainty is so great that absolute risk estimates cannot be justified (Ames, Magaw, and Gold, 1987). Obviously, different judgments about the risks of particular levels of exposure can have vastly different implications for regulatory, legislative, and judicial action, so political actors have an incentive to develop scientific arguments that favor their interests.

Implications for risk analysis

If political agendas help shape the environment in which knowledge about risk is produced, does politics poison that environment? To put the question another way, the climate of political conflict makes adversarial science inevitable: Can knowledge advance in such a climate?

In fact, opposing political agendas can be good for science. Here is an example from the politics of risk. In 1979, a consultant to the nuclear energy industry published a study in Science, concluding that nonconventional energy sources are more risky than nuclear and other conventional sources of energy (Inhaber, 1979). The study, which was widely publicized by the nuclear industry, was immediately attacked by scientists who held opposing views on the risks of nuclear power (e.g., Holdren, Smith, and Morris, 1979), largely on grounds of inappropriate assumptions and calculations. The critics pointed out, for example, that the original study assumed all "nonconventional" energy systems would include a complete back-up system of coal-fired electricity with all its attendant risks: it assumed, in effect, that all such systems would be superfluous. The original researcher's political agenda may have distorted the research in this instance, but researchers with other biases published criticism, and the conflict over method arguably advanced knowledge.

This is not an isolated instance. Imperfect science and rancorous disputes have been common at least since the early conflicts between science and religion over the heliocentric theory of the universe and the study of human anatomy. More recent scientific history includes a century of conflict, deeply rooted in political and social preconceptions, over nature versus nurture as the main cause of human individual differences (Gould, 1981). Contrary to popular myth, scientists are not, as a rule, dispassionate seekers after truth. Their passions can blind them to the weaknesses of their own methods and arguments and to the strengths of other views. When science advances despite these human limitations, it does so by the operation of a set of norms that allow the scientific community to uncover individual errors and produce a better collective understanding than individual researchers can develop.

The relevant scientific norms are widely known, but I review them here because I believe they regulate a process of learning through conflict that holds some lessons for risk communication. Scientific data are not taken seriously until they appear in open publications. Draft scientific reports are reviewed by experts, who normally include theoretical opponents, before publication. They are not published unless they provide enough procedural detail to allow a skeptic to replicate the study. Failures to replicate are supposed to be published in the open literature as well (although in practice it is more difficult to publish a non-finding than a finding). In short, scientific norms ideally harness conflict to expose and correct inaccurate methods, weak reasoning, and fraud.

I do not mean to suggest that the system of arriving at truth through dialogue and dispute is infallible, nor that it can be relied upon to banish error in the short term. Widely shared presuppositions sometimes make major lines of investigation unthinkable to whole research communities for long periods of time. Numerous examples of long-standing scientific bias may be cited, including the assumption that social development could be understood from the study of males only (Gilligan, 1982), the long-untested assumption that viscera cannot learn from experience (Miller, 1969), the assumption that genetic information was 'strictly contained in the autonomous gene' (Keller, 1983: 170), the failure of most scientists before the 1960s to consider ways human action might affect whole ecosystems, and the continuing failure of risk analysts to consider carefully human actors as part of 'technological' systems (Freudenburg, 1988). Some biases are reinforced by the social and economic structure of science, which favors producing information on the benefits of transforming and manipulating nature over information on the costs (Schnaiberg, 1980; Dickson, 1984). Thus, even collective efforts of large scientific communities are not immune from systematic omissions or biases. Nevertheless, the present system of controlled dispute probably serves truth better than a system more restrictive of open expressions of conflict. Knowledge can advance despite conflict, and even because of it, in a scientific community that recognizes that each of many lines of research, research methods, theoretical models, and operational definitions may shed some light on the questions at issue, that encourages sufficient competition of ideas, and that uses publication and debate over competing analyses to give a more complete picture.

How well the competition of ideas advances knowledge about technological risks is open to debate. Effective competition of ideas depends on a wide range of viewpoints being well represented by competent research, in order to reveal important errors in received wisdom. As Lester Lave has put it, '[T]he process of conducting and defending risk analyses ... must be opened to new data and models' (Lave, 1987: 294). A potential problem, therefore, is that resources for conducting and disputing risk analyses are distributed quite unequally across the relevant viewpoints and interests. A recent survey of people professionally involved in commissioning, conducting, arguing over, and acting on risk assessments found that of those employed by environmental organizations, only 8% had graduate degrees in the physical sciences, engineering, mathematics, or statistics, compared to proportions varying from 1/6 to 1/3 in the other types of organizations participating in risk debates (Dietz and Rycroft, 1987). In principle, government agencies act as well-financed independent observers operating in the public interest, but they may be biased by the disciplinary backgrounds of their scientists, the partisan agendas of politicians, an insufficiency of resources to pursue certain lines of research, or the influence of outside interest groups. Documented charges of such bias have become frequent in recent years (e.g., Smith, 1983; Sun, 1984a, b; Marshall, 1987). Therefore, it is necessary at times to rely on analyses from outside the government to ensure that the right questions are addressed. Although it is difficult to redress an imbalance of resources, the appropriate strategy for building scientific knowledge is clear.

The influence of conflict on risk communication

In a commonly held view, the essence of risk communication is devising messages the nonexpert audience will understand while remaining true to the content as the experts understand it. Underlying this view is the belief that all good risk messages say essentially the same thing - that a carefully developed strategy for designing messages can present to nonexperts a single, unbiased characterization of available knowledge. This section explains why this is an unrealistic goal. It shows why the concept of "unbiased message" is undefinable, how accurate messages can conflict, and why the potential to use risk messages to engender misunderstanding is inherent in the communication process.

What better risk messages can accomplish

It is possible, though not easy, to design clear, comprehensible, and nontechnical accounts of knowledge about risk. The difficulty is that such knowledge is complex and uncertain, so that messages about it can easily confuse an audience or convey an unintended message. For instance, many people have trouble understanding and interpreting information about low-probability events. So, some individuals may understand 'about as dangerous as a chest X-ray' to signify a serious risk, even if the message designer intended to convey that the risk was minor. Many people have trouble understanding the significance of scientific uncertainty. So, some individuals interpret any indication of expert disagreement as meaning that the risks are essentially unknown. In addition to these problems of misunderstanding, the complexity of risk issues makes it likely that important segments of intended audiences will simply ignore messages addressed to them.

Such serious communication problems can be addressed by building on a base of knowledge about the processes of communication, cognition, and attention. Research on these topics has already suggested some general guidelines for the design of clear, comprehensible risk messages. For instance, some researchers advise that communication should be two-way because someone who wishes to inform an audience about risks needs first to understand what that audience wants information about. They also advise that risk comparisons be between risks perceived to have similar qualitative characteristics. This advice is supported by an impressive body of research (e.g., Gould et al., 1988; Hohenemser, Kates, and Slovic, 1983; Slovic, 1987), and additional research can make it more specific (Covello, Sandman, and Slovic, 1988). A developing body of knowledge on the use of graphs and tables can help message designers make quantitative information clearer (e.g., Cleveland, 1986; Rosenthal and Rubin, 1982). And other general advice for effective message design can be developed through new empirical research. For instance, studies of particular audience groups could easily specify what it means to use risk comparisons that are relevant to the target audience or to present risks in language the audience can understand.

Why better risk messages cannot resolve conflict

Although making risk messages clearer and more comprehensible makes them more effective, it does not make them more neutral or unbiased. In a climate of conflict, more effective risk messages are more effective politically as well. Therefore, to the extent techniques are developed for making risk messages more effective, political actors will be tempted to use these techniques for their own ends. In this way, improved techniques could lend advantage to the political actors, typically corporations, that have the most abundant resources for using them. Those actors could attract a more attentive audience for their interpretations of knowledge and thus gain advantage in policy debates.

It may seem that the political use of risk message technology can be minimized by ensuring that risk messages are accurate and unbiased, but this goal is unattainable. If all risk messages accurately reflected available knowledge, they would still engender intense conflict because there are different ways to tell the same truth that create different impressions on audiences. It is therefore impossible to determine which among the many possible accurate messages are unbiased. The problem lies in the fact that messages do more than simply convey information. They highlight whatever information they present, drawing attention away from other aspects of the issue at hand, and they give meaning to information. Because a risk message highlights information, it is less a mirror of scientific knowledge than a telescope: it makes the knowledge much clearer, but we see only part of it. Because it gives meaning, it provides less a photograph of reality than a portrait, or sometimes a caricature.

Highlighting. Like selecting a research agenda, highlighting has political effects because it focuses attention. Because risk messages for nonexperts involve simplifying and selecting from existing knowledge, they entail choices about which information to present, choices that have the potential to influence audience judgments. Numerous psychological studies show how highlighting information, or making it more 'available,' affects decision making (Tversky and Kahneman, 1973; Kahneman, Slovic, and Tversky, 1982), and activists on all sides intuitively understand the principle. Message designers with different interests can be expected to make different simplifications, according to their judgments of what emphasis best advances their causes.

For instance, political actors assume that messages that identify the major risk-bearers are more likely to arouse them to political action than those that present average risks in a large population; that messages that omit uncertainty in favor of best-guess point estimates are less likely to incense those who consider some of the possible outcomes totally unacceptable; and that messages that emphasize the costs of reducing a risk are more likely to produce support for a controversial technology than messages that emphasize the costs of not reducing the risk. Message designers must decide whether to report on probable illnesses as well as probable deaths. Excluding illnesses for the sake of simplicity is bound to change the way some people understand the situation. They must choose whether to mention possible effects on ecosystems, possible synergistic effects between exposures to different hazards, and effects on especially sensitive populations. Calling attention to these risks, even if they are minuscule, may make the total risk look larger. If they choose to disaggregate whole-population risks, they must decide how to disaggregate. And they must decide whether to enumerate potentially significant risks that have not been analyzed. Doing so gives ammunition to those who believe the true risk has been underestimated. Giving more information sacrifices clarity, gains accuracy, and may also shift the focus of an audience's attention, and thus affect the ensuing debate. In short, alternative simplifications of the same information can be equally accurate, but have different political effects.

Framing. Risk messages, by giving meaning to knowledge, can also have political effects. Cognitive psychologists have shown that such 'framing' can affect audiences' judgments (Tversky and Kahneman, 1981). The classic example is the evidence that most people are willing to gamble on a medical treatment that promises to save 200 of 600 people threatened by an epidemic, but few will take the same chance on a treatment that would reduce the expected death toll from 600 to 400. The number who would live or die is the same; the only change is from a focus on saving lives to losing them.

Messages 'frame' risks whenever they address uncertainties in risk estimates: they choose between presenting the full range of available estimates; presenting a restricted range, say one or two standard errors around a point estimate; or offering only a point estimate drawn from a model or an expert consensus process. Point estimates give the simplest, clearest message, but they can be dangerously misleading when scientific uncertainty is great.

The facts often leave wide latitude for judging how much uncertainty to present. As already noted, government agency estimates of the potency of carcinogens have varied by as much as a factor of 10¹³ (Anderson, 1988). They might vary even more widely were it not for the common practice of using some identical analytic methods (e.g., bioassays) across agencies. A variation of 10¹³ in a risk analysis would span a huge range: if the low end of such a range means one premature death in the United States in 40,000 years, the high end is the deaths of the entire population in one year. A public official who presented such a range of possible outcomes would be ridiculed or worse, so there is an irresistible temptation to narrow the range of possibilities. But by how much? Would it be best to say that there may be 100 extra deaths per year (the geometric midpoint of the range), that deaths may range from one in the U.S. per 100 years to 10,000 per year (a range of 10⁶ around the midpoint), or that there is a very small but nonzero probability of wiping out the whole population? And how is one to decide? Of course, the best science should be used to establish the degree of uncertainty as precisely as possible, but the extent of uncertainty can be hard to determine, especially when it is part of a politically tinged debate involving scientists whose judgments reflect different policy predilections. In an important debate of the 1970s about the risks of nuclear power plant operation, the industry suffered politically when it was shown that previous analyses had underestimated not the risk itself but the uncertainty of the risk estimate (Nuclear Regulatory Commission, 1978).
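
As a rough check on the arithmetic in the preceding paragraph, the sketch below reproduces the 10¹³ span and its geometric midpoint; the U.S. population figure (about 250 million, circa 1990) is an assumption added for illustration.

```python
import math

# Rough arithmetic check of the 10^13 range described in the text.
# Assumption: U.S. population of about 250 million (circa 1990).

US_POPULATION = 250e6
low_end = 1 / 40_000                 # one premature death in 40,000 years
high_end = low_end * 1e13            # roughly the whole population in one year

span_orders = math.log10(high_end / low_end)          # 13 orders of magnitude
geometric_midpoint = math.sqrt(low_end * high_end)    # about 79 extra deaths/year

print(f"high end: {high_end:.2e} deaths/year (vs. population {US_POPULATION:.2e})")
print(f"span: {span_orders:.0f} orders of magnitude")
print(f"geometric midpoint: {geometric_midpoint:.0f} deaths/year")
# The midpoint lands near the 'about 100 extra deaths per year' figure in the text.
```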

Often, a political consideration intrudes into descriptions of uncertainty: any announcement that catastrophe is possible, even at very low probability, is likely to generate strong political pressures against whatever alternative has that potential. Such messages 'alarm the public.' The decision about what message to present is affected by a judgment of whether the public ought to be alarmed, a judgment that is easily influenced by momentary political or bureaucratic pressures, but that is hard to make dispassionately by analysis of the comparative social costs of false alarms versus false reassurances. Thus, it is hard for a message designer to make an unbiased judgment about how much uncertainty to describe, but easy to submit to political pressures on that judgment.

Another example illustrates how a single set of facts can be 'framed' in different ways to promote different policy agendas. A Washington Post story on the Soviet government analysis of the potential health effects of the Chernobyl disaster ran under the headline, 'Chernobyl Report Surprisingly Detailed but Avoids Painful Truths, Experts Say' (Smith, 1986). The 'painful truths' were that the disaster may result in 35,000 to 45,000 cancer deaths in the Soviet Union and that 'as many as 90,000 people could be affected by the recent explosion.' What the Soviet report said was that fatalities would be 'less than 0.05 percent in relation to the death rate due to spontaneously arising cancer.' As this percentage works out to 35,000 to 45,000 premature deaths over the lifetimes of the people exposed, the Soviet report is no less accurate than the U.S. reformulation of it. But the two messages are very different. A nuclear physicist with the Natural Resources Defense Council was quoted in the article as saying, 'This is a time-honored way to minimize health effects. ... It's a common way to hide the truth.'
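
The claim that both framings describe the same facts can be checked by back-calculating from the quoted figures; the baseline implied below is an inference from those figures, not a number reported by either source.

```python
# Back-calculation from the figures quoted in the text: if 'less than 0.05 percent
# relative to spontaneously arising cancer' corresponds to 35,000-45,000 premature
# deaths, what lifetime baseline of spontaneous cancer deaths is implied?
# The implied baseline is an inference, not a figure stated in either report.

FRACTION = 0.0005                      # 0.05 percent
for excess_deaths in (35_000, 45_000):
    implied_baseline = excess_deaths / FRACTION
    print(f"{excess_deaths:,} excess deaths -> implied baseline of "
          f"{implied_baseline:,.0f} spontaneous cancer deaths among those exposed")
# Both framings describe the same quantity: a tiny percentage of an enormous
# baseline is still tens of thousands of premature deaths.
```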

Many other examples could be cited or created to show how the same truth can be made to look safe or risky. In efforts to encourage the use of seat belts, researchers have tried framing the risks as the probability of death or injury in a lifetime of driving rather than as the probability of mishap on a single automobile trip, to make the danger appear greater (Slovic, Fischhoff, and Lichtenstein, 1978). And it may be easy to use comparisons of risks to influence and mislead people, because it is known that people generally underestimate the number of annual deaths from some causes (e.g., asthma and lightning) and overestimate the death toll from others (e.g., tornadoes, botulism) (Lichtenstein et al., 1978). Thus, comparing a risk to the probability of death by lightning would probably make it look smaller than it is. All of these examples illustrate that an 'unbiased' message cannot be defined: there are no evident guidelines for choosing between messages that are equally clear and accurate but that are likely to create different impressions and consequently have different practical consequences.

Scientific definitions of 'bias' do not give practical guidance. The statistical definition of bias refers to the relationship between the observed mean and the 'true,' or population mean; it does not define bias in presenting uncertainty or in making risk information meaningful to nonexperts. And the judgment of scientific experts is not a good guide either. To reliably estimate the uncertainty of their own risk estimates, scientists need an adequate base of data, such as weather forecasters obtain over time. But where data are lacking - typically the case with controversial choices - scientists tend toward overconfidence in their ability to estimate and present overly narrow bounds for their uncertainty (Lichtenstein, Fischhoff, and Phillips, 1982). They have even sometimes omitted from consideration as too improbable to estimate events that have later come to pass.

Because there is no one 'unbiased' way to present available knowledge, risk messages can typically be contradicted by equally accurate messages. This outcome is assured when a choice about risk provokes conflict. Truthful messages will often be attacked as distorted by those who interpret the facts differently. Better risk analysis and further clarification of areas of scientific consensus and disagreement will not eliminate conflict because disputes will continue about what the proper emphasis of risk messages should be and about how much uncertainty a risk message should represent to a nonexpert audience. Better understanding of message design will not eliminate conflict either. It cannot change the pressures to make accurate messages support different political positions, and it may even give political actors better ways to persuade people subtly, without doing violence to the facts. Neither natural nor social science can eliminate conflict between risk messages because there is no scientific basis for determining bias in factually accurate messages. The often-noted loss of public confidence in scientific experts must be understood in this context: Science is being asked to provide a single image of what is known when there are only different perspectives - and scientists throw fuel on the fire when they believe they can fulfill the request.

Implications for risk communication

Interested citizens have become intuitively skeptical about risk messages. They have experience with deceitful or false messages and seem to be generally aware that the thrust of risk messages can readily be predicted from the political agendas of their sources; that truth can take either side in a dispute depending on which parts of it are highlighted and how it is framed; and that designers of risk messages can package the truth to convey a message favorable to the interests they represent.4

Given this healthy skepticism, what might be done to get trustworthy information to nonexperts, including public officials? The above discussion makes clear that one obvious strategy, to set standards for the presentation of risk information, would be impossible to implement. There is no scientific rationale for requiring one form of presentation over another. And even if uniform standards were imposed, for example, in all U.S. government agencies, various interested parties outside government would exercise their Constitutional rights to use other methods of presentation, and citizens would still be exposed to the underlying disagreements about the practical significance of available knowledge. The search for the definitive risk message or the single, trustworthy information source is fruitless.

A variant on the prescriptive strategy is to give general, voluntary guidance to risk message designers. They might, for instance, be admonished to recognize the power of subtle changes in the way information is presented and use such knowledge responsibly. But even if guidelines could specifically describe responsible ways to use this recognition, they could not ensure that message designers would be 'responsible.' One may wish them to be responsible to a broad public interest, but they in fact are responsible in more immediate ways to their employers and their employers' constituencies, which may be partisan in any controversy. So, it is unwise to rely heavily on message designers as individuals to be responsible. It is more prudent to expect that they will respond to the incentives in their immediate environment and that the communication system will continue to be filled with conflicting messages.

Strategies focused on risk messages are unrealistic because they ignore two basic facts of life, already mentioned, about technological controversies in democracies. The first is that the environment of risk information is inherently full of conflicting messages. Even if some public agency could consistently produce reliable scientific assessments and could devise clear and accurate descriptions of them, conflict over the characterization of risks would not disappear. The agency would come under politically inspired pressure to change its messages, and other political actors would produce and disseminate conflicting packages of information or frame the same information in alternative ways. As long as controversy continues and error has high potential costs, dissident claims will demand and receive news coverage and public attention. In a democracy, regardless of what state agencies say, alternative truths will always be presented.

The other fact of life is that interested parties have unequal resources for collecting and interpreting the scientific knowledge on which risk messages are based. Because policy choices in the United States are strongly influenced by scientific and technical knowledge and because the interpretation of such knowledge is inherently controversial, groups that lack resources to command scientific expertise are at a serious disadvantage in political debates.

The final section of this article addresses two approaches to risk communication that are more realistic than those that focus on risk messages because they are more attentive to these facts of life. Rather than trying to tell people what 'the facts' are, they try to make it easier for citizens to make their own judgments, informed by the conflict. One approach is educational, and aims to give citizens a better general understanding of science and communication, so that they can become wiser interpreters of risk messages. The other is structural, and attempts to harness the inevitable conflicts to give citizens and public officials a picture that involves a range of viewpoints.

Realistic approaches to risk communication

Education

The risk communication process can be improved indirectly by education that equips citizens to participate more effectively in it. The goal for basic education should be to replace the widely taught misconception that science can establish single unambiguous truths with an appropriate understanding of the meaning of scientific disagreement. To accomplish this, science curricula need to be more honest about the scientific process: progress occurs not by discoveries of truths that are immediately recognized as such, but by a continuing search for truth among competing theories and conflicting evidence. Citizens with this basic understanding will be less surprised and confused by conflict between scientists and more ready to respond to the conflict by searching actively for practical understanding.

The goal for adult education should be to make citizens more discriminating consumers of risk messages. Accomplishing this might include specific efforts to inform people about how highlighting, framing, and other message design techniques can be used to influence audiences' judgments. Specific knowledge of this sort might help refine citizens' diffuse skepticism of experts' messages and enable them to ask pointed questions about particular elements of messages. Educated consumers would raise the standards for risk messages by unmasking the cruder half-truths and distortions and discrediting their perpetrators; message designers would have an incentive to be more careful.

The educational strategy is valuable because it recognizes the inevitability of conflict between risk messages and helps prepare the consumers of those messages to make intelligent use of that conflict. In addition, because it is decentralized, education can raise the level of debate even in locales and on issues that lack organized groups to represent the full range of values and interests at stake. But the educational strategy is incomplete because it does not address the inequality of resources in risk communication. Nonexperts educated about the rhetorical uses of science and statistics will always be at a disadvantage in a debate with technical experts equally educated in rhetoric.

Structural change

A structural approach to risk communication begins with the view that conflicting messages are not only sources of confusion but also perspectives on the truth. It assumes that truth emerges from dialogue in which all sides of an issue are represented. To compensate for inequality of resources, a structural strategy should aim to ensure that various perspectives are adequately represented by risk messages that make those perspectives and their implications clear. In short, the goal is for the risk communication system to promote an improved discourse leading to well-informed, democratic choices (Habermas, 1970; Burns and Ueberhorst, 1988).

A structural approach employs principles, such as checks and balances, openness, equal access to communication channels, and separation of powers, that are used in scientific communication and, more to the point, are central to democratic resolution of political disputes. Controversies about risk are something like scientific controversies, but at bottom, they are political.

Consider welfare policy as an analogy. In welfare debates, the same facts may be used by some people to argue that a program of aid to indigent families with dependent children protects the children, and by others to argue that it subsidizes parental irresponsibility. By presenting both interpretations of the facts, the political system gives citizens a clearer idea of what is at stake than they might glean from only one message, or from a dry cost-benefit analysis. In like manner, different risk messages based on the same knowledge may show that society can expect a small increase in the death rate, which amounts to a large number of excess deaths, or that few will be harmed, but that the harm will be concentrated in one or two vulnerable groups. In both welfare policy and risk management, accurate but apparently conflicting messages can give information about what is at stake for different constituencies. Such information can be as important for an informed public debate as technical information, such as about the probability of cancer given a particular exposure to a chemical. It is certainly as important for reaching political solutions. This section suggests some ways of implementing structural changes to make conflict between risk messages better serve the goal of informing democratic choice.

Countervailing incentives. If risk analysts and message designers are under pressure from their employers or other powerful interests, it may help to devise countervailing incentives to support those who resist pressure. This might mean systems of protection for risk-communication whistle-blowers in public agencies - people who expose efforts to put pressure on risk analysts to skew their messages for political purposes. The record of protection for whistle-blowers in government is spotty, which may mean that effective protections of this type can be implemented only with strong backing from the highest levels.

Independent evaluation of risk messages. One might imagine a method of peer review for risk messages similar to that used for scientific reports. Government agencies might create extramural peer review panels to work with their scientific analysts and public relations departments to ensure that risk messages meet guidelines for clarity, comprehensibility, and accuracy. Industrial companies, trade associations, and environmental groups might do the same.

Of course, accuracy is not enough. Agencies, industrial groups, and environmental organizations that aim to inform rather than to influence their audiences might use a prepublication review process to seek 'balance' in their messages.5 They might use methods developed for the purpose by such institutions as the National Science Foundation, the Office of Technology Assessment, or the National Research Council. NSF has routines for reviewing scientific messages to the public, which it uses in preparing its NOVA television series. OTA and NRC deal with the impossibility of unbiased messages by constituting their panels to represent a range of viewpoints, so that a consensus of the panel yields a message that is accurate and also 'balanced' in the sense that it is acceptable to experts with a range of judgments and political positions. Where extreme controversy is expected, consensus panels might be convened to actually devise messages that would be understandable to a broad audience, faithful to available knowledge, and acceptable to representatives of a wide range of views.

Although the strategy of seeking balance often gives consensus panel reports wide credibility, it is, of course, not the end of risk communication. Precisely because consensus messages reject some of the prominent viewpoints, political actors whose viewpoints were not represented or did not prevail will continue to publicize their own versions of the truth. The continued conflict may frustrate participants in a consensus process, but it sometimes helps advance the policy debate.

Watchdog organizations. Because even 'balanced' messages are sometimes contested, it may be helpful to systematize the process. Interested institutions sometimes act as watchdogs, providing independent evaluations of risk messages that do not claim to be balanced, but rather to provide accurate information from an interested perspective. A good example is the Consumers Union, which works in consumers' interests to monitor claims about commercial products and services. Other consumer and environmental groups perform the same function informally in risk debates by writing articles, making public statements, and filing testimony to question risk characterizations they find confusing or misleading. Such analyses are highly credible to particular audiences, which rely on them in deciding whether to trust risk messages from other sources, including consensus panels. The effectiveness of watchdogs as checks depends, of course, on their competence and command of sufficient resources across all the issues and in all the locales where they are needed. At present, those conditions are not met.

Institutionalized debate. Risk communication might be improved by opening the present system of institutionalized debate, which occurs mostly in regulatory proceedings, legislative hearings, courtrooms, and other such settings. Few citizens can participate meaningfully in these forums, in which the key participants are specialists in science and law who interpret basic research to important nonexperts, such as judges, legislators, and regulatory officials. Many of these specialists act as lobbyists or legal counsel for political actors and thus serve as proxies for segments of the general public. But the process is an incomplete proxy for a public debate, because even though it involves presentation of conflicting representations of expert knowledge to nonexperts, the special rules of each setting alter the process, and the debate is most often out of public view.

It may be useful to devise structures of institutionalized debate that make it easier for interested citizens to see the debate - to gauge the extent of disagreement, to tell the center of opinion from the fringes, and to discern the most important bases of disagreement. Institutions such as the Scientists' Institute for Public Information, which helps journalists identify the opposing sides and find articulate spokespersons for each, are one structure of this kind. Another is the debate-type programs that sometimes appear on public television. And it may be useful sometimes to modify the old idea of a science court, not to reach final decisions on matters of fact as if they can be sharply distinguished from matters of value, but to provide citizens with a new information source: the considered judgment of a jury of individuals of varying political persuasions who have taken time to consider the conflicting arguments about a particular set of risks and to make their best estimate of which knowledge is secure enough to rely on.

Distributing resources for risk communication. As already noted, the wherewithal for designing and disseminating risk messages is unequally distributed in the society. The most thorough efforts are probably made by industrial groups whose interests can be deeply affected by public perceptions of risks associated with their industries. Johnson and Johnson is said to have commissioned several national surveys in the wake of the Tylenol scare to guide its public response, and the electric power industry has spent millions on advertising campaigns to bolster the public image of nuclear energy. The federal government is probably a distant second in expenditures on risk messages, and local governments, environmental groups, consumer activists, and community groups undoubtedly lag far behind. This skewed distribution of resources may result in a net bias in risk messages where industrial interests are involved. It almost certainly results in a perception of bias and in a diminution of public trust in risk information.

It is possible to even out the distribution of resources somewhat by publicly funding watchdog organizations or even by giving poorly endowed groups support for developing and disseminating accurate risk messages that embody alternative viewpoints. Although this strategy might raise the temperature of the debate and would certainly be opposed by interests with large communication budgets, it would make it easier for interested citizens to learn how the risks are understood by groups they consider trustworthy. A less controversial approach would be to support information clearinghouses or hotlines so that information from groups less well-endowed with advertising and dissemination funds will be readily accessible to interested members of the public and the press. Both these approaches address the problem of obtaining credible information by making diverse sources available and allowing audiences to choose the sources they trust. To avoid outright misinformation and blatant slanting of messages, standards of accuracy could be established for publicly funded messages and clearinghouses.

Conclusions

When a technological choice is controversial, 'the facts' do not speak with one voice. As a consequence, risk messages in a democracy will inevitably conflict. This reality may disturb those who see in 'risk communication' a technology for producing quick consensus, but it should not disturb those whose goal is wise, durable, and widely accepted choices about controversial technologies. As with scientific debate, these outcomes are more likely to result from open airing of disagreements than from efforts to standardize one point of view.

I have shown that there are policy approaches that accept the reality of conflicting risk messages and that put conflict to use to raise the level of public debate. But how realistic is this realism? How much help can these approaches offer, and how politically feasible are they? The approaches that do most to promote learning through conflict are those that bring conflict farther out in the open, clarify its dimensions, and redress the imbalance of resources between interested parties. In the short run, such approaches tend to make debate more heated, and their effectiveness in the long run depends on an appropriately educated attentive segment of the public that will interpret conflicting information rather than throw it aside. They also face a problem of feasibility, because of likely intense opposition from powerful political actors with substantial short-term interests at stake and a desire for quick resolution of conflicts. But the experience of frustration in gaining consensus is teaching some key players in industry and government that a broader view of risk communication is necessary. 6 As they recognize that one-way messages alone do not produce consensus, they may become more willing to support strategies that use conflict as an aid to societal learning. I expect continuing conflict over which approaches should be implemented. Ideally, the societal choices that pose the most difficult and persistent conflicts should be met by the most aggressive efforts to open the debate and use the conflict creatively for learning. If this does not happen, more frustration is in store.

Notes

1. This paper has benefited from the work of the Committee on Risk Perception and Communication of the National Research Council/National Academy of Sciences, which I served in a staff capacity. The views expressed here, however, are entirely my own and do not necessarily represent the positions of the committee, the Council, or the Academy.

2. The frame of reference for this discussion is 'technological' risks, that is, risks generally considered to result from human artifice. As a rule, risks that are considered 'natural' are freer from conflict and therefore less burdened by the problems discussed here. However, this rule is soft because the distinction between the natural and the artificial is socially created and subject to change. In recent years, risks such as those of breathing air, drinking water, and living in the Gangetic flood plain have come to be seen as less 'natural' and more 'technological.'

3. Numerous social-psychological studies indicate that when people believe harms have been done and see themselves as responsible for perpetuating or alleviating them, they are motivated by their own moral judgment to act (see Schwartz, 1977). People seem to apply a similar moral judgment to corporations they see as responsible for hazardous chemicals in the environment (Stern, Dietz, and Black, 1986).

4. Even the term 'risk' is a way of framing issues that may have political implications, as Langdon Winner argues in his essay, 'On Not Hitting the Tar Baby' (Winner, 1986). Winner notes that in U.S. culture, risk connotes a chance taken voluntarily in the expectation of gain, a description that rings false when applied to people faced with the siting of a waste management facility near their homes or subjected to acid precipitation from sources hundreds of miles upwind. He notes that an entirely different connotation is given by referring to such situations as 'hazards' or 'dangers.' With those labels, the presumed goal is to eliminate or minimize them, while with risks the goal is to manage them. It may be significant in this regard that in the anti-regulatory atmosphere of the Reagan administration, the lead environmental policy agency, which is charged by law with 'environmental protection,' redefined its mission as 'risk management.'

5. For a discussion of the distinction between informing and influencing, see National Research Council (1989: 80-93).

6. Learning is evident, for instance, in the consensus conclusions of the recent National Research Council Committee on Risk Perception and Communication (NRC, 1989) and in their generally favorable reception by the government and industry representatives at the roundtable discussion when the report was issued.


References

Ames, B. N. (1979). 'Identifying Environmental Chemicals Causing Mutations and Cancer,' Science 204: 587-593.
Ames, B. N., R. Magaw, and L. S. Gold (1987). 'Ranking Possible Carcinogenic Hazards,' Science 236: 271-280.
Anderson, P. D. (1988). 'Scientific Origins of Incompatibility in Risk Assessment,' Statistical Science 3: 320-327.
Burns, T. R. and R. Ueberhorst (1988). Creative Democracy: Systematic Conflict Resolution and Policymaking in a World of High Science and Technology, New York: Praeger.
Cleveland, W. S. (1986). The Elements of Graphing Data, Belmont, Calif.: Wadsworth.
Covello, V., P. M. Sandman, and P. Slovic (1988). Risk Communication, Risk Statistics, and Risk Comparison: A Manual for Plant Managers, Washington, D.C.: Chemical Manufacturers Association.
Dickson, D. (1984). The New Politics of Science, New York: Pantheon Books.
Dietz, T. and R. W. Rycroft (1987). The Risk Professionals, New York: Russell Sage Foundation.
Dietz, T., P. C. Stern, and R. W. Rycroft (1989). 'Definitions of Conflict and the Legitimation of Resources: The Case of Environmental Risk,' Sociological Forum 4: 47-70.
Freudenburg, W. R. (1988). 'Perceived Risk, Real Risk: Social Science and the Art of Probabilistic Risk Assessment,' Science 242: 44-49.
Gilligan, C. (1982). In a Different Voice: Psychological Theory and Women's Development, Cambridge, Mass.: Harvard University Press.
Gould, L. C., G. T. Gardner, D. R. DeLuca, A. R. Tiemann, L. W. Doob, and J. A. J. Stolwijk (1988). Perceptions of Technological Risks and Benefits, New York: Russell Sage Foundation.
Gould, S. J. (1981). The Mismeasure of Man, New York: Norton.
Habermas, J. (1970). Toward a Rational Society: Student Protest, Science, and Politics, Boston: Beacon Press.
Hohenemser, C., R. W. Kates, and P. Slovic (1983). 'The Nature of Technological Hazard,' Science 220: 378-384.
Holdren, J. P., K. R. Smith, and G. Morris (1979). 'Energy: Calculating the Risks (II),' Science 204: 564-567.
Inhaber, H. (1979). 'Risk with Energy from Conventional and Nonconventional Sources,' Science 203: 718-723.
Kahneman, D., P. Slovic, and A. Tversky (1982). Judgment Under Uncertainty: Heuristics and Biases, Cambridge: Cambridge University Press.
Keller, E. F. (1983). A Feeling for the Organism: The Life and Work of Barbara McClintock, New York: Freeman.
Lave, L. B. (1987). 'Health and Safety Risk Analyses: Information for Better Decisions,' Science 236: 291-295.
Lichtenstein, S., B. Fischhoff, and L. D. Phillips (1982). 'Calibration of Probabilities: The State of the Art to 1980,' in D. Kahneman, P. Slovic, and A. Tversky, eds., Judgment Under Uncertainty: Heuristics and Biases, Cambridge: Cambridge University Press.
Lichtenstein, S., P. Slovic, B. Fischhoff, M. Layman, and B. Combs (1978). 'Judged Frequency of Lethal Events,' Journal of Experimental Psychology: Human Learning and Memory 4: 551-578.
Lowrance, W. W. (1976). Of Acceptable Risk: Science and the Determination of Safety, Los Altos, Calif.: William Kauffmann, Inc.
Marshall, C. (1987). 'Fetal Protection Policies: An Excuse for Workplace Hazard,' Nation 244: 532-534 (April 25).
Miller, N. E. (1969). 'Learning of Visceral and Glandular Responses,' Science 163: 434-445.
National Research Council (1983). Institutional Means for Risk Assessment in the Federal Government, Washington: National Academy Press.
National Research Council (1989). Improving Risk Communication, Washington: National Academy Press.
Nuclear Regulatory Commission (1978). Risk Assessment Review Group Report to the United States Nuclear Regulatory Commission, NUREG/CR-0400, Washington: Author.
Otway, H. J. and D. von Winterfeldt (1982). 'Beyond Acceptable Risk: On the Social Acceptability of Technologies,' Policy Sciences 14: 247-256.
Rosenthal, R. and D. B. Rubin (1982). 'A Simple, General Purpose Display of the Magnitude of Experimental Effect,' Journal of Educational Psychology 74: 166-169.
Schnaiberg, A. (1980). The Environment: From Surplus to Scarcity, New York: Oxford University Press.
Schwartz, S. H. (1977). 'Normative Influences on Altruism,' in L. Berkowitz, ed., Advances in Experimental Social Psychology (Vol. 10, pp. 221-279), New York: Academic Press.
Slovic, P. (1987). 'Perception of Risk,' Science 236: 280-285.
Slovic, P., B. Fischhoff, and S. Lichtenstein (1978). 'Accident Probabilities and Seat Belt Usage: A Psychological Perspective,' Accident Analysis and Prevention 10: 281-285.
Smith, R. J. (1983). 'White House Names New EPA Chief,' Science 220: 35-36.
Smith, R. J. (1986). 'Chernobyl Report Surprisingly Detailed but Avoids Painful Truths, Experts Say,' Washington Post, August 27: A25.
Stern, P. C., T. Dietz, and J. S. Black (1986). 'Support for Environmental Protection: The Role of Moral Norms,' Population and Environment 8: 204-222.
Sun, M. (1984a). 'Acid Rain Report Allegedly Suppressed,' Science 225: 1374.
Sun, M. (1984b). 'OSHA Rule is Curbed by Budget Office,' Science 225: 603-604.
Tversky, A. and D. Kahneman (1973). 'Availability: A Heuristic for Judging Frequency and Probability,' Cognitive Psychology 5: 207-232.
Tversky, A. and D. Kahneman (1981). 'The Framing of Decisions and the Psychology of Choice,' Science 211: 453-458.
von Winterfeldt, D. and W. Edwards (1984). 'Patterns of Conflict About Risky Technologies,' Risk Analysis 4: 55-68.
Whittemore, A. S. (1983). 'Facts and Values in Risk Analysis for Environmental Toxicants,' Risk Analysis 3 (1): 23-33.
Winner, L. (1986). 'On Not Hitting the Tar Baby,' Chapter 8 in L. Winner, The Whale and the Reactor: A Search for Limits in an Age of High Technology, Chicago: University of Chicago Press.