On the News Stand: A Meta-Analysis of the Effect of Pretrial Publicity on Guilt
Sara Marie House

Loyola University Chicago

Introduction

Pretrial publicity (PTP) refers to any news information appearing before a case has gone to trial. It is a cause for concern because it can bias potential jurors. The courts use a variety of remedies to counteract the possible effects of pretrial publicity:

• Continuance – wait for a period of time before beginning the trial

• Extended voir dire – ask more questions of potential jurors

• Admonition – tell jurors to disregard anything learned before trial

• Change of venire – use potential jurors from another jurisdiction

• Change of venue – move the trial to another jurisdiction

Unfortunately, there is little research evidence that these remedies actually work. Many people associated with the justice system have lamented that current research findings do not show how pretrial publicity influences jury verdicts (Chesterman, 1997; Jones, 1991). Even among pretrial publicity researchers, this issue is a matter of debate, and the wide variation in both findings and methodology only fuels that debate further. The current study is one attempt to discover how and when pretrial publicity can bias verdicts: whether PTP still has an effect posttrial, when its effect is strongest, and whether remedies work.

Past Meta-Analysis

The most recent, and only published, meta-analysis was performed by Steblay, Besirevic, Fulero, and Jimenz-Lorente (1999). They used 44 independent effect sizes with an overall sample size of 5,755. Using a fixed-effects model, Steblay et al. found a mean effect size of 0.16 (95% CI = -0.13, 0.46). Several moderators were examined: study design, source of sample, time of verdict, delay between exposure to PTP and verdict, content of PTP, specificity (case-specific versus general), crime, and media (newspaper, video, or both). Each moderator was analyzed individually; all effect sizes were categorized on the moderator of interest, and average effect sizes were calculated at each level. Z-tests were performed on these averages to test whether they were significantly larger than 0. Unfortunately, this method did not allow significance testing of differences between averages, nor did it allow all moderators to be analyzed at once.

The current meta-analysis uses 104 effect sizes from 99 independent samples. Using weighted least squares multiple regression, multiple moderators were analyzed at once to determine each moderator's ability to account for unique variance. In addition, the present meta-analysis used random-effects models for all analyses, which Borenstein et al. (2009) argue tend to be more theoretically justified.

Hypotheses

Timing Hypothesis: Pretrial effect sizes will be larger than posttrial effect sizes.

Length Hypothesis: Longer trial materials will produce smaller effect sizes.

Amount Hypothesis: Studies using larger amounts of PTP will have larger effect sizes.

Strength Hypothesis: Studies using cases with moderate-strength evidence will produce larger effect sizes than studies using weak or strong evidence.

Sample Hypothesis: Studies using student samples will produce results similar to those of studies using non-student samples.

Remedy Hypothesis: Remedies will reduce the effect of PTP on guilt.

Pretesting Hypothesis: Studies using pretesting will have larger posttrial effect sizes than studies not using pretesting.

Other variables were expected to have some effect, but it was unclear from past research and theory how these variables might influence results. These analyses are exploratory.

Results of Pretrial Effect Sizes

Thirty-seven independent effect sizes, with a total N of 7,629, were used. Fixed-effects analysis yielded an average r = 0.301, SE = 0.011, Z = 28.47, p < 0.0001. As expected, there was substantial variation in effect sizes, Q(36) = 457.804, p < 0.0001, I² (the percentage of variance not attributable to sampling error) = 92.14%; therefore, random-effects analyses were used.

Analyses revealed no significant moderators (see Meta-Regression Results table). The random-effects component, however, explained a significant proportion of variance. The grand mean effect size was r = 0.323, SE = 0.042, Z = 7.96, p < 0.001, 95% CI = 0.245, 0.397, 95% PI = -0.223, 0.893, fail-safe N = 114.
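For readers unfamiliar with these statistics, the following is a minimal sketch of how a fixed-effects mean, Q, and I² can be computed on the Fisher-z scale. The correlations and sample sizes below are hypothetical placeholders, not the 37 samples analyzed here.

```python
# Minimal sketch of a fixed-effects mean with Q and I^2 heterogeneity statistics.
# Hypothetical inputs: per-study correlations and sample sizes (not the real data).
import numpy as np
from scipy import stats

r = np.array([0.45, 0.20, 0.35, 0.10, 0.30])   # per-study Pearson r (placeholders)
n = np.array([120, 80, 200, 150, 60])          # per-study sample sizes (placeholders)

z = np.arctanh(r)            # Fisher z-transform
v = 1.0 / (n - 3)            # sampling variance of z for a correlation
w = 1.0 / v                  # fixed-effects (inverse-variance) weights

z_bar = np.sum(w * z) / np.sum(w)        # fixed-effects mean on the z scale
se = np.sqrt(1.0 / np.sum(w))
Z_stat = z_bar / se
p = 2 * stats.norm.sf(abs(Z_stat))

Q = np.sum(w * (z - z_bar) ** 2)         # homogeneity statistic, df = k - 1
df = len(r) - 1
I2 = max(0.0, (Q - df) / Q) * 100        # % of variance not due to sampling error

print(f"mean r = {np.tanh(z_bar):.3f}, SE = {se:.3f}, Z = {Z_stat:.2f}, p = {p:.4f}")
print(f"Q({df}) = {Q:.2f}, I^2 = {I2:.1f}%")
```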

Results of Posttrial Effect Sizes

Sixty-seven independent effect sizes, with a total N of 10,545, were used. Fixed-effects analysis yielded an average r = 0.163, SE = 0.009, Z = 17.47, p < 0.0001. Once again, there was substantial variation in effect sizes, Q(66) = 378.423, p < 0.0001, I² = 82.29%.

Analyses revealed significant moderators (see Meta-Regression Results table). The grand mean effect size was r = 0.163, SE = 0.023, Z = 7.14, p < 0.001, 95% CI = 0.118, 0.207, 95% PI = -0.222, 0.550, fail-safe N = 178. The posttrial mean effect size was smaller than the pretrial mean effect size, and the two 95% confidence intervals do not overlap, so the two values can be considered significantly different at the 0.05 level. This supports the timing hypothesis.
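The difference between the two grand means can also be checked with an approximate z-test on the reported estimates. This is a minimal sketch that assumes the pretrial and posttrial estimates are independent; note that non-overlapping 95% confidence intervals are a more conservative criterion than this test.

```python
# Approximate z-test for the difference between the pretrial and posttrial
# grand mean effect sizes, using the values reported above.
from math import sqrt
from scipy.stats import norm

r_pre, se_pre = 0.323, 0.042      # pretrial grand mean and SE
r_post, se_post = 0.163, 0.023    # posttrial grand mean and SE

z = (r_pre - r_post) / sqrt(se_pre**2 + se_post**2)
p = 2 * norm.sf(abs(z))
print(f"z = {z:.2f}, p = {p:.4f}")  # about z = 3.3, p < 0.001 with these values
```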

Methods

Inclusion/Exclusion Criteria

Any study providing data on a statistical relationship between pretrial publicity (displayed through a mass media source) and at least one measure of guilt, which could be either dichotomous (not guilty/guilty) or continuous, was considered for meta-analysis. Adequate information about sample characteristics and how the sample was obtained needed to be provided.

Studies examining the effect of positive PTP were not included in the present meta-analysis.

Studies could take place in the United States, Canada, or Great Britain. Due to these criteria, all studies obtained were in English.

Moderators

• Source of sample

• Method of assignment to groups

• Information contained in the PTP

• Type of trial – Criminal, Civil

• Crime(s) – Homicide, sexual assault, other violent, other nonviolent

• Type of media

• Stimuli provided to control group

• Remedies

• Strength of evidence – inferred in one of three ways: 1) a statement by the author about case strength, 2) the conviction rate from a pilot group, or 3) the conviction rate from the control group; less than 35% = weak, 35 to 69% = moderate, and 70% or higher = strong (see the sketch after this list)

• Characteristics of participants (gender, race, etc.) – dropped due to missingness

• Characteristics of defendant – dropped due to missingness

• Characteristics of victim – dropped due to missingness
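To make the strength-of-evidence rule above concrete, here is a minimal sketch of the conviction-rate thresholds as a function; the function name is illustrative and not taken from the coding materials.

```python
# Hypothetical helper mirroring the coding rule: conviction rate -> strength category.
def code_evidence_strength(conviction_rate: float) -> str:
    """Conviction rate is a proportion between 0 and 1 (e.g., from a pilot or control group)."""
    if conviction_rate < 0.35:
        return "weak"
    elif conviction_rate < 0.70:
        return "moderate"
    else:
        return "strong"

print(code_evidence_strength(0.55))  # -> "moderate"
```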

Coding

Coding of reports was performed by the study author. Interrater reliability is currently being assessed; no information is available at this time.

Statistical Methods

Effect sizes were computed as Pearson's r and converted using Fisher's z-transformation prior to analysis. Results were then converted back to Pearson's r.
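These are the standard transformation formulas; a minimal sketch in Python (not code from the study):

```python
# Fisher's z-transformation of Pearson's r and the back-conversion.
import numpy as np

def r_to_z(r):
    return np.arctanh(r)          # z = 0.5 * ln((1 + r) / (1 - r))

def z_to_r(z):
    return np.tanh(z)

r = 0.30
z = r_to_z(r)                     # analyses are run on this scale; var(z) = 1 / (n - 3)
print(round(z, 4), round(float(z_to_r(z)), 4))  # 0.3095 0.3
```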

If a study had more than one group or more than one measure of the dependent variable, the information was combined to create a single effect size. Variances for these averaged effect sizes were recalculated using the procedure provided by Borenstein et al. (2009).
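Borenstein et al.'s (2009) procedure for a composite of correlated outcomes requires the correlation between the outcome measures, which primary studies rarely report. Below is a minimal sketch for two outcomes, with that correlation entered as an assumed placeholder value.

```python
# Composite of two correlated outcomes from the same sample (after Borenstein et al., 2009).
# rho is the correlation between the two outcome measures; here it is an assumed
# placeholder because primary studies rarely report it.
import numpy as np

def combine_two_outcomes(y1, v1, y2, v2, rho=0.5):
    """Return the mean effect (on the Fisher-z scale) and its variance."""
    y = (y1 + y2) / 2.0
    v = 0.25 * (v1 + v2 + 2.0 * rho * np.sqrt(v1 * v2))
    return y, v

y, v = combine_two_outcomes(0.25, 0.010, 0.35, 0.012, rho=0.5)
print(round(y, 3), round(v, 4))  # 0.3 0.0082
```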

Time of measurement was expected to have an effect, so these values were not averaged together; effect sizes measured pretrial were analyzed separately from those measured posttrial.

Random-effects models were used in both cases. A random-effects model assumes that there is not one true effect size but rather a distribution of effect sizes, based on a variety of study characteristics (Borenstein et al., 2009; Cooper, 2010; Lipsey & Wilson, 2001). In order to incorporate multiple moderators into the analyses, weighted least squares regression (also referred to as meta-regression; Borenstein et al., 2009) with method of moments estimation was performed (based on the SPSS macros provided by Wilson, 2002); the procedure allows for a random intercept but uses fixed slopes. This analysis enters the moderators first, then computes the random error component from the remaining variance (Lipsey & Wilson, 2001).
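A minimal sketch of this general approach on the Fisher-z scale: fit weighted least squares with inverse-variance weights, estimate the between-study variance (tau²) from the residual Q by the method of moments, and refit with random-effects weights. This is a generic illustration with placeholder data, not the Wilson (2002) macro itself.

```python
# Mixed-effects meta-regression sketch: WLS with a method-of-moments estimate of
# the between-study variance (tau^2). Placeholder data, not the actual studies.
import numpy as np

z   = np.array([0.10, 0.25, 0.40, 0.15, 0.35, 0.05])    # Fisher-z effect sizes
v   = np.array([0.02, 0.01, 0.015, 0.03, 0.012, 0.02])  # sampling variances
mod = np.array([0, 0, 1, 0, 1, 1])                      # one dummy-coded moderator

X = np.column_stack([np.ones_like(z), mod])             # design matrix with intercept

def wls(X, y, w):
    W = np.diag(w)
    cov_b = np.linalg.inv(X.T @ W @ X)                  # covariance of coefficients
    b = cov_b @ X.T @ W @ y
    return b, cov_b

# Step 1: fixed-effects (inverse-variance) weights
w_fe = 1.0 / v
b_fe, cov_fe = wls(X, z, w_fe)
resid = z - X @ b_fe
Q_resid = np.sum(w_fe * resid**2)                       # residual heterogeneity
df = len(z) - X.shape[1]

# Step 2: method-of-moments tau^2 from the residual Q
W = np.diag(w_fe)
c = np.trace(W) - np.trace(cov_fe @ X.T @ W @ W @ X)
tau2 = max(0.0, (Q_resid - df) / c)

# Step 3: re-estimate slopes with random-effects weights
w_re = 1.0 / (v + tau2)
b_re, cov_re = wls(X, z, w_re)
se_re = np.sqrt(np.diag(cov_re))

print("tau^2 =", round(tau2, 4))
print("coefficients:", np.round(b_re, 3), "SEs:", np.round(se_re, 3))
```

In the actual analyses, this weighting scheme corresponds to the random-intercept, fixed-slopes model described above.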

Meta-Regression Results

Variable                                             Pretrial B (95% CI)       Posttrial B (95% CI)
Constant                                             0.642 (-0.174, 1.457)     0.013 (-0.457, 0.484)
Sample                                              -0.064 (-0.194, 0.065)     0.006 (-0.060, 0.072)
Media Type                                          -0.024 (-0.370, 0.322)    -0.033 (-0.176, 0.111)
PTP Information                                     -0.067 (-0.309, 0.175)     0.142 (-0.032, 0.316)
Control Group Instructions                          -0.045 (-0.184, 0.094)     0.033 (-0.013, 0.079)
Homicide                                             0.041 (-0.169, 0.250)     0.117 (-0.013, 0.247)
Sex Offense                                         -0.003 (-0.227, 0.221)    -0.178* (-0.325, -0.031)
Other Violent Crime                                 -0.137 (-0.366, 0.093)    -0.056 (-0.161, 0.049)
Other Non-Violent Crime                              0.023 (-0.241, 0.288)    -0.103 (-0.297, 0.090)
Delay                                               -0.053 (-0.392, 0.286)    -0.055 (-0.197, 0.086)
Strength of evidence: moderate vs. weak and strong   --                        0.198* (0.069, 0.327)
Strength of evidence: strong vs. weak and moderate   --                        0.034 (-0.191, 0.259)
Deliberation                                         --                        0.109 (-0.027, 0.246)
Trial presentation format                            --                       -0.030* (-0.059, -0.0004)
Judicial admonition                                  --                        0.057 (-0.068, 0.182)
Judicial instructions                                --                       -0.107 (-0.256, 0.043)
Voir dire                                            --                       -0.061 (-0.271, 0.150)
Pretesting                                           --                       -0.080 (-0.201, 0.042)

* 95% CI excludes 0. -- = moderator included in the posttrial model only.

Strength of Evidence k N r SE 95% CI

Weak 26 2977 0.086* 0.031 0.011, 0.159

Moderate 37 6162 0.213* 0.031 0.154, 0.270

Strong 4 1406 0.159 0.089 -0.017, 0.325

Crime k N r SE 95% CI

Civil 3 322 0.238* 0.118 0.013, 0.440

Homicide 13 1900 0.290* 0.055 0.193, 0.381

Other Non-Violent 7 1047 0.083 0.071 -0.052, 0.215

Other Violent 16 2943 0.188* 0.045 0.098, 0.275

Sex Offense 9 1349 0.078 0.063 -0.048, 0.201

Combination of Homicide and Other Crimes 16 2948 0.105* 0.045 0.015, 0.195

Unknown Combination 4 202 0.159 0.110 -0.053, 0.357

Homogeneity Statistics (Pretrial Verdict)      Q       df    p
Effect of moderators                           3.89     9    0.92
Variance after moderators                     22.63    27    0.70
Overall variance after random-effects         26.52    36    0.88

Homogeneity Statistics (Posttrial Verdict)     Q       df    p
Effect of moderators                          25.16    17    0.006
Variance after moderators                     52.36    49    0.35
Overall variance after random-effects         87.52    66    0.04

Remedies. Finally, exploratory analyses were performed on remedies. Though none of the remedy variables showed significant effects in the meta-regression, that analysis did not allow testing of what Bruschke and Loges (2004) call the cumulative remedy hypothesis, which states that, while a single remedy may show no significant effect, the effect of PTP may be reduced through the use of a combination of remedies. Average effect sizes were therefore computed based on the number of remedies used (ranging from 0 to 4) as well as by the combination of remedies used.
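A minimal sketch of this subgrouping step: group studies by the number of remedies present and compute a weighted mean per group. The data are placeholders, and a simple fixed-effects average is used here rather than the random-effects models behind the values reported below.

```python
# Sketch of the cumulative-remedy subgrouping: weighted mean effect size by the
# number of remedies present in each study. Placeholder data only.
import numpy as np
from collections import defaultdict

# (Fisher-z effect, variance, number of remedies 0-4) for some hypothetical studies
studies = [(0.20, 0.02, 0), (0.15, 0.01, 1), (0.30, 0.03, 1),
           (0.10, 0.02, 2), (0.25, 0.015, 2), (0.05, 0.04, 3)]

groups = defaultdict(list)
for z, v, n_remedies in studies:
    groups[n_remedies].append((z, v))

for n_remedies in sorted(groups):
    z = np.array([e for e, _ in groups[n_remedies]])
    w = 1.0 / np.array([v for _, v in groups[n_remedies]])
    mean = np.sum(w * z) / np.sum(w)         # inverse-variance weighted mean (z scale)
    se = np.sqrt(1.0 / np.sum(w))
    print(f"{n_remedies} remedies: k = {len(z)}, mean r = {np.tanh(mean):.3f}, SE = {se:.3f}")
```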

Remedies                                              k     N      r        SE      95% CI
None
  Overall                                             12    1962   0.159*   0.055    0.054, 0.260
1 remedy
  Admonition                                          10    1101   0.152*   0.063    0.034, 0.266
  Delay                                                7     964   0.106    0.077   -0.040, 0.248
  Deliberation                                         3     464   0.273*   0.105    0.073, 0.452
  Instructions                                         6    1187   0.157*   0.077    0.011, 0.296
  Overall                                             26    3716   0.142*   0.032    0.070, 0.214
2 remedies
  Admonition + Delay                                   8    1160   0.169*   0.063    0.043, 0.291
  Admonition + Deliberation                            1      80   0.139    0.197   -0.244, 0.484
  Admonition + Instructions                            1     202   0.323    0.176   -0.012, 0.593
  Delay + Deliberation                                11    1911   0.166*   0.055    0.054, 0.274
  Delay + Instructions                                 1     168  -0.035    0.179   -0.369, 0.308
  Deliberation + Voir Dire                             1     156   0.111    0.182   -0.239, 0.436
  Overall                                             23    3677   0.175*   0.045    0.099, 0.248
3 remedies
  Admonition + Delay + Deliberation                    3     372   0.223    0.122   -0.013, 0.435
  Delay + Deliberation + Voir Dire                     1      68   0.228    0.205   -0.167, 0.559
  Overall                                              4     440   0.248*   0.100    0.058, 0.420
4 remedies
  Admonition + Delay + Deliberation + Instructions     1     702   0.246    0.167   -0.076, 0.521
  Admonition + Delay + Deliberation + Voir Dire        1      48   0.000    0.221   -0.407, 0.407
  Overall                                              2     750   0.160    0.134   -0.100, 0.397

Trial Presentation Method k N r SE 95% CI

Brief Summary 15 2234 0.147* 0.045 0.053, 0.239

Transcript 11 1603 0.171* 0.055 0.058, 0.279

Audio 7 1145 0.154* 0.071 0.020, 0.283

Video 25 4234 0.194* 0.032 0.123, 0.263

Mock Trial 3 144 0.037 0.138 -0.227, 0.295

Actual Trial 6 1185 0.095 0.084 -0.065, 0.250

References

Sources used in the meta-analysis are available at: http://saramhouse.bravehost.com/research/ptpmetaanalysis.html

Borenstein, M., Hedges, L.V., Higgins, J.P.T., & Rothstein, H.R. (2009). Introduction to meta-analysis. West Sussex, UK: John Wiley & Sons.
Bruschke, J., & Loges, W.E. (2004). Free press vs. fair trials: Examining publicity's role in trial outcomes. Mahwah, NJ: Lawrence Erlbaum.
Chesterman, M. (1997). OJ and the dingo: How media publicity relating to criminal cases tried by jury is dealt with in Australia and America. The American Journal of Comparative Law, 45, 109-147.
Cohn, L.D., & Becker, B.J. (2003). How meta-analysis increases statistical power. Psychological Methods, 8, 243-253. doi:10.1037/1082-989X.8.3.243
Cooper, H. (2010). Research synthesis and meta-analysis: A step-by-step approach (4th ed.). Thousand Oaks, CA: SAGE Publications.
Curtner, R., & Kassier, M. (2005). "Not in our town": Pretrial publicity, presumed prejudice, and change of venue in Alaska: Public opinion surveys as a tool to measure the impact of prejudicial pretrial publicity. Alaska Law Review, 22, 255-292.
Dixon, T.L., & Linz, D. (2002). Television news, prejudicial pretrial publicity, and the depiction of race. Journal of Broadcasting & Electronic Media, 46, 112-136.
Hedges, L.V., & Olkin, I. (1985). Statistical methods for meta-analysis. Orlando, FL: Academic Press.
Imrich, D.J., Mullin, C., & Linz, D. (1995). Measuring the extent of prejudicial pretrial publicity in major American newspapers: A content analysis. Journal of Communication, 45, 94-117.
Jones, R.M. (1991). The latest empirical studies on pretrial publicity, jury bias, and judicial remedies: Not enough to overcome the first amendment right of access to pretrial hearings. American University Law Review, 40, 841-.
Lipsey, M.W., & Wilson, D.B. (2001). Practical meta-analysis. Thousand Oaks, CA: SAGE Publications.
Steblay, N.M., Besirevic, J., Fulero, S.M., & Jimenz-Lorente, B. (1999). The effects of pretrial publicity on juror verdicts: A meta-analytic review. Law and Human Behavior, 23, 219-235. doi:10.1023/A:1022325019080
Studebaker, C.A., & Penrod, S.D. (2005). Pretrial publicity and its influence on juror decision making. In N. Brewer and K.D. Williams (Eds.), Psychology and law: An empirical perspective (pp. 254-275). New York: Guilford Press.
Wilson, D.B. (2002). Meta-analysis macros for SAS, SPSS, and Stata. Retrieved September 9, 2009, from http://mason.gmu.edu/~dwilsonb/ma.html

Exploratory Analyses of Posttrial Effect Sizes

Strength of evidence. Though the 95% confidence intervals of the weak and moderate categories overlap, the 90% confidence intervals do not, meaning that these two values can be considered significantly different at the 0.10 level. As expected, studies using moderate case evidence had the largest effect.

Crime. Crime also appears to have a strong influence on effect sizes. Specifically, non-violent crimes and crimes involving sexual assault produce smaller effect sizes. There are two possible explanations for the smaller effect sizes in sexual assault cases: 1) PTP publishes negative information about the victim, and 2) jurors are unsympathetic to victims of sexual assault. Since sexual assault was not a significant moderator pretrial, but was posttrial, the evidence points to explanation 2.

Trial presentation method. Finally, since trial presentation method was found to be a significant moderator in the regression, average effect sizes were computed at each level of presentation. Studies using mock trials or actual trials did not find effect sizes significantly larger than 0, though these effect sizes are also based on only a few studies, and could suffer from low power.
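The low-power caveat can be made concrete by asking how large a mean effect each of these subgroups could have detected given its reported standard error; a minimal sketch using a two-sided test at alpha = 0.05:

```python
# Smallest mean effect detectable at p < .05 (two-sided) given each subgroup's reported SE.
# An observed mean must exceed roughly 1.96 * SE to reach significance.
from scipy.stats import norm

subgroups = {"Mock Trial": 0.138, "Actual Trial": 0.084}  # reported SEs from the table
z_crit = norm.ppf(0.975)                                  # about 1.96

for name, se in subgroups.items():
    print(f"{name}: minimum detectable mean r is about {z_crit * se:.2f}")
# Mock Trial -> about 0.27; Actual Trial -> about 0.16; both are near or above the
# observed subgroup means (0.037 and 0.095), consistent with the low-power caveat.
```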