Poor Usability Is
Undermining Disclosure:
A Report on the Fifty States'
Campaign Finance Websites
Michael J. Malbin
Justin A. Koch
PRACTICAL AND OBJECTIVE RESEARCH FOR DEMOCRACY.
The Campaign Finance Institute is the nation’s pre-eminent think tank
for objective, non-partisan research on money in politics in U.S.
federal and state elections. CFI’s original work is published in scholarly
journals as well as in forms regularly used by the media and policy
making community. Statements made in its reports do not necessarily
reflect the views of CFI's Trustees or financial supporters.
To see all of CFI’s reports and analysis on money in
politics visit our website at www.CFInst.org
BOARD OF TRUSTEES: Anthony J. Corrado – Chair
F. Christopher Arterton
Betsey Bayless
Vic Fazio
Donald J. Foley
George B. Gould
Kenneth A. Gross
Ruth S. Jones
Michael J. Malbin
Ronald D. Michaelson
Ross Clayton Mulford
Phil Noble
ACADEMIC ADVISORS: Robert G. Boatright
Richard Briffault
Guy-Uriel Charles
Diana Dwyre
Erika Franklin Fowler
Michael Franz
Donald P. Green
Keith Hamm
Marjorie Randon Hershey
David Karpf
Robin Kolodny
Raymond J. La Raja
Thomas E. Mann
Lloyd Hitoshi Mayer
Michael G. Miller
Costas Panagopoulos
Kay Lehman Schlozman
ABOUT THE AUTHORS
Michael J. Malbin, co-founder and Executive Director of the Campaign Finance Institute (CFI), is also
Professor of Political Science at the University at Albany, State University of New York. Before SUNY he
was a reporter for National Journal, a resident fellow at the American Enterprise Institute and held
positions in the House of Representatives and Defense Department. Concurrent with SUNY, he has been
a member of the National Humanities Council, a visiting professor at Yale University and a guest scholar
at The Brookings Institution. His co-authored books include The Day after Reform: Sobering Campaign
Finance Lessons from the American States (1998); Life after Reform: When the Bipartisan Campaign
Reform Act Meets Politics (2003); The Election after Reform: Money, Politics and the Bipartisan
Campaign Reform Act (2006) and Vital Statistics on Congress.
Justin Koch is a full-time Research Analyst at CFI, and Ph.D. student in the Department of Government
at Georgetown University. His dissertation in progress is on networking in political party financing.
Previously, Justin worked in political fundraising and interned for Stephen Timms, MP in the UK
Parliament, as well as for the U.S. House Committee on Education and Labor.
ACKNOWLEDGMENTS
The Campaign Finance Institute gratefully acknowledges The Democracy Fund,
The William and Flora Hewlett Foundation, The John D. and Catherine T. MacArthur
Foundation, The Mertz Gilmore Foundation, The Rockefeller Brothers Fund, and the Smith
Richardson Foundation for their support of CFI’s work.
The authors thank the Campaign Finance Institute’s Michael Parrott and Brendan Glavin
for their help with using Mechanical Turk as well as in analyzing and shaping the survey
data.
Finally, the authors wish to acknowledge the Council of Government Ethics Laws (COGEL). An
initial draft of this paper was presented at COGEL’s annual conference in December 2015 in
Boston, Massachusetts.
Poor Usability Is Undermining Disclosure:
A Report on the Fifty States' Campaign Finance Websites
By Michael J. Malbin and Justin A. Koch
CONTENTS
EXECUTIVE SUMMARY 2
INTRODUCTION 3
RESEARCH DESIGN 5
RESULTS AND ANALYSIS 9
CONCLUSIONS 17
RECOMMENDATIONS 19
WORKS CITED 21
APPENDIX 1: TABLES 22
APPENDIX 2: FULL SURVEY 34
APPENDIX 3: STATE WEBSITE URLS 42
EXECUTIVE SUMMARY
One of the most important arguments made in favor of campaign finance disclosure is
that the information can be useful to voters. But just because a candidate or political
committee sends information to the government does not mean that the information gets out
to voters effectively. Disclosure systems involve long chains of discrete steps that begin with
legal requirements and end with the informational product’s end use and consequences. Most
of the focus on disclosure in recent years understandably has been about the legal
requirements. As important as these requirements may be, the promise cannot be achieved
unless the disclosed information is put into digestible formats by the agencies that implement
the laws.
This report concentrates on the other end of the policy chain – on the ability of end
users to gain basic information from the fifty states’ campaign finance websites. It looks at the
experience not of the power user – the person able to download masses of data and analyze
them – but of the non-specialist, the person most like the voters whom disclosure systems
were intended to benefit. We recruited nearly 2,000 experienced Internet users through
Amazon’s Mechanical Turk to make 5,000 discrete, randomly assigned visits to the fifty states’
campaign finance websites. We found that even though many states have improved their
online disclosure systems, the actual usability of the data has often been neglected. In almost a
full third of those visits, our participants were not able to complete a set of basic data searches
within 10 minutes. They fared only slightly better in terms of accuracy, completing an average
of about 54% of their tasks correctly. However, there were important and wide-ranging
disparities across states. A small number performed well, but a majority had either mixed
results or performed poorly across the board.1
These results should be of concern to state policymakers and reformers. In the final
section of this paper, we offer eleven bullet points with practical suggestions for improving
states’ websites. These recommendations are not meant to be exhaustive. More important is
the spirit that guides them. One of the fundamental purposes of disclosure is to inform citizens.
Before the internet, almost all of this had to be accomplished through intermediaries. The
internet has made it possible for agencies to make useful information available directly. Our
project has shown that most state agencies fall far short of best practices. However, a few
states consistently did well. The fact that they did means that others can too. To do so, the
states need to learn from each other. They also need to open themselves up to the perspectives
of citizens who are not campaign finance or political professionals. Improvement will only come
when their voices and needs get the attention they deserve.
1 See Appendix 1 for detailed comparative scores. For a list of each state’s complete scores, see
http://cfinst.org/state/research/usability_stateresults.aspx
INTRODUCTION
One of the most important arguments made in favor of campaign finance disclosure is
that the information can be useful to voters. The Supreme Court has repeatedly supported the
government’s interest in providing voters with information about election-related spending,
including in Citizens United v. FEC (2010). The potential uses for this information will be varied:
Voters might use disclosure to assess the connections between campaign contributors and
office holders (Malbin and Gais 1998). Or, knowing the donors behind an independent
advertising campaign might help the voters decide whether to believe the advertisement’s
claims (Dowling and Wichowsky 2013 and 2015). We will not adjudicate whether or to what
extent voters in fact use disclosure information. Our project instead focuses on a key
prerequisite for the voters’ very ability to use this information. Voters cannot factor campaign
finance information into decision-making if they cannot find it or make sense of it.
Simply mandating disclosure does not ensure that the information will be readily
available. In a 1998 book, The Day after Reform, Malbin and Gais outlined five criteria that
must be fulfilled for disclosure to work: (1) candidates and political organizations must file their
reports accurately, (2) these reports must include information about activities and relationships
that are important to voters, (3) the reports must be available in a useful format and at an
accessible location, (4) interested, knowledgeable people must read these reports and then
make the information available to voters, and finally (5) the voters must be able and willing to
use this information to make a voting decision (Malbin and Gais, 1998:36).
This report investigates the extent to which the third criterion – that reports are available
in a useful format and accessible location – is met across fifty state disclosure websites. Not too
long ago, an individual interested in state campaign finance data would have to view the
reports at a state campaign finance office, or ask for copies, or rely on intermediaries
(policymakers, academics and journalists) to interpret the material for them. Because most
people do not live near their state agency’s office, direct access was all but impossible. This
changed with the internet, which is perhaps the most accessible “location” there is. The Pew
Research Center has been documenting internet access in the U.S. with nationally
representative surveys since 2000. In 2015, 84% of American adults said they used the internet
or e-mail at least “occasionally,” up from 52% in 2000 (Pew 2015). It is no surprise, therefore,
that every state maintains some form of an online campaign finance database.
But having an internet presence is not the whole story. The question remains whether
the information is available in a useful format. The answer will be different not only for
different states but for different types of users. The above criteria suggested two mechanisms
through which the public might learn. The first was through intermediaries. We can think of
policymakers and academics as the “power users” of disclosure data — the ones most likely to
download disclosure data in bulk, perform statistical analyses, and create graphical
representations. All of this can contribute toward informing the public. For these
intermediaries, agencies will need to make sure there is accurate information available for bulk
downloads, coupled with clear definitions of the terms used to define each of the data fields.
The power users, however, are less likely to perform simple searches such as looking up how
much a given Governor raised, or who was the Governor’s biggest donor.
The remaining intermediaries – journalists and candidates – are likely to engage in some
mix of downloads and simple searches. And simple searches will be the primary mechanism
used by the general public. Maintaining access for these users poses special challenges.
Whereas policymakers, academics, and journalists are likely to have enough familiarity with
government websites and campaign finance terms to navigate through disclosure data
effectively, the typical non-specialist is much less likely to have that information. Indeed, many
of the open comments we received suggested that our participants had difficulty understanding
basic campaign finance terminology. They found the terms on a website to be either
“somewhat confusing” or “very difficult to understand” in 46% of our submissions. Surprisingly,
educational background was not a statistically significant predictor of a participant’s self-
reported difficulty in understanding campaign finance terms (p=.25). This highlights the
difficulty of designing websites most citizens can understand.
The most recent study to look at disclosure website usability was published eight years
ago – the equivalent of generations in website design. The “Grading State Disclosure” studies
were conducted by the Campaign Finance Disclosure Project2 in 2003, 2004, 2005, 2007, and
2008 to engage in a “comprehensive assessment of state campaign finance disclosure laws and
practices.”3 For our purposes, the most relevant aspect was the “online and contextual
usability” portion, in which the authors recruited undergraduate students from the University
of California, Los Angeles, to test the states’ websites in an experimental setting. Newer
platforms have made it possible to improve on these older studies by recruiting from an online
population that is more diverse than an undergraduate student sample (Peterson 2001;
Berinsky et al. 2012).
Other studies of state level campaign finance disclosure have been conducted by both
the National Institute for Money in Politics (NIMSP) and the Center for Public Integrity (CPI).4
While these reports examine a number of important features of the disclosure laws across the
states, we will present evidence later to show that usability is a separate dimension that
requires more attention from reformers and policymakers. States with more usable websites do
not necessarily have stronger disclosure laws. States with websites that are reasonably
accessible may score poorly on other aspects of electronic disclosure, and vice versa. We also
learned that accessibility is not the same thing as high-end design. Some of the most accessible
websites had “old fashioned” looks about them. That does not mean that an accessible website
2 The Campaign Finance Disclosure Project was a project of the UCLA School of Law, the Center for Governmental
Studies, and the California Voter Foundation.
3 http://campaigndisclosure.org/gradingstate/execsum.html
4 The NIMSP study ranks states for the quality of their contribution disclosure laws in 2015, whereas the CPI study
ranks states based on an index of accountability, political financing, lobbying disclosure, and other factors. Links
here: CPI (http://www.publicintegrity.org/2015/11/09/18822/how-does-your-state-rank-integrity); NIMSP
(http://www.followthemoney.org/research/institute-reports/scorecard-essential-disclosure-requirements-for-contributions-to-state-campaigns-2016)
has to look unattractive, or perform poorly for power users. As we shall learn, these dimensions
are independent of each other and need to be treated as such.
RESEARCH DESIGN
To test the states’ websites, we recruited participants using Amazon’s Mechanical Turk
software. In the end, we recruited 1,905 unique participants who accounted for 5,000 survey
submissions. During the fall of 2015, participants were told to complete no more than five
states. The states were assigned randomly to produce approximately 100 submissions per state.
Because we were concerned that accuracy would improve, and completion times decrease, as a
person did more states, we also kept track of the order in which the participants completed
their states and how many states they completed overall. When our analysis showed that both
made little difference, we removed the five-state limitation for the final three states, which
were completed in early 2016 because those states held elections in 2015.5 Participation was also
restricted to persons who lived in the United States and who had a good previous track record
on Mechanical Turk.6 Each participant was paid $1.00 to complete an individual survey. Since it
took the recruits, on average, a little more than 15 minutes to complete the entire survey, this
amounted to an hourly rate of about $4. While much lower than national or state minimum
wages, it is considered a fair wage by Mechanical Turk participants.7
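The report does not publish its assignment code, but the procedure described above can be sketched schematically in Python. Everything here is illustrative: the state list is truncated, and the function names and the shuffled-pool approach are our assumptions, not the project's actual implementation.

```python
import random

# Illustrative only: the full study used all fifty states and roughly
# 100 submissions per state (5,000 total); names and caps here are ours.
STATES = ["Alabama", "Alaska", "Arizona"]
SUBMISSIONS_PER_STATE = 100
MAX_STATES_PER_PARTICIPANT = 5  # the cap used for most of the 2015 fielding


def build_assignment_pool(states, per_state):
    """One slot per required submission, shuffled so assignment is random."""
    pool = [s for s in states for _ in range(per_state)]
    random.shuffle(pool)
    return pool


def assign_state(pool, states_already_done):
    """Give a participant the next state they have not yet reviewed.

    Returns None when the participant has hit the per-participant cap
    or no unseen state remains in the pool for them.
    """
    if len(states_already_done) >= MAX_STATES_PER_PARTICIPANT:
        return None
    for i, state in enumerate(pool):
        if state not in states_already_done:
            return pool.pop(i)
    return None
```

A design point worth noting: drawing from a fixed pool of slots, rather than sampling states independently, is what keeps each state close to its 100-submission target while the cap prevents any one participant from dominating a state's sample.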
The decision to use Mechanical Turk, which launched in 2005, was a departure from the
usability section of the Grading State Disclosure (GSD) project. The GSD project (like many
academic studies) administered its test in person to undergraduate students in a laboratory
setting. Mechanical Turk is administered online, with participants engaged at a time and place
of their choosing. In this respect, it more closely resembles the way citizens would normally use
state disclosure websites. While Mechanical Turk may present some limitations (see Fort et al.
2011), it provides a convenient online sample that is more representative than an in-person
student sample, particularly with respect to the recruits’ age and education levels. It is also
much less costly than other nationally representative samples recruited online, such as those
recruited through the American National Election Studies (Berinsky et al. 2012, Table 3).
5 We were able to track each user’s overall number of submissions through their unique Mechanical Turk ID. We
were initially interested in whether users would get better at searching for data as they went along, but we did
not find much relationship in either direction. Bivariate linear regression results indicate that submitting one
additional survey is associated with a roughly 1 percentage point increase in overall accuracy across users.
Submitting one additional survey was also associated with completing tasks in less time, but this result was
substantively insignificant. Overall, there was no difference in the average survey experience of users across
state assignments (see Table A1 in Appendix 2).
6 We required that each user have already completed a minimum of 1,000 tasks on Mechanical Turk with at least
an 85% approval rate.
7 Our view that this was considered a fair wage was corroborated by a consensus of positive reviews we received
on “Turkopticon,” a database of Mechanical Turk reviews maintained by the University of California, San Diego.
The Mechanical Turk recruits did not perfectly mirror the U.S. population, as reported
more fully below. The disparity was particularly noticeable with respect to age because
Mechanical Turk attracts a much lower proportion of senior citizens than is present in the
general population. However, this should not affect the results of this project. Because users
were assigned to states randomly, unrepresentative characteristics should be distributed more
or less evenly across state assignments. On the demographic measures we included in our
survey, this proved to be the case (see below). Given the demographic balance across state
assignments, we can say with measured confidence that the differences in usability and
accuracy among the states were due to the states’ websites, and not to differences among the
participants assigned to each state.
Our questionnaire mirrored the likely knowledge and attention span of a citizen who
might be looking for basic campaign finance information. The forms were brief and the
questions were simple. Pretests were administered, and questions were redesigned in light of
comments from the pretests’ users. We used Google Forms to administer the survey. This is a
form of free software with which online participants were likely to be familiar and
comfortable.8
The survey contained three sections: (1) demographic questions, (2) campaign finance
questions to be answered using information on the states’ websites, and (3) users’ evaluations
of the websites. The demographic questions were straightforward ones about gender, age,
state of residence and level of education. We used these to verify that the random assignment
process yielded a balanced sample across states. We would also have been able to control for
these characteristics in regression analyses if our randomization process had failed to create
demographic balance across the treatment groups.
Campaign finance information: The second segment asked participants to look up and
report on some basic campaign finance information:
• Respondents were asked at this point to begin timing themselves. They were told that
we would ask them to report how long it took to complete their tasks. We also said that
if it took them more than ten minutes, they should answer “Could not find” on any
remaining tasks and move on to the evaluation section.
• The participants were given links to the states’ disclosure websites9 as well as a list of
current Governors from the National Governors Association.
• They were asked when the most recent gubernatorial election occurred in their assigned
states.
• They were then asked “how much money did the Governor of your assigned state
receive in total contributions in his or her most recent past election.”
8 We received no negative user complaints or feedback about the use of Google Forms.
9 The Grading State Disclosure studies required participants to find the websites. We reasoned that the state
agency in charge of disseminating disclosure data may not have much control over how easy it is to find their
website using a search engine.
• The final task asked the participants to find a donor to the Governor’s campaign. The
specific request varied with the state’s website design. To begin, they were asked to
find a list of contributors to the Governor’s most recent campaign.
o If they were not able to find such a list, they were sent directly to the feedback
portion of the survey.
o Participants who could locate a list of donors were asked if it could be sorted by
size of contribution.
o If the list could not be sorted, the participant was asked to provide the name of
any donor and the amount given. This participant was then sent to the feedback
section.
o If the list could be sorted, the participants were asked whether the list showed
only single contributions for each donor (e.g. John Smith gave $35 to Governor
Johnson on 4/3/2013), or the total contributions for each donor (e.g. John Smith
gave a total of $3,000 to Governor Johnson in 2013), or both.
o They were then asked for the name of the single biggest donor and the amount
that the donor gave – with the expected answer depending upon whether the
website listed single contributions or aggregate amounts by donor.
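The donor task's branching can be summarized in a short sketch (the step labels are ours, not the survey's; this is a schematic of the routing described above, not the survey's actual logic):

```python
def donor_task_route(found_list, sortable=False):
    """Return the next survey step, following the donor task's branching:
    no list -> feedback; unsortable list -> name any one donor, then feedback;
    sortable list -> classify single vs. total amounts, then name the biggest donor.
    """
    if not found_list:
        return "feedback"
    if not sortable:
        return "name_any_donor_then_feedback"
    return "classify_amounts_then_biggest_donor"
```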
Those were all of the campaign finance tasks we put forward: find the relevant place on the
website for information about the sitting Governor’s most recent election; find out how much
the Governor raised; identify a donor. We also coded the responses as being either correct or
incorrect to see if accuracy varied by state.10
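To illustrate, the tolerance applied to the fundraising-total answer (described in footnote 10) amounts to a simple arithmetic check. This is a sketch of the rule as described, not the project's actual grading code; treating the boundary as inclusive is our assumption.

```python
# Sketch of the grading rule for the Governor's fundraising total: a
# participant's reported figure counts as correct if it falls within
# +/-10% of the true total (a 20% band overall).
def within_tolerance(reported_total, true_total, tol=0.10):
    """Return True when |reported - true| is at most tol * true."""
    return abs(reported_total - true_total) <= tol * true_total
```

For example, against a true total of $100,000, a reported total of $95,000 would count as correct, while $111,000 would not.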
Feedback and evaluation: Once the participants either completed all tasks or reached
the ten minute maximum time, they were sent to the feedback portion of the survey.
• This portion began by asking them how confident they were in their answers.
Confidence is an important factor in assessing usability. Voters are unlikely to use
disclosure information for their voting decisions if they are not confident they have
correctly identified what they have sought.
• The second question asked whether terminology on the assigned state disclosure
website was “easy to understand or was it confusing”?11
• Participants were next asked whether they would be likely to use this state’s disclosure
website again. If users felt they had learned something from the website, or that it was
fairly easy to use, they presumably would be more likely to return to it.
10 For all tasks except the one asking for the Governor’s total contribution amount, we considered only one answer
to be correct for each state. We made an exception for the Governor’s total contribution amount because
several states did not provide a pre-calculated sum of contributions to the Governor, leaving it to the user to
manually calculate the total contributions received by the Governor in his or her most recent election. We
considered any answer correct if it fell within a 20% margin of the real total (-10%/+10%). As we will see in the
results section, even this wide margin of error did not result in impressive accuracy results.
11 Websites that offer explanations of campaign finance terms should rank higher on this usability measure than
those websites that do not. Having definitions of key terms explained to members of the public may help in their
efforts to understand disclosure information.
• They then were asked to rate their overall experience on the state’s website on a scale
of 1 to 5, with 1 being awful and 5 being excellent.
• The fifth feedback question asked participants to estimate (in ranges) how long it took
to complete the tasks. Users who stopped completing tasks because they had reached
their ten minute limit were instructed to choose the final answer – more than ten
minutes.12 We interpret the percentage not able to finish the tasks within a ten minute
window as an indication of usability, since few casual users are likely to remain with a
website longer than that if they cannot find the answers to their questions.
• Finally, we included an open feedback question on which the participants were
encouraged, but not required, to comment on any aspect of the state’s website. They
elected to provide substantive feedback of some form in approximately 42% of our
submissions, with some responses offering very specific criticism and praise of their
assigned states.
The full survey with questions and answer choices is available in Appendix 2.
12 We coded each user for the number of tasks that she or he completed. This ranged from a minimum of 4 tasks to
a maximum of 6 tasks. We did this to make sure that users who finished more quickly did not do so simply
because the website contained less information and therefore permitted them only to complete a smaller
number of tasks. Bivariate linear regression analysis shows that this is not likely to have occurred: the more tasks
that a user completed, the less time they tended to spend on tasks overall (Coefficient=-.37, p=0.001).
RESULTS AND ANALYSIS
Demographics
We begin with the demographic characteristics of our Mechanical Turk sample. We
were successful in recruiting a wider age spectrum than previous campaign finance website
studies, but the Mechanical Turk pool of users is still a much younger population than the
American population in general (see Figure 1 below). Since Mechanical Turk participants have
to be at least eighteen, the bars base their percentages on the population of those 18 years old
or older. The percentage of our respondents in the 18-24 and 35-49 age brackets was roughly
equivalent to that in the 2012 U.S. Census. However, the 25-34 year olds were overrepresented
by 30 percentage points, and those who were 50 or older were underrepresented by 33
percentage points. Mechanical Turk requires a degree of comfort with navigating websites,
which causes a skew towards younger users. In fact, even older users of Mechanical Turk may
be more tech-savvy than average, suggesting, prima facie, that our results should be biased in
favor of finding greater usability.
In terms of education, our survey pool was more highly educated than the U.S.
population as a whole, something we expected based on previous studies using Mechanical
Turk (e.g., Christenson and Glick 2015, Table 1).
Figure 1: Age
U.S. General Population (light) vs. Survey Pool (dark) (18+)
While individuals with no college education
represent 42% of the American population, they made up only 12% of our survey pool.
Individuals with some college or a non-graduate degree represent 48% of the American
population, while they were 77% of our survey pool. Individuals with a graduate degree in our
survey pool roughly matched their proportion in the U.S. as a whole. While this was an
unrepresentative sample of education levels in the U.S., it was more representative than an in-
person undergraduate sample, giving us a stronger case for the external validity of our results.
We also compared the state residences of our users to the 2012 U.S. census (see Table
A2 in Appendix 1). In this case, our pool matched the U.S. population much better. Only in three
cases did a given state’s proportion among our participants differ by more than one percentage
point from its proportion of the U.S. population.13 Finally, our survey pool’s gender breakdown
was slightly different from that of the U.S. population at large, with 53% male and 47% female
respondents. This flips the numbers in the U.S. population, which are 52% female and 48%
male.
Overall, while there were important demographic differences between the U.S.
population and our survey pool, our sample was more representative than other convenience
samples. Most importantly, randomly assigning the users to different states succeeded in
generating balance in our demographic variables across treatment groups (see Tables A3, A4,
and A5 in Appendix 1). Thus, any difference in results can be more directly attributed to
qualities of the state websites and not to differences in demographics or other underlying
variables across state assignments.
Time to Complete Tasks
One good way to know whether non-specialist Internet-savvy participants find a website
usable is to measure how long it takes them to complete some simple tasks. We did not think
most people in the middle of an election season would be willing to spend more than ten
minutes looking for a few basic facts. Therefore, we designed tasks we thought the participants
could finish in less than ten minutes. We also asked them to stop if they could not complete
within that length of time. Figure 2 (below) depicts the percentage in each state that either
spent more than ten minutes or gave up after ten minutes (as instructed). Darker shades
indicate the states with higher percentages taking more than ten minutes; lighter shades are for
the states in which the most people completed the tasks. The same information is also
presented in Appendix 1, Table A6, with states listed in rank order with their complete
numerical results. Table A6a shows the same question ranked by the average numbers of
minutes per user. The states line up in pretty much the same order in the two lists.
13 Texas and California were underrepresented by 2 percentage points. Florida was overrepresented by 2
percentage points.
Figure 2: Time to Complete Tasks
Overall, our participants were unable to complete the tasks within ten minutes in 32% of the
submissions. The median states (Nebraska and Mississippi) were at 30% and 31%. However,
there was a fair amount of variability across states (SD=12.8 percentage points). Washington,
Minnesota and West Virginia performed particularly well, with only about 10% of their assigned
participants needing more than ten minutes. At the other end, more than half of the
participants for North Carolina, Maryland, Texas, and New Hampshire, failed to complete within
the same time frame.
These differences are likely to have real world consequences. In the open comments,
one respondent said about one low-scoring state that the “website is a labyrinth…. A member
of the general public will give up after only a few minutes.” Another wrote this about a mid-
ranking state: “I think it would be hard for an average person to find an answer…. Most people
do not want to spend that much time.” These comments contrast sharply with ones about the
best performing states. Washington’s website was described as “very easy to use.” Alabama’s
was “easy to navigate.” New Mexico’s was “well laid out,” and Minnesota’s was “super simple.”
Participants in the high performing states found what they needed; ones in the low performing
states were frustrated. We believe that all states would do well to learn from the practices of
the top performing states.
Confidence
While the time to complete tasks is one important measure of usability, we also wanted
to know whether our respondents felt confident about the answers they gave. These results are
shown in Figure 3. Darker shades are for the states whose respondents were the most
confident. (For the rankings and individual state scores, see Table A7 in Appendix 1.) The mean
score was about “3” across states, translating to users being “somewhat confident”. There was
also a fair amount of variation across states (SD=.25). The top three states were New Mexico,
Minnesota, and Washington. Their scores were almost a full standard deviation higher than
those for the fifth best ranked state. The bottom three (Texas, North Carolina and Alaska) were
as far below the average as the top three were above it.
Figure 3: Confidence
Understanding
The lack of confidence respondents felt about their answers may be related to their
inability to understand the sometimes technical campaign finance terminology on the states’
websites. Their understanding of website terminology is shown in Figure 4. (See Table A8 in
Appendix 1 for the rankings.) This map again shows New Mexico, Minnesota and Washington
among the top states, with Minnesota averaging 3.5, or about halfway between “somewhat
easy to understand” and “very easy to understand.” States exhibited even more variation on
this measure (SD=.37) than on the confidence measures, with the bottom three states scoring
below 2, which translates to “somewhat confusing.” To illustrate: one participant described
one of the lower ranked websites as having “no layman terminology that told me what I was
looking for or what the forms were…. [It] was very confusing and technical.” Another said of a
different state that “more transparency could be achieved if there were a dictionary of terms to
put the jargon into layman’s language.” Our own unsystematic review suggests the same about
many states. The terminology too often seems geared to lawyers and campaign professionals. It
is admittedly difficult for full-time campaign finance professionals to remember what the
uninitiated user does not know. But if the goal is to inform citizens, their voices will have to be
solicited and heard.
Figure 4: Understanding of Terminology
Use It Again?
If a state wants its website to be used, it should not discourage visitors from wanting to
come back again. We therefore asked our participants whether they would return to a state’s
website if they needed to look up campaign finance information. Figure 5 below (and Table A9
in Appendix 1) shows the responses. On average (mean=2.38), the responses were closer to
“not very likely” than “somewhat likely.” This is discouraging. Indeed, nine of the states were
below 2.0. But not every state had such a negative result. The respondents for each of the top
three states said they were more than somewhat likely, on average, to visit that state’s website
again. This indicates that it is indeed possible to design a disclosure database that will
encourage people to return.
Figure 5: Would You Come Back Again?
Overall Ratings
Figure 6 (below) depicts our broadest measure in which our participants graded their
overall experience on their assigned website. (See Table A10 in Appendix 1 for the states’
rankings.) This measure exhibited more variation across states than any other in this study
(SD=.52). States were graded from 1 to 5, with 5 being the best score. The statistical mean was
2.7, slightly below the midpoint of the scale from “awful” to “excellent.” Once again,
Minnesota, Washington, and New Mexico scored well above their peers. Minnesota’s average
score of 4.07 was particularly high.
Figure 6: Overall Rating
Accuracy
Of course, it is possible for respondents to feel confident about their results while being
wrong. An agency’s goal should be for them to feel confident about correct answers. We coded
all of the task answers as either correct or incorrect to get a sense of how accurate the
participants were across the states. The result for each state appears in Figure 7 below and in
Appendix 1 (Table A11). The differences across states are clearly due to the websites and not
to variations among the respondents. The table shows quite a bit of variation, with a high of
71% (Washington) and low of 40% (shared by several states). The mean was 54%. A full
fourteen states scored below a 50% accuracy rate on average. By any democratic standard, this
represents a failure on the part of the disclosure websites.
It is interesting to note that the participants’ subjective ranking of their overall
experience correlated strongly with our objective grading of the users’ accuracy (see Figure 8).
This tells us that the subjective responses highlight real concerns to which states should be
paying attention.
Figure 7: Accuracy
Figure 8: Relationship between Objective Accuracy and Participants’ Overall Ratings
CONCLUSIONS
Overall, these results suggest that most states have a long way to go. We should
remember that we were asking simple questions about the states’ most highly visible
candidates. If our participants could not find this information about incumbent Governors, we
shudder to think what the results would have been for less visible candidates. Despite these
results, we were encouraged to see the consistently high grades given to a few states, including
Washington, Minnesota, and New Mexico. It is hard not to notice that each of these had a
relatively straightforward user interface, and that ease of use bore no relationship to
artistic aesthetics. In fact, many of the websites that seemed the most attractive
visually did not achieve high usability marks (e.g., Hawaii and Vermont).
More generally, our results point to the fact that usability is a distinct and important
aspect of disclosure that deserves special emphasis by state administrators. We looked for any
correlation between our state website usability rankings and the ranking of state disclosure
scores from the most recent study by the National Institute on Money in State Politics (NIMSP
2016). The NIMSP scores were heavily weighted toward disclosure laws and agency rules. It
turns out that there is no correlation at all between the NIMSP scores and ours (see Figure 9
below). Eleven states received the highest possible scores from NIMSP for independent
expenditure disclosure. These same eleven states were ranked 13th, 21st, 22nd, 24th, 33rd,
34th, 37th, 40th, 41st, 43rd and 48th by our participants for overall satisfaction. This is not
meant to downplay the importance of laws, rules, and the other aspects of disclosure measured
by NIMSP. It does mean that strong laws and rules will not by themselves ensure that
information will be accessible to the public. This is a distinct element of policy implementation
that needs to be given special consideration on its own terms.
Figure 9: Relationship between CFI Usability Rank and NIMSP Scorecard Rank
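For readers who wish to check a comparison of this kind themselves, Spearman’s rank correlation is the natural statistic for two sets of state rankings. The sketch below is illustrative only: the function and its example rankings are hypothetical, not CFI’s actual data (the real rankings appear in Appendix 1 and the NIMSP scorecard).

```python
# Illustrative sketch: Spearman's rank correlation for two tie-free
# rankings of the same states. A value near 0, as Figure 9 depicts,
# means the two rankings are unrelated.

def spearman_rho(ranks_a, ranks_b):
    """rho = 1 - 6 * sum(d^2) / (n * (n^2 - 1)) for tie-free ranks."""
    n = len(ranks_a)
    d_sq = sum((a - b) ** 2 for a, b in zip(ranks_a, ranks_b))
    return 1 - (6 * d_sq) / (n * (n ** 2 - 1))

# Hypothetical rankings of five states on two scorecards.
print(spearman_rho([1, 2, 3, 4, 5], [1, 2, 3, 4, 5]))  # -> 1.0
print(spearman_rho([1, 2, 3, 4, 5], [5, 4, 3, 2, 1]))  # -> -1.0
```

Identical rankings yield 1.0 and perfectly reversed rankings yield -1.0; the scatter in Figure 9 corresponds to a value near zero.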
We would urge state administrators to pay conscious attention to the needs of different
populations as they design their disclosure presentation systems. Election lawyers have
different needs from power users of data, and each of these has different needs from the
non-specialist. Downloading massive Excel or .csv files is not a realistic option for most people who
need or want simple information quickly. These people may be willing to spend ten or fifteen
minutes, but a large number of our recruits could not find what they wanted in that length of
time. And the Mechanical Turk participants were probably more adept at using websites than
the general population. The fact that so many states got such poor grades from even these
respondents tells us something about the scope of the problem. The open feedback told us that
the respondents were often surprised by how tough even these simple tasks were to complete.
If we had not paid them, it is hard to imagine them going through 10 minutes or more of
searching. Even when they could find some data, their responses too often were wrong.
RECOMMENDATIONS
The obvious next question is, what should states do? Our surveys were not designed to test
recommendations for improving disclosure websites. However, the participants’ responses,
combined with our own impressions, do leave us with opinions about how states can improve
their results.
• First, and most important, is that state agencies should take the needs of non-specialist
citizens more seriously. These citizens pay for the systems through their taxes. The
systems are meant to provide them with information they can use before voting. So the
bottom line is that their needs should be served.
• These citizen-users do not have the time or inclination to learn technical campaign
finance law before they try to look up some basic facts. The website should define all of
the important campaign finance terms in the kind of non-technical language that a non-
specialist can understand. There should also be an FAQ page that lets a person drill
down to more specific questions.
• Many home pages send visitors on different paths depending upon their identities.
Candidates may be sent to one page, political committees to another, and journalists to
a third. This can be useful. However, there is an alternative worth considering – choices
defined not by the visitor’s identity, but by what the visitor wants to know. Does she or
he want to know about candidates? Political parties? Independent spending?
Whichever choice the visitor makes should not put that person into separate silos for
data, regulations or press releases. The visitor should be able to move from data to law
to press releases while keeping the subjects narrowed to the key term of their choice.
• Search mechanisms should contain clear instructions for less experienced visitors. The
sites should not rely on visual shortcuts that only computer specialists will recognize.
• Starting a search should be easy. Visitors should not have to know a candidate’s or
committee’s exact name. At a minimum, the database should suggest alternatives for
misspellings. Better would be to let the visitor enter only a few letters and then be given
a list of possibilities. This has the advantage of letting the visitor see the names of other
committees with similar names.
• Many visitors may not know even this much about a filer’s official name. Visitors
therefore should be able to enter their own home addresses and be given a list of all
candidates running to represent that address, grouped by office. The candidates’ names
should be hotlinks to get to the candidates’ reports.
• The candidates’ reports, as well as all other data, summary reports, and press releases,
should be kept online for previous as well as current elections.
• The results should be presented in flexible formats that will allow visitors to see
summary amounts as well as specific transactions.
• The data should be available in standard, downloadable formats for power users. This
should include a codebook or data dictionary defining each field.
• The data should also be presented in a simple spreadsheet format for each filer. The
spreadsheet should be sortable by clicking on the headings of at least the following
fields: donors’ names, donors’ employers, donors’ zip codes, size of contributions,
contribution dates, and donors’ aggregate contributions to this recipient.
• Finally, we know this list of recommendations is only a beginning. Users should be able
to offer comments or ask questions from any of the websites’ pages. These comments
should be taken seriously by staff. Questions of general interest should be added to the
FAQ. More serious questions may call for tweaking the website itself.
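The forgiving name search recommended above can be approximated with a simple prefix match: the visitor types a few letters and sees every filer whose name begins that way. The sketch below is a minimal illustration with hypothetical committee names; a production system would add misspelling tolerance on top of this.

```python
# Minimal sketch of a forgiving filer search: match a typed prefix
# so visitors need not know a committee's exact registered name.
# The committee list here is hypothetical.

def suggest(partial, committees):
    """Return every committee name beginning with the typed letters."""
    prefix = partial.strip().lower()
    return sorted(c for c in committees if c.lower().startswith(prefix))

committees = [
    "Friends of Jane Smith",
    "Friends of John Smythe",
    "Smith for Governor",
    "Citizens for Better Roads",
]

print(suggest("friends", committees))
# -> ['Friends of Jane Smith', 'Friends of John Smythe']
```

Listing all matches rather than only the best one has the side benefit noted in the recommendation: the visitor sees other committees with similar names.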
These recommendations are not exhaustive. More important than any recommendation
is the spirit that guides them. The assumption behind this project has been that one
fundamental purpose of disclosure is to inform citizens. Before the internet, almost all of this
had to be accomplished through intermediaries. The internet has made it possible for agencies
to fulfill this key part of their mandate by making important information available directly. Our
results show that most state agencies fall far short of best practices. This is understandable.
Election lawyers, journalists, and power users will complain when a website leaves them
unhappy, but citizens who come to the website less frequently are not likely to speak out.
Agencies short of money and personnel naturally respond to their most vocal critics. But the
ultimate beneficiaries are supposed to be the voters. Disclosure agencies need to do a better
job of serving their needs.
WORKS CITED
Berinsky, A. J., G.A. Huber, and G.S. Lenz. (2012). Evaluating Online Labor Markets For Experimental
Research: Amazon.com's Mechanical Turk. Political Analysis, 20(3), 351-368.
Campaign Disclosure Project. (2008). Grading State Disclosure 2008. A Report of the Campaign
Disclosure Project, a collaboration of the UCLA School of Law, the Center for Governmental Studies,
and the California Voter Foundation. http://campaigndisclosure.org/gradingstate/execsum.html
Center for Public Integrity. (2015). State Integrity 2015: How Does Your State Rank for Integrity?
https://www.publicintegrity.org/2015/11/09/18822/how-does-your-state-rank-integrity
Christenson, D. P., and D.M. Glick. (2015). Chief Justice Roberts's Health Care Decision Disrobed: The
Microfoundations of the Supreme Court's Legitimacy. American Journal of Political Science, 59(2),
403-418.
Citizens United v. Federal Election Commission, 558 U.S. 310 (2010).
Confessore, N. and M. Thee-Brenan. (2015). Poll Shows Americans Favor an Overhaul of Campaign
Financing. http://www.nytimes.com/2015/06/03/us/politics/poll-shows-americans-favor-overhaul-of-
campaign-financing.html
Dowling, C. M., and A. Wichowsky. (2013). Does It Matter Who’s Behind the Curtain? Anonymity in
Political Advertising and the Effects of Campaign Finance Disclosure. American Politics Research
41(6):965-96.
Dowling, C. M., and A. Wichowsky. (2015). Attacks without Consequence? Candidates, Parties, Groups,
and the Changing Face of Negative Advertising. American Journal of Political Science, 59(1), 19-36.
Fort, K., G. Adda, and K.B. Cohen. (2011). Amazon Mechanical Turk: Gold Mine or Coal
Mine? Computational Linguistics, 37(2), 413-420.
Malbin, M. J., and T. Gais. (1998). The Day after Reform: Sobering Campaign Finance Lessons from the
American States. Albany NY: Rockefeller Institute Press.
National Institute on Money in State Politics. (2016). Scorecard: Essential Disclosure Requirements for
Contributions to State Campaigns, 2016. http://www.followthemoney.org/research/institute-
reports/scorecard-essential-disclosure-requirements-for-contributions-to-state-campaigns-2016
Peterson, R. A. (2001). On the Use of College Students in Social Science Research: Insights From a
Second-Order Meta-Analysis. Journal of Consumer Research, 28(3), 450-461.
Pew Research Center. (2015). Americans’ Internet Access: 2000-2015.
http://www.pewinternet.org/2015/06/26/americans-internet-access-2000-2015/
APPENDIX 1: SUPPLEMENTARY TABLES
For a list of each state’s complete scores, see http://cfinst.org/state/research/usability_stateresults.aspx
Table A1. Average Number of Submissions
State Assignment Average # of Submissions per user
Alabama 2.6
Alaska 2.6
Arizona 2.8
Arkansas 2.7
California 2.8
Colorado 2.6
Connecticut 2.9
Delaware 2.8
Florida 2.7
Georgia 2.6
Hawaii 2.6
Idaho 2.4
Illinois 2.5
Indiana 2.4
Iowa 2.7
Kansas 2.5
Kentucky 3.3
Louisiana 3.5
Maine 2.8
Maryland 2.5
Massachusetts 2.3
Michigan 2.6
Minnesota 2.7
Mississippi 3.3
Missouri 2.9
Montana 2.7
Nebraska 2.4
Nevada 2.8
New Hampshire 2.8
New Jersey 2.7
New Mexico 2.7
New York 2.6
North Carolina 2.7
North Dakota 2.7
Ohio 2.5
Oklahoma 3.0
Oregon 3.0
Pennsylvania 2.6
Rhode Island 2.4
South Carolina 2.9
South Dakota 2.7
Tennessee 2.8
Texas 2.8
Utah 2.5
Vermont 2.7
Virginia 2.5
Washington 2.4
West Virginia 2.7
Wisconsin 2.6
Wyoming 2.9
Table A2. Comparing the Residence of Participants with the
Distribution of All US Residents
State Residence Survey Percent Census Percent
AK <1 <1
AL 1 2
AR 1 1
AZ 2 2
CA 10 12
CO 1 2
CT 1 1
DC <1 <1
DE 1 <1
FL 8 6
GA 2 3
HI <1 <1
IA 1 1
ID <1 1
IL 5 4
IN 2 2
KS 1 1
KY 2 1
LA 2 1
MA 2 2
MD 2 2
ME 1 <1
MI 3 3
MN 2 2
MO 2 2
MS 1 1
MT <1 <1
NC 3 3
ND <1 <1
NE 1 1
NH <1 <1
NJ 3 3
NM 1 1
None provided <1 n/a
NV 1 1
NY 7 6
OH 4 4
OK 1 1
OR 2 1
PA 5 4
RI <1 <1
SC 1 2
SD <1 <1
TN 2 2
TX 6 8
UT 1 1
VA 3 3
VT <1 <1
WA 3 2
WI 2 2
WV 1 1
WY <1 <1
Table A3. Gender Balance by State Assignment
State Assignment Female Male Other
Alabama 52 48 0
Alaska 55 44 1
Arizona 47 53 0
Arkansas 52 48 0
California 48 52 0
Colorado 47 53 0
Connecticut 46 53 1
Delaware 47 53 0
Florida 48 52 0
Georgia 47 53 0
Hawaii 35 65 0
Idaho 64 36 0
Illinois 45 54 1
Indiana 49 51 0
Iowa 53 46 1
Kansas 51 49 0
Kentucky 47 53 0
Louisiana 43 57 0
Maine 52 48 0
Maryland 49 51 0
Massachusetts 43 57 0
Michigan 48 52 0
Minnesota 48 52 0
Mississippi 33 67 0
Missouri 40 60 0
Montana 46 54 0
Nebraska 41 59 0
Nevada 45 55 0
New Hampshire 49 51 0
New Jersey 46 53 1
New Mexico 45 55 0
New York 43 57 0
North Carolina 43 57 0
North Dakota 45 53 2
Ohio 43 57 0
Oklahoma 51 48 1
Oregon 50 50 0
Pennsylvania 52 47 1
Rhode Island 42 57 1
South Carolina 43 56 1
South Dakota 52 47 1
Tennessee 49 51 0
Texas 51 48 1
Utah 49 51 0
Vermont 50 50 0
Virginia 47 53 0
Washington 45 54 1
West Virginia 48 52 0
Wisconsin 46 53 1
Wyoming 44 56 0
Table A4. Age Balance by State Assignment
State Assignment 18 to 24 25 to 34 35 to 49 50 or older
Alabama 9 44 32 15
Alaska 13 46 29 11
Arizona 14 55 19 13
Arkansas 19 40 28 13
California 18 50 24 8
Colorado 16 46 28 10
Connecticut 11 47 32 11
Delaware 10 60 21 9
Florida 15 47 28 10
Georgia 18 42 26 14
Hawaii 13 47 27 12
Idaho 14 50 28 8
Illinois 17 45 29 9
Indiana 16 47 26 11
Iowa 11 50 30 9
Kansas 11 44 36 8
Kentucky 12 59 21 8
Louisiana 11 55 29 5
Maine 11 51 29 9
Maryland 14 40 33 13
Massachusetts 11 47 31 10
Michigan 9 43 38 10
Minnesota 10 49 27 15
Mississippi 16 53 22 9
Missouri 14 41 33 13
Montana 16 41 32 10
Nebraska 17 43 28 13
Nevada 11 47 32 9
New Hampshire 6 43 38 13
New Jersey 11 53 33 3
New Mexico 14 44 32 10
New York 12 49 26 13
North Carolina 14 40 36 11
North Dakota 12 43 36 10
Ohio 21 52 19 8
Oklahoma 10 56 20 14
Oregon 18 47 25 10
Pennsylvania 18 37 29 16
Rhode Island 18 49 25 9
South Carolina 18 52 18 12
South Dakota 10 55 28 7
Tennessee 11 44 29 16
Texas 13 51 23 13
Utah 14 41 35 10
Vermont 17 52 22 9
Virginia 12 54 23 11
Washington 16 49 25 10
West Virginia 14 42 30 13
Wisconsin 13 48 34 5
Wyoming 20 46 28 7
Table A5. Educational Balance by State Assignment
State assignment Less than college (%) Some college or non-graduate degree (%) Graduate degree (%)
Alabama 14 77 9
Alaska 13 76 11
Arizona 11 76 13
Arkansas 16 71 13
California 9 78 13
Colorado 12 74 14
Connecticut 11 77 12
Delaware 12 79 9
Florida 7 79 14
Georgia 12 82 6
Hawaii 10 77 13
Idaho 6 81 13
Illinois 5 87 8
Indiana 6 84 10
Iowa 9 75 16
Kansas 8 80 12
Kentucky 12 82 6
Louisiana 13 76 11
Maine 10 79 11
Maryland 13 73 13
Massachusetts 10 78 12
Michigan 9 74 17
Minnesota 12 73 15
Mississippi 16 74 10
Missouri 6 80 14
Montana 15 79 6
Nebraska 9 83 8
Nevada 16 73 11
New Hampshire 11 77 12
New Jersey 10 78 12
New Mexico 14 78 8
New York 8 85 7
North Carolina 17 70 13
North Dakota 11 75 14
Ohio 9 82 9
Oklahoma 12 75 13
Oregon 15 71 14
Pennsylvania 15 73 12
Rhode Island 9 76 15
South Carolina 11 79 10
South Dakota 19 66 15
Tennessee 9 78 13
Texas 12 75 13
Utah 17 76 7
Vermont 16 70 14
Virginia 14 74 12
Washington 10 77 13
West Virginia 11 81 8
Wisconsin 11 79 10
Wyoming 16 71 13
Table A6. How long would you estimate it took you to complete the tasks using the state disclosure
website? (Sorted by the most who gave up after ten minutes.)
State assignment 0-2 mins. 3-5 mins. 6-8 mins. 9-10 mins. 10+ mins. (or stopped after 10 mins.) % Who Took >10 mins. or Stopped Total (n)
North Carolina 0 11 14 14 62 61% 101
Maryland 2 11 11 19 60 58% 103
Texas 1 7 18 16 58 58% 100
New Hampshire 0 15 18 13 54 54% 100
Wisconsin 3 11 15 22 49 49% 100
Montana 4 11 19 17 48 48% 99
Delaware 1 11 24 17 47 47% 100
Colorado 1 15 22 16 46 46% 100
Ohio 1 10 23 20 46 46% 100
Connecticut 0 8 24 23 46 46% 101
Oklahoma 1 11 24 21 42 42% 99
Michigan 0 12 28 19 41 41% 100
Louisiana 1 15 24 21 39 39% 100
Oregon 0 12 27 23 38 38% 100
Rhode Island 2 13 30 18 38 38% 101
Alaska 3 8 34 17 37 37% 99
Hawaii 1 16 33 12 37 37% 99
Illinois 3 16 27 17 37 37% 100
South Dakota 3 10 33 19 35 35% 100
Indiana 2 11 31 23 35 34% 102
Vermont 4 14 25 23 34 34% 100
Tennessee 2 9 26 30 33 33% 100
New York 1 19 33 17 32 31% 102
Mississippi 1 14 28 25 31 31% 99
Nebraska 3 17 25 25 31 31% 101
Missouri 1 19 31 20 30 30% 101
Pennsylvania 0 21 31 19 29 29% 100
Maine 1 20 27 24 28 28% 100
North Dakota 3 23 24 24 27 27% 101
South Carolina 3 20 32 19 26 26% 100
Iowa 1 19 36 19 26 26% 101
Virginia 3 28 29 15 26 26% 101
Idaho 3 17 34 21 25 25% 100
New Jersey 0 16 33 26 25 25% 100
Arkansas 3 18 36 19 24 24% 100
Kentucky 2 26 32 16 24 24% 100
California 4 12 40 21 23 23% 100
Massachusetts 3 21 34 19 22 22% 99
Arizona 1 27 37 15 22 22% 102
Nevada 1 19 44 14 21 21% 99
Georgia 0 25 38 16 21 21% 100
Kansas 0 22 32 25 21 21% 100
New Mexico 3 33 30 16 18 18% 100
Wyoming 7 22 30 24 18 18% 101
Florida 4 28 37 17 18 17% 104
Utah 0 20 34 29 17 17% 100
Alabama 3 31 35 15 16 16% 100
West Virginia 3 23 38 24 11 11% 99
Minnesota 4 37 36 13 11 11% 101
Washington 4 35 38 13 10 10% 100
Total 97 889 1,464 970 1,595 32% 5,015
Table A6a. How long would you estimate it took you to complete the tasks
using the state disclosure website?
0-2 mins 3-5 mins 6-8 mins 9-10 mins 10+ Total (n) Mean
STATE Score (1*n) Score (4*n) Score (7*n) Score (9*n) Score (14*n; 14 minutes assumed if stopped) n Score (Mean)
Washington 4 140 266 117 140 100 6.67
Minnesota 4 148 252 117 154 101 6.68
Alabama 3 124 245 135 224 100 7.31
West Virginia 3 92 266 216 154 99 7.38
New Mexico 3 132 210 144 252 100 7.41
Florida 4 112 259 153 252 104 7.50
Wyoming 7 88 210 216 252 101 7.65
Arizona 1 108 259 135 308 102 7.95
Georgia 0 100 266 144 294 100 8.04
Virginia 3 112 203 135 364 101 8.09
Kentucky 2 104 224 144 336 100 8.10
Massachusetts 3 84 238 171 308 99 8.12
Nevada 1 76 308 126 294 99 8.13
Utah 0 80 238 261 238 100 8.17
Kansas 0 88 224 225 294 100 8.31
Arkansas 3 72 252 171 336 100 8.34
South Carolina 3 80 224 171 364 100 8.42
California 4 48 280 189 322 100 8.43
Idaho 3 68 238 189 350 100 8.48
North Dakota 3 92 168 216 378 101 8.49
Iowa 1 76 252 171 364 101 8.55
Maine 1 80 189 216 392 100 8.78
Pennsylvania 0 84 217 171 406 100 8.78
New Jersey 0 64 231 234 350 100 8.79
Missouri 1 76 217 180 420 101 8.85
New York 1 76 231 153 448 102 8.91
Nebraska 3 68 175 225 434 101 8.96
Vermont 4 56 175 207 476 100 9.18
Mississippi 1 56 196 225 434 99 9.21
Illinois 3 64 189 153 518 100 9.27
Hawaii 1 64 231 108 518 99 9.31
South Dakota 3 40 231 171 490 100 9.35
Indiana 2 44 217 207 490 102 9.41
Rhode Island 2 52 210 162 532 101 9.49
Tennessee 2 36 182 270 462 100 9.52
Alaska 3 32 238 153 518 99 9.54
Louisiana 1 60 168 189 546 100 9.64
Oregon 0 48 189 207 532 100 9.76
Michigan 0 48 196 171 574 100 9.89
Oklahoma 1 44 168 189 588 99 10.00
Colorado 1 60 154 144 644 100 10.03
Montana 4 44 133 153 672 99 10.16
Delaware 1 44 168 153 658 100 10.24
Ohio 1 40 161 180 644 100 10.26
Wisconsin 3 44 105 198 686 100 10.36
Connecticut 0 32 168 207 644 101 10.41
New Hampshire 0 60 126 117 756 100 10.59
Maryland 2 44 77 171 840 103 11.01
Texas 1 28 126 144 812 100 11.11
North Carolina 0 44 98 126 868 101 11.25
Total 97 3,556 10,248 8,739 22,344 5,017 8.97
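The weighted scores in Table A6a follow directly from the respondent counts in Table A6: each time bracket is assigned the minute value in the column heading (1, 4, 7, and 9, with 14 assumed for anyone who took more than ten minutes or stopped), multiplied by the respondent count, and divided by n. As a sketch, the snippet below reproduces Washington’s mean from its Table A6 counts.

```python
# Recompute a Table A6a weighted mean: minute weights per time
# bracket, times respondent counts, divided by total respondents.
WEIGHTS = {"0-2": 1, "3-5": 4, "6-8": 7, "9-10": 9, "10+": 14}

def weighted_mean_minutes(counts):
    """counts maps each time bracket to its number of respondents."""
    n = sum(counts.values())
    total = sum(WEIGHTS[bracket] * k for bracket, k in counts.items())
    return round(total / n, 2)

# Washington's respondent counts from Table A6: 4, 35, 38, 13, 10.
washington = {"0-2": 4, "3-5": 35, "6-8": 38, "9-10": 13, "10+": 10}
print(weighted_mean_minutes(washington))  # -> 6.67
```

The result matches Washington’s 6.67 in Table A6a; the same calculation reproduces every row in the table.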
Table A7. How confident are you in the answers provided in the preceding tasks?
Not at all (1) Not very (2) Somewhat (3) Very (4) Total (n) Mean
STATE Weighted Score (1 times n) Weighted Score (2 times n) Weighted Score (3 times n) Weighted Score (4 times n) n Weighted Score (Mean)
New Mexico 1 4 132 212 100 3.49
Minnesota 1 16 99 236 101 3.49
Washington 0 6 150 188 100 3.44
Florida 5 16 123 200 104 3.31
Arkansas 5 18 132 168 100 3.23
Georgia 3 18 156 144 100 3.21
West Virginia 1 24 156 136 99 3.20
Arizona 4 26 132 164 102 3.20
Nevada 5 26 126 156 99 3.16
Kentucky 3 24 156 132 100 3.15
Alabama 5 30 123 156 100 3.14
South Carolina 2 38 150 116 100 3.06
Iowa 10 18 147 132 101 3.04
Massachusetts 4 32 156 108 99 3.03
Utah 13 18 120 152 100 3.03
Wyoming 11 18 141 136 101 3.03
Kansas 8 32 129 132 100 3.01
Mississippi 8 30 132 128 99 3.01
Hawaii 7 20 174 96 99 3.00
Idaho 9 24 147 120 100 3.00
South Dakota 7 26 159 108 100 3.00
Maine 9 32 126 132 100 2.99
Pennsylvania 9 20 162 108 100 2.99
New York 8 26 165 104 102 2.97
North Dakota 10 30 135 124 101 2.96
Tennessee 8 36 132 120 100 2.96
Indiana 8 34 150 108 102 2.94
Missouri 9 36 132 120 101 2.94
Nebraska 9 38 126 124 101 2.94
California 7 24 183 80 100 2.94
Colorado 8 48 102 136 100 2.94
New Jersey 5 34 171 84 100 2.94
Virginia 7 48 132 104 101 2.88
Louisiana 12 30 144 100 100 2.86
Illinois 14 40 102 128 100 2.84
Oregon 19 22 117 124 100 2.82
Rhode Island 11 38 150 84 101 2.80
Michigan 16 40 123 92 100 2.71
Oklahoma 11 54 132 68 99 2.68
Vermont 21 42 87 116 100 2.66
Montana 23 36 84 120 99 2.66
Maryland 21 46 90 116 103 2.65
Delaware 17 52 102 92 100 2.63
Ohio 13 54 132 64 100 2.63
New Hampshire 22 44 93 100 100 2.59
Wisconsin 20 48 105 84 100 2.57
Connecticut 21 46 111 80 101 2.55
Alaska 19 48 123 60 99 2.53
North Carolina 26 34 120 72 101 2.50
Texas 24 52 87 84 100 2.47
Total 519 1,598 6,561 6,048 5,017 2.94
Table A8. Was the terminology easy to understand or was it confusing?
Very confusing (1) Somewhat confusing (2) Somewhat easy (3) Very easy (4) Total (n) Mean
STATE Weighted Score (1 times n) Weighted Score (2 times n) Weighted Score (3 times n) Weighted Score (4 times n) n Weighted Score (Mean)
Minnesota 3 14 87 248 101 3.49
New Mexico 3 24 114 188 100 3.29
Florida 2 40 132 152 104 3.13
Washington 2 36 144 128 100 3.10
Alabama 6 38 111 152 100 3.07
Georgia 4 42 141 112 100 2.99
South Carolina 3 52 129 112 100 2.96
West Virginia 6 42 129 116 99 2.96
Arizona 3 56 141 96 102 2.90
Idaho 5 50 147 84 100 2.86
Arkansas 13 46 99 124 100 2.82
Kentucky 4 66 123 88 100 2.81
Nevada 10 52 117 96 99 2.78
Virginia 9 62 108 100 101 2.76
Wyoming 13 50 117 96 101 2.73
Utah 15 40 129 88 100 2.72
North Dakota 15 42 129 88 101 2.71
New York 12 52 132 80 102 2.71
Hawaii 9 56 138 64 99 2.70
Massachusetts 11 56 120 80 99 2.70
California 9 58 141 60 100 2.68
Maine 18 44 120 80 100 2.62
New Jersey 14 54 126 68 100 2.62
South Dakota 6 78 126 52 100 2.62
Mississippi 14 56 129 56 99 2.58
Kansas 13 70 108 64 100 2.55
Pennsylvania 16 60 111 68 100 2.55
Tennessee 19 66 75 92 100 2.52
Iowa 22 48 114 68 101 2.50
Oregon 17 66 102 64 100 2.49
Illinois 21 70 66 88 100 2.45
Louisiana 21 64 84 76 100 2.45
Missouri 20 62 105 60 101 2.45
Nebraska 24 54 93 76 101 2.45
Indiana 20 66 105 56 102 2.42
Vermont 26 52 90 72 100 2.40
Rhode Island 22 74 84 56 101 2.34
Oklahoma 23 76 78 48 99 2.27
Colorado 26 72 69 60 100 2.27
Michigan 31 62 60 72 100 2.25
Montana 32 56 69 64 99 2.23
Alaska 27 76 60 56 99 2.21
Ohio 30 58 93 40 100 2.21
Delaware 36 60 57 60 100 2.13
New Hampshire 39 58 54 56 100 2.07
Connecticut 39 60 72 32 101 2.01
Maryland 39 74 45 48 103 2.00
Wisconsin 46 56 51 36 100 1.89
North Carolina 44 70 51 20 101 1.83
Texas 49 56 51 24 100 1.80
Total 911 2,794 5,079 4,064 5,017 2.56
Table A9. How likely would you be to use this disclosure website if you were seeking information
on money in this state's politics in the future?
Not at all (1) Not very (2) Somewhat (3) Very (4) Total (n) Mean
STATE (1 times n) (2 times n) (3 times n) (4 times n) n Mean Score
Minnesota 8 16 78 236 101 3.35
Washington 7 24 114 172 100 3.17
Alabama 10 30 105 160 100 3.05
New Mexico 8 38 117 136 100 2.99
Florida 12 32 123 140 104 2.95
West Virginia 6 56 123 96 99 2.84
Georgia 11 48 126 92 100 2.77
South Carolina 11 54 117 92 100 2.74
Massachusetts 13 54 96 108 99 2.74
Arizona 15 46 114 104 102 2.74
Nevada 18 36 108 108 99 2.73
Kentucky 13 50 120 88 100 2.71
California 17 48 114 84 100 2.63
Virginia 18 58 81 108 101 2.62
Hawaii 15 58 114 68 99 2.58
Arkansas 19 52 105 80 100 2.56
Kansas 20 54 93 88 100 2.55
New Jersey 18 50 126 60 100 2.54
New York 21 58 87 92 102 2.53
Wyoming 28 30 108 88 101 2.51
Tennessee 20 60 87 84 100 2.51
Pennsylvania 24 50 102 68 100 2.44
Idaho 21 62 99 60 100 2.42
Mississippi 26 52 84 76 99 2.40
Utah 31 38 90 80 100 2.39
North Dakota 25 52 114 48 101 2.37
Maine 24 62 96 52 100 2.34
Louisiana 27 56 90 60 100 2.33
Indiana 29 54 96 56 102 2.30
South Dakota 21 80 84 44 100 2.29
Oklahoma 21 80 87 36 99 2.26
Oregon 31 50 99 44 100 2.24
Iowa 37 44 72 72 101 2.23
Rhode Island 30 62 87 44 101 2.21
Illinois 35 54 63 68 100 2.20
Nebraska 29 66 87 40 101 2.20
Ohio 37 50 78 48 100 2.13
Michigan 38 48 81 44 100 2.11
Alaska 36 62 63 44 99 2.07
Missouri 36 68 57 48 101 2.07
Connecticut 40 54 75 36 101 2.03
Colorado 40 58 69 32 100 1.99
Delaware 47 48 63 32 100 1.90
Vermont 48 48 60 32 100 1.88
Montana 43 72 45 20 99 1.82
North Carolina 53 54 30 44 101 1.79
Wisconsin 51 62 39 20 100 1.72
Maryland 58 48 42 28 103 1.71
Texas 52 70 27 16 100 1.65
New Hampshire 59 56 27 16 100 1.58
Total 1,357 2,614 4,365 3,592 5,017 2.38
Table A10. How would you rate your overall experience on the disclosure website on a scale of 1
to 5, with 1 being awful and 5 being excellent?
[1] [2] [3] [4] [5] Total (n) Mean
STATE (1 times n) (2 times n) (3 times n) (4 times n) (5 times n) Total n Mean Score
Minnesota 6 10 36 124 235 101 4.07
Washington 2 18 63 188 105 100 3.76
New Mexico 5 24 75 164 85 100 3.53
Alabama 8 32 54 132 125 100 3.51
Florida 9 32 63 172 75 104 3.38
West Virginia 8 28 93 136 60 99 3.28
Georgia 7 28 120 112 55 100 3.22
Massachusetts 5 46 96 116 50 99 3.16
South Carolina 9 38 96 120 50 100 3.13
Nevada 17 24 63 168 35 99 3.10
Kentucky 7 42 102 128 30 100 3.09
Arizona 14 30 102 116 50 102 3.06
Hawaii 10 40 96 124 30 99 3.03
Virginia 11 58 81 100 45 101 2.92
New York 12 56 84 108 35 102 2.89
Arkansas 17 46 69 120 35 100 2.87
California 16 38 99 104 30 100 2.87
Wyoming 25 30 60 128 45 101 2.85
New Jersey 12 54 96 92 30 100 2.84
North Dakota 20 34 93 108 30 101 2.82
Idaho 17 34 114 100 15 100 2.80
Utah 21 34 90 104 30 100 2.79
Kansas 15 60 72 104 25 100 2.76
Maine 20 40 96 88 30 100 2.74
Pennsylvania 18 50 87 84 35 100 2.74
Mississippi 20 34 117 76 20 99 2.70
Tennessee 22 44 87 84 30 100 2.67
Louisiana 17 58 90 80 20 100 2.65
South Dakota 16 56 102 76 15 100 2.65
Indiana 24 44 102 56 40 102 2.61
Oklahoma 20 62 84 64 20 99 2.53
Oregon 26 46 81 92 5 100 2.50
Illinois 27 58 60 72 30 100 2.47
Rhode Island 23 70 66 68 20 101 2.45
Iowa 37 40 57 64 45 101 2.41
Nebraska 26 66 69 56 25 101 2.40
Alaska 28 66 60 48 30 99 2.34
Michigan 34 46 75 44 35 100 2.34
Missouri 31 58 69 52 25 101 2.33
Colorado 31 52 81 52 15 100 2.31
Connecticut 34 50 69 68 10 101 2.29
Ohio 38 46 66 44 30 100 2.24
Delaware 38 64 60 20 25 100 2.07
Vermont 41 56 57 32 20 100 2.06
Montana 43 56 42 52 5 99 2.00
Maryland 46 56 66 24 5 103 1.91
Wisconsin 50 50 60 12 10 100 1.82
Texas 53 48 39 36 5 100 1.81
North Carolina 53 56 27 40 5 101 1.79
New Hampshire 58 38 51 20 5 100 1.72
Total 1,147 2,248 3,840 4,372 1,865 5,017 2.69
Table A11. OVERALL USER TASK ACCURACY BY STATE
State assigned Avg. Percent Correct Avg. # of Tasks Completed n
Washington 72 5.7 100
Pennsylvania 69 5.3 100
Arizona 68 5.2 102
Georgia 67 5.5 100
Minnesota 66 5.6 101
Arkansas 66 5.1 100
New Mexico 64 5.4 100
Massachusetts 62 5.5 99
Virginia 62 5.3 101
Florida 61 5.7 104
Nebraska 61 5.0 101
Iowa 59 5.3 101
South Carolina 59 5.4 100
Kansas 58 5.3 100
Kentucky 58 5.2 100
Tennessee 58 5.4 100
Wyoming 58 5.1 101
West Virginia 58 5.3 99
Maine 57 5.3 100
Oregon 57 4.8 100
Louisiana 56 5.2 100
Nevada 56 5.4 99
Utah 56 4.9 100
Alabama 55 5.5 100
Idaho 54 5.3 100
New Jersey 54 5.3 100
Illinois 54 5.0 100
Indiana 53 5.0 102
Rhode Island 52 4.9 101
California 52 5.1 100
Hawaii 52 4.9 99
Alaska 52 4.9 99
South Dakota 52 5.2 100
Maryland 51 4.5 103
New York 50 5.3 102
Michigan 50 4.9 100
North Carolina 50 4.5 101
Oklahoma 50 4.7 99
Colorado 49 4.7 100
Connecticut 49 4.8 101
Montana 48 5.0 99
Texas 48 4.5 100
Ohio 47 5.0 100
New Hampshire 45 4.3 100
Delaware 44 4.7 100
Wisconsin 44 4.5 100
North Dakota 43 5.1 101
Mississippi 41 4.8 99
Vermont 40 4.3 100
Missouri 40 4.6 101
Total 55 5.1 5,015
Appendix 2: Full Survey
Appendix 3
State Link
Alabama http://www.sos.state.al.us/vb/inquiry/inquiry.aspx?area=Campaign%20Finance
Alaska https://aws.state.ak.us/ApocReports/Campaign/
Arizona http://www.azsos.gov/elections/campaign-finance-reporting
Arkansas http://www.sos.arkansas.gov/filing_search/index.php/filing/search/new
California http://cal-access.sos.ca.gov/Campaign/
Colorado http://tracer.sos.colorado.gov/PublicSite/Search.aspx
Connecticut http://seec.ct.gov/eCrisHome/eCRIS_Search/eCrisSearchHome
Delaware https://cfrs.elections.delaware.gov/
Florida http://dos.elections.myflorida.com/campaign-finance/contributions/
Georgia http://media.ethics.ga.gov/Search/Campaign/Campaign_ByName.aspx
Hawaii http://ags.hawaii.gov/campaign/
Idaho http://www.sos.idaho.gov/eid/index.html
Illinois https://www.elections.il.gov/InfoForVoters.aspx
Indiana https://campaignfinance.in.gov/PublicSite/Homepage.aspx
Iowa https://webapp.iecdb.iowa.gov/publicview/NewContributionSearch.aspx
Kansas http://www.kansas.gov/ethics/Campaign_Finance/Campaign_Contributor_Data/index.html
Kentucky http://www.kref.state.ky.us/krefsearch/
Louisiana http://ethics.la.gov/EthicsViewReports.aspx?Reports=CampaignFinance
Maine http://mainecampaignfinance.com/PublicSite/homepage.aspx
Maryland https://campaignfinancemd.us/Home/Disclosures
Massachusetts http://www.ocpf.us/#data
Michigan http://miboecfr.nictusa.com/cgi-bin/cfr/mi_com.cgi
Minnesota http://reports.cfb.mn.gov/dataViewer/cfbsearch.php
Mississippi http://www.sos.ms.gov/Elections-Voting/Pages/Campaign-Finance-Search.aspx
Missouri http://www.mec.mo.gov/MEC/Campaign_Finance/Searches.aspx
Montana http://campaignreport.mt.gov/
Nebraska http://www.nadc.nebraska.gov/cf/campaign_filings.html
Nevada http://nvsos.gov/SOSCandidateServices/AnonymousAccess/CEFDSearchUU/Search.aspx#individual_search
New Hampshire http://sos.nh.gov/CampFin.aspx
New Jersey http://www.elec.state.nj.us/publicinformation/searchdatabase.htm
New Mexico https://www.cfis.state.nm.us/media/
New York http://www.elections.ny.gov/CFViewReports.html
North Carolina https://ncsbe.azurewebsites.net/Campaign-Finance/report-search
North Dakota https://vip.sos.nd.gov/PortalListDetails.aspx?ptlhPKID=116&ptlPKID=2#content-start
Ohio http://www.sos.state.oh.us/SOS/CampaignFinance/Search.aspx
Oklahoma https://www.ok.gov/ethics/public/index.php
Oregon http://sos.oregon.gov/elections/Pages/orestar.aspx
Pennsylvania https://www.campaignfinanceonline.state.pa.us/Pages/CampaignFinanceHome.aspx
Rhode Island http://www.elections.state.ri.us/finance/publicinfo/
South Carolina http://apps.sc.gov/PublicReporting/Index.aspx?AspxAutoDetectCookieSupport=1
South Dakota https://sdsos.gov/elections-voting/campaign-finance/Search.aspx
Tennessee https://apps.tn.gov/tncamp-app/public/search.htm
Texas https://www.ethics.state.tx.us/main/search.htm
Utah http://disclosures.utah.gov/
Vermont https://campaignfinance.sec.state.vt.us/
Virginia http://cfreports.sbe.virginia.gov/
Washington https://www.pdc.wa.gov/browse
West Virginia http://www.sos.wv.gov/elections/campaignfinance/Pages/default.aspx
Wisconsin http://cfis.wi.gov/Public/Registration.aspx?page=FiledReports
Wyoming https://www.wycampaignfinance.gov/
Website links functional as of 5/17/2016.
Visit CFI’s website at www.CFInst.org to read all of our reports and analyses of money in politics at the federal, state and local levels.
Follow CFI on Twitter at @CFInst.org