Balancing Research & Privacy
E. C. Hedberg
Arizona State University
& NORC at the University of Chicago
Today
• Topics for conversation
– Why is external research important
– Data products typically produced
– What is disclosure?
– Methods to avoid disclosure (aggregate tables)
– Methods to avoid disclosure (individual-level data)
• Research results about a common disclosure method
• Pilot work on creation of synthetic data
Access
Current state of researcher access:
• It is very hard to get access to SLDS data
• In the 8 years of working in this area, I have had access to 16 states
• It is very rare to have this level of access
Privacy
• Researchers, educators, and parents are increasingly concerned about which student data elements are recorded and who has access to them
• FERPA regulations set a high bar for the release of information
– Personally identifiable information (PII) must be removed
• But what constitutes PII?
– Privacy Technical Assistance Center (PTAC) offers some advice
FERPA Section 34 CFR § 99.31(b)(1)
Search “student data” in news
PTAC Advice
Access
• There are a wide variety of interpretations of FERPA
– Some states allow data use through the audit and evaluation exception
– Some states don't allow researchers access at all
WHO CARES ABOUT RESEARCH?
Research
A valid question is: “why allow research with state longitudinal data systems (SLDS) at all?”
Research
• Premise: Good policy is based on the best available evidence as to:
1. Facts on the ground
2. The mechanisms of achievement
3. The results of previous policies
• If we want to enact good policy, we need to (at least) know these three things
Research
• SLDS data provide a good source, and sometimes the only source, of evidence to support positions
• The budgets for national surveys of education achievement are declining
– Research about current mechanisms is more difficult
• States such as Arizona usually make up a small portion of those surveys due to sampling plans
Research
• The only way to evaluate facts on the ground in Arizona is with Arizona data
• The only way to evaluate Arizona policies is with Arizona data
• Data from a sample of districts is not necessarily representative.
• A complex, representative sample can be just as expensive as the SLDS
Research
• Finally, there is return on investment
• Nationally, over 600 million federal dollars have been invested in SLDS
• Who is going to analyze all this data?
• Much of it can be analyzed by the state…
• …but it is also efficient and prudent to partner with trained researchers.
Research Ecosystem
• Arizona SLDS provides a key resource to support policy investigation to improve education for Arizona residents
• Arizona can partner with ASU and UofA researchers
• Researchers, in turn, get credit for their work, get tenure, and provide return on investment for Arizona
Research Ecosystem
• However, this ecosystem is based on a risky exchange of information
• Private data, protected by FERPA, is the key resource.
• The safest thing to do is to not collect it, but that cripples Arizona's ability to use evidence to support policies
Key Question
How can we balance research and privacy?
Types of data products
• The research ecosystem is supported by several types of data products
Types of Data Products
• Aggregated tables
• Individual-level data for research
• Also:
– Research centers (e.g., Texas; http://www.utaustinerc.org)
– Web-based interfaces to analyze data on a server (e.g., Rhode Island; http://ridatahub.org)
Disclosure Risk
• So, what are we worried about?
• We are concerned that an "intruder" will be able to identify individuals and obtain sensitive information (scores, income level, etc.) about them
– Identification through the use of published tables
– Identification through access to individual-level data
Disclosure Risk
• In survey research, the bar is whether someone who knows a person is in the sample can identify them
• In administrative data, since (almost) everyone is in the data, the bar for disclosure risk is far lower
AGGREGATE TABLES
Aggregate Tables
• Descriptive tables indicating counts or other statistics broken down by other nominal characteristics
• Each table needs to balance disclosure risk with data utility
Example
• Random sample from the ECLS
• Reading level by poverty by gender… by race
Problem?
Conceptual Diagram
Taken from Duncan, G. T., Fienberg, S. E., Krishnan, R., Padman, R., & Roehrig, S. F. (2001). Disclosure limitation methods and information loss for tabular data. In Confidentiality, Disclosure and Data Access: Theory and Practical Applications for Statistical Agencies, 135-166.
Options
• One option is to enact cell suppression (a minimal sketch follows below)
– If a cell in a table is based on n or fewer observations, the cell is suppressed
• This is easy to implement, but has problems
– It is often possible to reproduce the cell count using other cells and marginal totals
– Enacting complementary suppression to avoid such tactics is often complicated and removes more data
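A minimal sketch of primary cell suppression, assuming student records in a pandas DataFrame; the column names and the threshold n are illustrative, not the rule any particular state uses:

```python
import pandas as pd

# Hypothetical student-level records (column names are illustrative).
students = pd.DataFrame({
    "gender":  ["F", "M", "F", "M", "F", "M", "F", "F", "M", "M"],
    "poverty": [1, 1, 0, 0, 1, 0, 1, 0, 0, 1],
})

# Count table: gender by poverty status.
counts = pd.crosstab(students["gender"], students["poverty"])

# Primary suppression: blank any cell based on n or fewer observations.
n = 3  # illustrative threshold
suppressed = counts.mask(counts <= n)

print(suppressed)
# The NaN cells would be published as "suppressed". Complementary suppression
# (blanking extra cells so marginal totals cannot back out the value) is a
# separate, harder step not shown here.
```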
Alternatives to Cell Suppression
• Rounding (a minimal sketch follows below)
– All cells in a table are rounded to mask true values
– Problems:
• Can destroy even more information than cell suppression
• Hard to define the rounding rules
• Tables may be inconsistent
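For contrast, a minimal sketch of rounding every cell of a count table to a common base; the base of 5 is an illustrative choice, not a rule from the talk:

```python
import pandas as pd

counts = pd.DataFrame({"not_poor": [12, 3], "poor": [7, 18]}, index=["F", "M"])

base = 5  # illustrative rounding base
rounded = (counts / base).round() * base  # round each cell to the nearest multiple of 5

print(rounded)
# Row and column sums of the rounded table generally no longer match the
# rounded marginals, which is the "tables may be inconsistent" problem.
```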
Overall
• Data products such as aggregate tables should be vetted by a specialist data auditor
– A pre-specified level of risk is discussed
– Procedures such as linear programming are used to analyze cells and quantify risk
– Problems:
• It is an expensive position or service
MICRO-DATA FILES
Rounding, Perturbing
• One option is to limit small cells by rounding covariates to larger units so that large tables that identify individuals are not possible (a minimal sketch follows below)
– Problems:
• Destroys data
• May limit analyses
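A minimal sketch of coarsening a covariate into larger units; the band edges and labels are hypothetical:

```python
import pandas as pd

scores = pd.DataFrame({"scale_score": [431, 478, 502, 515, 560, 601]})

# Coarsen the exact score into broad bands so fewer unique combinations
# exist in any cross-tabulation (band edges are illustrative).
scores["score_band"] = pd.cut(
    scores["scale_score"],
    bins=[0, 450, 500, 550, 9999],
    labels=["below", "approaching", "meets", "exceeds"],
)
print(scores)
```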
Micro-aggregation
• Individuals are grouped based on nominal groups or through a cluster analysis
• Mean scores are assigned to each group
• Groups are analyzed using weights (a minimal sketch follows the references below)
• See, e.g.:
– Sande, G. (2001). Methods for Data Directed Microaggregation in One Dimension. Proceedings of New Techniques and Technologies for Statistics/Exchange of Technology and Know-how, 18-22.
– Domingo-Ferrer, J., & Mateo-Sanz, J. M. (2002). Practical data-oriented microaggregation for statistical disclosure control. IEEE Transactions on Knowledge and Data Engineering, 14(1), 189-201.
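A minimal, one-dimensional sketch of the micro-aggregation idea; the group size k and the column name are assumptions for illustration, not the cited algorithms:

```python
import pandas as pd

df = pd.DataFrame({"score": [402, 415, 433, 470, 488, 512, 540, 575, 590]})
k = 3  # minimum group size (illustrative)

# Sort on the sensitive value and form consecutive groups of size k.
df = df.sort_values("score").reset_index(drop=True)
df["group"] = df.index // k

# Release one record per group: the group mean plus a weight equal to the
# number of students it represents, so analyses can use weighted statistics.
released = (df.groupby("group")["score"]
              .agg(mean_score="mean", weight="size")
              .reset_index())
print(released)
```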
CONSEQUENCES OF REDACTION
Data Redaction
• One common safeguard for privacy is to redact the data of unique individuals
• This strategy is harmful to the analysis, however
Data Redaction
• Common practice is to redact "small cells" from data before giving it to researchers
• For each demographic combination within a district:school:grade cell:
– if 5 or fewer students have that combination (gender, disability status, race/ethnicity, English learner status, poverty status), their test scores are removed from the data
• This presents major problems for even basic analyses (a sketch of the rule follows below)
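A minimal sketch of that small-cell redaction rule; the column names are hypothetical, and the grouping keys mirror the district:school:grade plus demographic combination with the threshold of 5:

```python
import numpy as np
import pandas as pd

# Hypothetical student-level file (column names are illustrative).
df = pd.DataFrame({
    "district": ["A"] * 8, "school": ["S1"] * 8, "grade": [5] * 8,
    "gender":   ["F", "F", "M", "M", "M", "M", "M", "M"],
    "race":     ["B", "B", "W", "W", "W", "W", "W", "W"],
    "score":    [510, 495, 530, 522, 547, 501, 533, 518],
})

keys = ["district", "school", "grade", "gender", "race"]  # demographic combination

# Count students in each district:school:grade x demographic cell and blank
# the test score wherever 5 or fewer students share that combination.
cell_n = df.groupby(keys)["score"].transform("size")
df["score_redacted"] = df["score"].where(cell_n > 5, np.nan)

print(df[["gender", "race", "score", "score_redacted"]])
# Here the two Black female students lose their scores; the six White male
# students keep theirs.
```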
Data Redaction Test
• Six states agreed to participate in a study about the consequences of data redaction
– Names withheld for presentation
• Original, unredacted data were provided
• Analyses performed using original data
• Redaction rules applied
• Reanalysis and comparison of results
• Math and reading, grades 3-8, analyzed
5th Graders
5th grade redaction rates
• Redaction process can remove up to 35 percent of the data!
• For minority groups, much of the data can be removed.
Data Redaction Consequences
• Mean differences are exaggerated
• Intraclass correlations increase
• The cause is the removal of heterogeneous schools
Bias in mean differences
Group      Correlation
Black      0.45
Hispanic   0.50
Poor       0.65

The level of bias in the mean estimate from the redacted sample is positively correlated with the rate of redaction of that particular group.
Unit of analysis: state-subject-grade combinations.
Bias is related to the level of redaction
Bias in design parameters
Alternatives to Data Redaction
• Hedges and Hedberg have three active grants looking at alternative methodologies to data redaction
– Spencer Foundation pilot grant
– IES methodology grant
– NSF Education and Human Resources grant
• The Spencer grant is completing now; the IES and NSF grants are in data-gathering stages
Pilot test of synthetic data
• Data from the State of Arkansas, 2010
• Examine 5th grade literacy scores
• Use data with pretests from 4th and 3rd grade
Pilot test of synthetic data
• Micro-data with sensitive columns (i.e., test scores)
• Replace sensitive columns with synthetic data that preserves the variation and covariation with covariates
• Uses a model-based approach, similar to imputation, to produce synthetic test scores (a minimal sketch follows below)
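A minimal sketch of the model-based idea: fit a model for the sensitive score on the non-sensitive covariates, then replace the real scores with draws from that fitted model. The regression specification, column names, and simulated data below are assumptions for illustration, not the pilot's actual models:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(42)

# Hypothetical micro-data: the covariates stay real, the score is sensitive.
n = 500
df = pd.DataFrame({
    "female":  rng.integers(0, 2, n),
    "poverty": rng.integers(0, 2, n),
    "pretest": rng.normal(500, 40, n),
})
df["score"] = (520 + 8 * df["female"] - 25 * df["poverty"]
               + 0.6 * (df["pretest"] - 500) + rng.normal(0, 30, n))

# Fit a simple linear model for the sensitive column on the covariates.
fit = smf.ols("score ~ female + poverty + pretest", data=df).fit()

# Synthetic scores = model prediction + noise with the residual SD, so the
# synthetic column roughly preserves the covariation with the covariates
# and the residual variation.
sigma = np.sqrt(fit.scale)
df["score_synth"] = fit.predict(df) + rng.normal(0, sigma, n)

print(df[["score", "score_synth"]].describe().round(1))
```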
Two different tries
Simple model
• Race, gender, and teacher effects
• Fast to implement

Complex model
• Race, gender, teacher, and district effects
• Pretests
• Race by teacher and race by district effects
• Gender by teacher and gender by district effects
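One hedged reading of the two specifications as model formulas; the exact terms, and whether the effects were treated as fixed or random, are assumptions, since the slide only lists the factors involved:

```python
# Illustrative formula strings only; the pilot's actual estimation details
# (software, fixed vs. random effects) are not shown in the slides.
simple_model = "score ~ race + gender + C(teacher)"

complex_model = ("score ~ race + gender + pretest_g4 + pretest_g3"
                 " + C(teacher) + C(district)"
                 " + race:C(teacher) + race:C(district)"
                 " + gender:C(teacher) + gender:C(district)")
```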
Results of Pilot
• Simple model-based synthetic data estimates the mean
Results of Pilot
• Simple model-based synthetic data doesn't do so well on the variance: gross underestimation
Results of Pilot
• Complex model-based synthetic data does OK on estimating the mean
Results of Pilot
• But the complex model-based synthetic data over-estimates the variance
PILOT TEST ON MEAN DIFFERENCES
Results of Pilot
• Simple model-based synthetic data underestimates the standard error of the Black/White difference
Results of Pilot
• Complex model-based synthetic data overestimates the standard error
Pilot test of synthetic data
• These are not the only options for models
• There are also some technical details about the simulation procedures that we are glossing over; we have more options there as well
Alternatives to Data Redaction
• We are examining two other alternatives to data redaction
– Masking, perturbing, and coarsening the data
– NORC's X-ID system of micro grouping (micro-aggregation; http://xid.norc.org)
NORC XID
Thank you!
E. C. Hedberg
[email protected]
[email protected]
773 909 6801