
Tobit Models in Social Science Research

Some Limitations and a More General Alternative

DOUGLAS A. SMITH
University of Maryland

ROBERT BRAME
University of South Carolina

SOCIOLOGICAL METHODS & RESEARCH, Vol. 31, No. 3, February 2003, 364-388
DOI: 10.1177/0049124102239080
© 2003 Sage Publications

The use of tobit models to study censored and limited dependent variables has become increasingly common in applied social science research over the past two decades. Importantly, the likelihood function for a tobit model involves two distinct components: (1) the process that determines whether the outcome variable is fully observed or not and (2) the process that determines the score on the dependent variable for individuals whose outcome is fully observed. One limitation of the tobit model is its assumption that the processes in both regimes of the outcome are equal up to a constant of proportionality. In this article, the authors use Monte Carlo simulation evidence and an empirical example to illustrate the restrictive nature of this assumption and the consequences of disproportionality for the tobit model. They conclude that an alternative model proposed by Cragg should replace the tobit model as the estimator of first resort in situations such as those considered here.

Keywords: tobit model; Monte Carlo simulation study; Cragg model.

Social scientists and public policy researchers often confront the task of developing inferences from samples where the dependent variable of interest is only partially observed. For example, there is a rich literature on the correlates of sentence severity (see, e.g., Schmidt and Witte 1984, chap. 5). Sentence severity is often defined as the length of time incarcerated, which is easy to observe for convicted offenders who actually are incarcerated. For those offenders who are convicted and not incarcerated, however, sentence severity is frequently fixed at zero despite variation in other kinds of sentences such as probation or fines.


If our substantive interest is directed toward the distribution of sentence severity in a population of convicted persons, then complications arise. The essential issue is that for those individuals not sentenced to prison, all we can observe is that sentence severity does not reach the threshold of imprisonment. Actual sentence severity given that an individual is not sentenced to prison is unobserved. Put differently, sentence severity is said to be observed for those who are sentenced to prison and censored for those who are not.

It is well known that use of standard tools such as estimating an ordinary least squares regression equation on the subsample of individuals above a censoring threshold will produce invalid inferences (see, e.g., Greene 1997:949-58; Maddala 1983:165-70). Because of this problem, researchers often use the tobit estimator (Tobin 1958; Amemiya 1984:3-7) with censored dependent variables. A key feature of the tobit estimator is that it is based on two important pieces of information for each individual: (1) the probability that an individual's score on the dependent variable is above the censoring threshold and (2) the density of the dependent variable given that an individual scores above the censoring threshold. By explicitly incorporating both pieces of information into the likelihood function, the tobit estimator provides consistent estimates of parameters governing the distribution of a censored normal random outcome variable. Because the tobit estimator has desirable properties of consistency and asymptotic efficiency, many researchers have relied on it to study variation in outcomes that can be plausibly viewed as censored normal random variables.

A brief review of the literature reveals that tobit models have indeed been applied to a wide variety of problems. Criminologists, for example, have implemented tobit models in a number of settings. As suggested above, studies of criminal sentencing outcomes have used tobit models to examine correlates of sentence severity (see, e.g., Rhodes 1991; Albonetti 1997). Several studies have used tobit models to study the frequency of criminal offenses (Witte 1980; Paternoster and Triplett 1988) and scales measuring offending intentions (Klepper and Nagin 1989). Rhodes (1986) used a tobit model to study variation in the time to recidivism among a sample of federal offenders on community supervision.


Tobit models have also been applied to psychometric problems (see, e.g., Nagin and Tremblay 1999), public safety problems such as the severity of head and neck injuries sustained in motorcycle accidents (Goldstein 1986), blood alcohol levels (Keane, Maxim, and Teevan 1993), and a variety of other economic and sociological outcomes ranging from the number of extramarital affairs (Fair 1978) to charitable contributions (Reece 1979; see the discussion in Roncek 1992; Long 1997:196-216; and Amemiya 1984:7-8 for additional background).

As suggested above, an important first principle on which the tobit estimator rests is the assumption that the dependent variable is a normally distributed but incompletely observed outcome. It follows that the conventional tobit estimator assumes that the process generating variation in the censoring outcome (i.e., whether one's score on the true outcome exceeds the censoring threshold) is the same as the process that generates variation in the dependent variable, conditional on our being able to observe the outcome (Schmidt and Witte 1984:56-57). Returning to the sentence severity example discussed above, it is plausible that some of the factors that influence decisions about whether to imprison an offender or not are different from the factors that influence decisions about the length of the sentence given that imprisonment occurs. Unfortunately, within the framework of the conventional tobit estimator, it is not possible to address these kinds of issues. Instead, tobit parameter estimates essentially strike a compromise between the estimates that would be obtained if variation in the two components of the variable were studied separately (Roncek 1992:504-505).

Lin and Schmidt (1984) have discussed the implications of the restrictive assumptions surrounding the tobit estimator and have suggested a Lagrange multiplier test statistic that can be used to assess its plausibility in particular applications. They also noted that an alternative specification of the tobit model from Cragg (1971) can be used to form a likelihood ratio test of the hypothesis that the tobit proportionality assumption is consistent with the data. This latter point is a central theme of this article.

The rest of our article is organized as follows. First, we provide an overview of the structure of the tobit and Cragg specifications. We then turn our attention to a Monte Carlo simulation study of the behavior of the tobit estimator when the proportionality assumption is reasonable and when it fails.


Based on this study, we argue that estimation of a tobit model should always be accompanied by estimation of a Cragg specification. Our conclusion is based on two key results from earlier literature and the simulation evidence presented in this article: (1) The Cragg specification explicitly accommodates the possibility that different processes generate the censoring outcome as well as the observed variation in the outcome conditional on no censoring, and (2) since the Cragg specification includes the tobit estimator as a special case, the Cragg specification reduces to the simpler tobit model when the tobit assumption is met. In this latter case, the only disadvantage of the Cragg specification in comparison to a tobit specification is a loss of efficiency. By imposing appropriate constraints on the Cragg specification, however, this disadvantage can be avoided. The simulation study indicates that the validity of these constraints can be investigated by the application of standard likelihood ratio tests since Cragg specifications are straightforward generalizations of the tobit model. Moreover, these tests are easy to conduct using standard statistical software programs such as SAS, Limdep, and Stata.

Finally, to illustrate this test and the different insights that the Cragg and tobit models can provide, we analyze data from the 1945 Philadelphia Birth Cohort study conducted by Wolfgang, Figlio, and Sellin (1972). Specifically, we use the tobit model to examine the relationship between offending seriousness and race. We then use the Cragg specification to address the same problem. The analysis reveals that the tobit and Cragg specifications lead to different inferences about the association between race and offense seriousness.

OVERVIEW OF TOBIT AND CRAGG ESTIMATORS

As noted above, the Cragg specification is a more general estima-tion framework than the tobit estimator is; in fact, it includes thetobit estimator as a special case. Although the details of this rela-tionship are described in the appendix, we briefly introduce somekey parameters here. First, the relationship between our observedoutcome variable, yi , and the latent outcome variable of interest is

yi ={

y∗i if y∗

i > 0

0, otherwise


for each of the $i = 1, 2, \ldots, N$ observations. Under a tobit model, the relationship between a vector of predictor variables, $x_i$, and the latent outcome variable, $y_i^*$, is given by

$$
y_i^* = x_i'\beta + \sigma u_i,
$$

where $u_i$ is a random, standard normal disturbance term, β is a vector of tobit regression coefficients, and σ is the standard deviation of the disturbance term. Within this framework, the probability that an individual is not censored (i.e., that the outcome is fully observed) can be estimated by

$$
\Pr(y_i^* > 0 \mid x_i; \beta, \sigma) = \Phi\!\left(\frac{x_i'\beta}{\sigma}\right),
$$

where $\Phi(\cdot)$ is the standard cumulative normal distribution function, and the expected value of the outcome variable conditional on the individual's outcome score being above the censoring limit of zero is

$$
E(y_i \mid y_i > 0, x_i; \beta, \sigma) = x_i'\beta + \sigma\left(\frac{\phi(x_i'\beta/\sigma)}{\Phi(x_i'\beta/\sigma)}\right),
$$

where $\phi(\cdot)$ is the standard normal probability density function (Long 1997:208). As these equations suggest, the same set of tobit coefficients governs the probability distribution of the censoring outcome and the expected value of the outcome variable given that an individual's outcome score is observed. The Cragg specification provides us with a way to relax this constraint. Within this framework, the probability that an individual is observed rather than censored is given by

$$
\Pr(y_i^* > 0 \mid x_i; \beta_P) = \Phi(x_i'\beta_P),
$$

where the subscript $P$ denotes that β is a vector of probit regression coefficients, and the expected value of the outcome variable given that it is above the censoring threshold of zero is

$$
E(y_i \mid y_i > 0, x_i; \beta_T, \sigma_T) = x_i'\beta_T + \sigma_T\left(\frac{\phi(x_i'\beta_T/\sigma_T)}{\Phi(x_i'\beta_T/\sigma_T)}\right),
$$

where the subscript $T$ denotes that β is a vector of truncated normal linear regression coefficients.


As a practical matter, then, the Cragg specification relaxes the constraint that the coefficients governing whether or not an individual is censored are directly proportional to the coefficients governing the score on the outcome variable given that an individual is not censored. In addition, the Cragg specification reduces to the tobit specification when the following equality holds:

$$
\beta_P = \frac{\beta_T}{\sigma_T}.
$$

A likelihood ratio test of the validity of this restriction is formed by multiplying the difference between the log-likelihoods for a tobit model and a Cragg model by two and referring this statistic to a chi-squared distribution with degrees of freedom equal to the number of restrictions imposed. We have provided additional details on the structure of these log-likelihood functions and the likelihood ratio test in the appendix.
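To make the mechanics concrete, a minimal sketch of this calculation follows; it is our own illustration rather than code from the article. For the numbers we borrow the two maximized log-likelihoods reported later in Table 6 for the Philadelphia data, and the variable names are ours.

```python
from scipy.stats import chi2

# Maximized log-likelihoods from the two fits (here, the values reported in Table 6).
loglik_tobit = -13474.88   # restricted model: beta_P = beta_T / sigma_T
loglik_cragg = -12847.77   # unrestricted model: probit + truncated regression

# Likelihood ratio statistic: twice the difference in log-likelihoods.
lr_stat = 2.0 * (loglik_cragg - loglik_tobit)

# Degrees of freedom = number of restrictions imposed by the tobit model
# (one per coefficient in beta_P; two in the Table 6 example: intercept and race).
df = 2
p_value = chi2.sf(lr_stat, df)

print(f"LR = {lr_stat:.2f}, df = {df}, p = {p_value:.4g}")   # LR = 1254.22
```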

The above results highlight the structural congruence of the tobit model and the more general Cragg specification. Like Lin and Schmidt (1984:174), we believe that reliance on the tobit estimator without evaluating results from the Cragg specification is, in general, ill advised. In the next section, we examine simulation evidence that illustrates some of the general consequences of a tobit-centered analysis strategy in comparison to a Cragg-centered approach.

SIMULATION STUDY

To conduct our simulation experiment, we generated data for a censoring outcome, $c_i^*$, and for a substantive outcome, $y_i^*$, given that an observation exceeded the censoring threshold. We also generated two predictor variables, $x_i$ and $z_i$. The details of this simulation procedure are provided in the appendix.

Our analysis involves four distinct simulation studies. First, we consider the case in which the tobit proportionality assumption is met. We then turn to the case in which the effects of the predictor variables differ between the censoring and truncated regression processes but the intercepts are constrained to be equal. Next, we reverse this case and consider what happens when the intercepts differ but the effects of the predictor variables are constrained to be the same (but for the constant of proportionality) across the two processes. Finally, we consider the case in which both the intercepts and the regression coefficients have different effects across the censoring and substantive outcomes.


TABLE 1: Comparison of Probit, Truncated Regression, and Tobit Coefficients When True Data-Generating Process Is Consistent With Tobit

                                                Tobit                      Cragg
                                          Mean of    SD of         Mean of    SD of
Parameter                    True Value  Estimates  Estimates     Estimates  Estimates
Probit effects
  Intercept (α_p)                0.00       --         --            0.001     0.032
  Coefficient on x (β_p)         0.20       --         --            0.201     0.031
  Coefficient on z (γ_p)        −0.20       --         --           −0.201     0.032
Truncated normal/tobit effects
  Intercept (α_T)                0.00      0.002      0.077         −0.021     0.451
  Coefficient on x (β_T)         0.50      0.503      0.067          0.507     0.169
  Coefficient on z (γ_T)        −0.50     −0.497      0.069         −0.484     0.155
  σ                              2.50      2.498      0.072          2.499     0.180

SIMULATION 1: TOBIT PROPORTIONALITY ASSUMPTION IS MET

In our first simulation experiment, we confront the case in which the tobit and Cragg specifications should yield virtually identical results. Table 1 presents the true values associated with the parameters for the equations with $c_i^*$ and $y_i$ described above. A summary of the results of the simulation study is also presented in Table 1. Since the truncated regression coefficients normalized by σ are equal to the probit coefficients, we expect the tobit and Cragg specifications to yield the same results (on average). Indeed, application of the likelihood ratio test described above rejects the tobit hypothesis of proportional coefficients with only chance frequency (5.2% for this particular set of simulations; all tests were conducted with a two-tailed 95% confidence level). Table 1 indicates that all of the estimators provide consistent estimates of the parameters used to generate the data. A review of the empirical standard deviations of the coefficients in Table 1 highlights the one major advantage of using the tobit estimator when the proportionality assumptions underlying the tobit model are met: The tobit estimator provides more efficient estimates (i.e., the sample-to-sample variability of the tobit estimates is smaller than the variability of the truncated regression coefficients).


TABLE 2: Comparison of Probit, Truncated Regression, and Tobit Estimates When Intercepts Are Equal But Regression Coefficients Differ

                                                Tobit                      Cragg
                                          Mean of    SD of         Mean of    SD of
Parameter                    True Value  Estimates  Estimates     Estimates  Estimates
Probit effects
  Intercept (α_p)               −0.50       --         --           −0.501     0.038
  Coefficient on x (β_p)         0.50       --         --            0.501     0.038
  Coefficient on z (γ_p)        −0.20       --         --           −0.199     0.036
Truncated normal/tobit effects
  Intercept (α_T)               −1.25     −1.161      0.109         −1.275     0.999
  Coefficient on x (β_T)         0.00      0.960      0.080         −0.003     0.230
  Coefficient on z (γ_T)        −0.50     −0.453      0.074         −0.502     0.231
  σ                              2.50      2.319      0.090          2.488     0.300

SIMULATION 2: INTERCEPTS EQUAL BUT PREDICTORS HAVE DIFFERENT EFFECTS

In Table 2, we begin to depart from the assumptions of the tobit estimator by allowing the effect of $x_i$ to differ between the probit and truncated processes. In other words, Table 2 relaxes the constraint that $\beta_p = \beta_T/\sigma$. Examination of this table suggests that the tobit estimator would, on average, lead us to the conclusion that $x_i$ is associated with both the censoring outcome and the distribution of $y_i$ above the limit when, in fact, only the former relationship actually exists. Separate study of the truncated regression and probit coefficients in Table 2, on the other hand, would, on average, lead us to the correct conclusion: $x_i$ is associated with the censoring outcome but not the conditional distribution of $y_i$.

SIMULATION STUDY 3: INTERCEPTS DIFFER BUT PREDICTORS HAVE SAME EFFECTS

In Table 3, we consider the interesting case in which the probit and truncated regression coefficients are proportional but the intercepts on the two outcomes are not. This analysis shows that the normalized tobit coefficients closely match the probit coefficients but they do not approach the true truncated regression coefficients. As in the previous analyses, Table 3 suggests that separate estimation of the probit and truncated regression models will yield consistent estimates of the parameters used to generate the data while the tobit estimates could be quite misleading.


TABLE 3: Comparison of Probit, Truncated Regression, and Tobit Estimates When Intercepts Differ But Regression Coefficients Are Equal

                                                Tobit                      Cragg
                                          Mean of    SD of         Mean of    SD of
Parameter                    True Value  Estimates  Estimates     Estimates  Estimates
Probit effects
  Intercept (α_p)               −1.00       --         --           −1.002     0.40
  Coefficient on x (β_p)         0.20       --         --            0.196     0.402
  Coefficient on z (γ_p)        −0.20       --         --           −0.199     0.39
Truncated normal/tobit effects
  Intercept (α_T)                0.00     −3.686      0.247          0.026     0.704
  Coefficient on x (β_T)         0.50      0.743      0.148          0.502     0.269
  Coefficient on z (γ_T)        −0.50     −0.751      0.139         −0.489     0.271
  σ                              2.50      3.701      0.167          2.468     0.275

SIMULATION STUDY 4: INTERCEPTS DIFFER AND PREDICTORS HAVE DIFFERENT EFFECTS

Our final simulation study, presented in Table 4, considers the case in which the intercepts and the coefficients on $x_i$ differ between the censoring outcome and the conditional distribution of $y_i$. The tobit estimate of the effect of $x_i$ would be particularly misleading in this case since the average estimated coefficient of .076 is driven entirely by the fact that there is no association between $x_i$ and the censoring outcome; unknown to the analyst who estimates only the tobit model is the fact that the true effect of $x_i$ on the conditional distribution of $y_i$ is identical in absolute magnitude to the effect of $z_i$ on that same outcome. Separate estimation of the probit and truncated regression coefficients, on the other hand, would again produce consistent estimates of all parameters.


TABLE 4: Comparison of Probit, Truncated Regression, and Tobit Estimates When Intercepts and Regression Coefficients Differ

                                                Tobit                      Cragg
                                          Mean of    SD of         Mean of    SD of
Parameter                    True Value  Estimates  Estimates     Estimates  Estimates
Probit effects
  Intercept (α_p)               −1.00       --         --           −1.004     0.038
  Coefficient on x (β_p)         0.00       --         --            0.003     0.037
  Coefficient on z (γ_p)        −0.20       --         --           −0.201     0.040
Truncated normal/tobit effects
  Intercept (α_T)                0.00     −3.691      0.239          0.003     0.748
  Coefficient on x (β_T)         0.50      0.076      0.132          0.499     0.284
  Coefficient on z (γ_T)        −0.50     −0.757      0.145         −0.496     0.274
  σ                              2.50      3.696      0.162          2.458     0.300

SUMMARY OF SIMULATION RESULTS

In sum, these simulation results suggest that the cost of at least investigating whether a tobit model is consistent with the data is low. Therefore, we believe that researchers using a tobit model should conduct a test for whether the Cragg specification is more consistent with the available data. If the tobit specification turns out to be consistent with the data, then it will be the more efficient estimator. But since the tobit specification can be estimated by simply imposing parameter constraints on the Cragg specification, the arguments in favor of using the Cragg specification as an estimator of first resort would seem to be quite strong. To further illustrate the application of this strategy, we turn next to a case study in which both the Cragg and tobit specifications are estimated on the same data set and the use of a likelihood ratio test indicates that a Cragg specification is preferable.

A CASE STUDY

OVERVIEW

An enduring controversy within the field of criminology is whether researchers should disaggregate different components of individual criminal careers.


For example, some researchers believe that the processes leading to the initiation or onset of criminal offending may differ from the processes that produce variation in the quality or quantity of offending that occurs after onset (see, e.g., Blumstein et al. 1986). At the same time, other researchers believe that differences in criminality are more a matter of degree than of kind and that such distinctions are unnecessary and distracting (see, e.g., Gottfredson and Hirschi 1990). As Nagin and Smith (1990) have shown, variations on a tobit framework can profitably be used to investigate the propriety of these competing claims.

In this section, we continue to press forward on this issue by examining the association between race and both the initiation and the seriousness of criminal offending. A major theme in the literature on criminal careers is that demographic correlates of crime tend to be more strongly associated with initiation of offending than with various dimensions of the criminal career (i.e., frequency, duration, and seriousness) after onset has already occurred (Blumstein and Cohen 1987). Our analysis will provide a test of this hypothesis based on data from the 1945 Philadelphia birth cohort study conducted by Marvin Wolfgang and his colleagues (Wolfgang et al. 1972). This data set includes police contact information through age 17 on all 9,944 boys who were born in Philadelphia, Pennsylvania, in 1945 and who resided in the city between the ages of 10 and 17.

For each recorded police contact, the investigators created a seriousness score that takes physical harm as well as property loss into account. Individuals who did not have any police contacts were assigned a seriousness score of zero, while individuals with at least one contact were assigned a seriousness score that represents the sum of the seriousness scores associated with each of their individual police contacts. For additional details on the calculation of this score, the reader is referred to Wolfgang et al. (1972) and Sellin and Wolfgang (1964). Previous analyses of the seriousness score in these data have revealed that Blacks have higher average offense seriousness scores than do Whites. Nevertheless, a formal test of whether the race differences in offending participation are proportional to those in offense seriousness (i.e., whether a single statistical model can explain both participation and seriousness conditional on participation) has not yet been conducted.

Because of the highly skewed nature of this score, our analysis will focus on the distribution of the natural logarithmic transformation of this score, with a constant of 1 added to each individual's raw score to avoid evaluating the natural logarithm of zero (i.e., $s_i^T = \log_e[s_i + 1]$). Our analysis is confined to the 9,787 boys who were identified as Black (n = 2,745) or White (n = 7,042) by the investigators. For purposes of our regression model, we set our race variable equal to 1.0 for Blacks and 0.0 for Whites. Table 5 presents some descriptive statistics associated with the seriousness score (i.e., $s_i^T$) for both the Blacks and Whites in our sample.


TABLE 5: Descriptive Statistics From Philadelphia Cohort Data (N = 9,787)

                                                Black Males     White Males
Item                                            (n = 2,745)     (n = 7,042)
Number of individuals with s^T > 0                  1,449           2,017
Percentage of individuals with s^T > 0               52.8            28.6
Mean of s^T including cases with s^T = 0             2.094           0.937
Mean of s^T excluding cases with s^T = 0             3.967           3.272
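For reference, the transformation and the summaries in Table 5 amount to the following minimal sketch; the raw seriousness scores and race indicator used here are hypothetical stand-ins, since the cohort data themselves are not reproduced in this article.

```python
import numpy as np

# Hypothetical stand-ins for the raw seriousness scores and a race indicator (1 = Black).
s = np.array([0.0, 0.0, 3.0, 12.0, 0.0, 1.0])
black = np.array([1, 0, 1, 1, 0, 0])

s_T = np.log(s + 1.0)    # the transformation s^T = log_e(s + 1)

for grp, label in [(1, "Black"), (0, "White")]:
    v = s_T[black == grp]
    print(label,
          int((v > 0).sum()),                  # number of individuals with s^T > 0
          round(100.0 * (v > 0).mean(), 1),    # percentage with s^T > 0
          round(v.mean(), 3),                  # mean of s^T including zeros
          round(v[v > 0].mean(), 3))           # mean of s^T excluding zeros
```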

ANALYSIS RESULTS

In Table 6, we present the results of our analysis, which include the parameter estimates from three different statistical models. Model 1 imposes the constraint that both the intercept and race parameter estimates are equal up to the constant of proportionality, σ, between the probit and truncated regression components. Model 2 relaxes this constraint and allows for the possibility that the process that leads to at least one police contact is not proportional to the process governing offending seriousness given that a contact occurred. In other words, Model 1 is the tobit specification while Model 2 is a Cragg specification. All of these models suggest that Blacks are more likely than Whites to have at least one contact and that Blacks accumulate higher seriousness scores than Whites do. With this set of results in hand, we now turn to our substantive question about whether the association between race and the probability of an initial contact is proportional to the association between race and seriousness of offending given that an initial contact has occurred.

Table 6 also presents the results of a likelihood ratio test. This test indicates that the model that allows for differences in both the race coefficients and the intercepts is most consistent with the Philadelphia data. In addition, the results associated with this model suggest that the association between race and the likelihood of an initial contact is actually somewhat stronger than the association between race and offense seriousness. It could be argued that these likelihood ratio tests are extremely powerful because they are based on a large sample size and that, as such, they place an unnecessary emphasis on a small difference.


TABLE 6: Comparison of Probit, Truncated Regression, and Tobit Estimates With Race and Log Offense Seriousness Scores From 1945 Philadelphia Birth Cohort Study (N = 9,787)

                                    Tobit Specification       Cragg Specification
Parameter                           Estimate       SE         Estimate       SE
Probit effects
  Intercept/σ                          --           --         −1.069       0.034
  Race coefficient/σ                   --           --          1.202       0.058
Truncated normal/tobit effects
  Intercept                         −2.232        0.083         3.054       0.050
  Race coefficient                   2.764        0.114         0.817       0.071
  σ                                  4.38         0.061         1.896       0.029
Log-likelihood                   −13474.88                  −12847.77
Likelihood ratio test of H0 (the tobit specification is consistent with the data): χ² = 1254.22, df = 2, p < .05

To address this issue, we present some additional calculations in Table 7. First, we assess the extent to which our tobit estimates are actually being driven by variation in censoring or variation in the outcome conditional on censoring. We calculate this using equation (2) (from our appendix) as described by McDonald and Moffitt (1980:319). The results of this analysis suggest that only a minority of the total variation in the outcome is due to the variation that is above the censoring limit of zero. For Whites, the proportion is 26.7% while, for Blacks, it is 39.1%. This tells us that the tobit estimates are most heavily informed by variation in the censoring outcome rather than by variation in the logarithm of the seriousness scores above the censoring limit. Based on this evidence, we conclude that our tobit estimates are heavily influenced by variation in censoring.


TABLE 7: Interpretation of Race-Offense Seriousness Association From Tobit and Cragg Specifications

Description                                      Estimate
Tobit-based estimates
  p(s^T > 0 | Race = Black)                        0.548
  p(s^T > 0 | Race = White)                        0.305
  Difference                                       0.243
  E(s^T | s^T > 0, Race = Black)                   3.696
  E(s^T | s^T > 0, Race = White)                   2.797
  Difference                                       0.899
Cragg-based estimates
  p(s^T > 0 | Race = Black)                        0.528
  p(s^T > 0 | Race = White)                        0.286
  Difference                                       0.242
  E(s^T | s^T > 0, Race = Black)                   3.967
  E(s^T | s^T > 0, Race = White)                   3.272
  Difference                                       0.695

Second, we turn to the difference in the probability of an observation being above the censoring limit when calculated from the maximum likelihood probit coefficients in comparison to the calculation based on the normalized maximum likelihood tobit coefficients. Here, we find very little difference between inferences based on the probit and tobit estimates. Based on the probit coefficients, the effect of race on censoring is equal to $p(s^T > 0 \mid x = 1; \beta_p) - p(s^T > 0 \mid x = 0; \beta_p) = .528 - .286 = .242$, while the tobit estimates yield $p(s^T > 0 \mid x = 1; \beta_{Tobit}, \sigma) - p(s^T > 0 \mid x = 0; \beta_{Tobit}, \sigma) = .548 - .305 = .243$. Since we already know that the tobit estimates are heavily weighted toward the probit part of the likelihood function, this is not a surprising result. In sum, we do not see an important difference between the probit and tobit models when studying variation in the censoring outcome after conditioning on race.

Next, we calculate the difference in the overall expectation of the outcome variable between Blacks and Whites from the tobit and the Cragg specifications. We rely on equations (1) and (3) (from our appendix) to calculate the expected values. From the tobit model, the difference is

$$
E(s^T \mid s^T > 0, x = 1; \beta_{Tobit}, \sigma) - E(s^T \mid s^T > 0, x = 0; \beta_{Tobit}, \sigma) = 3.696 - 2.797 = 0.899,
$$


while the same calculation based on parameter estimates from the Cragg specification is given by

$$
E(s^T \mid s^T > 0, x = 1; \beta_p, \beta_T, \sigma) - E(s^T \mid s^T > 0, x = 0; \beta_p, \beta_T, \sigma) = 3.967 - 3.272 = 0.695,
$$

which represents a 23% difference between the effects implied by the two specifications.
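As a check on these calculations, the sketch below reproduces the Table 7 quantities, up to rounding of the published coefficients, from the point estimates reported in Table 6. The helper functions simply evaluate the tobit censoring probability and the conditional expectations in equations (1) and (3) of the appendix; the function names are ours.

```python
from scipy.stats import norm

def prob_above_limit(xb, sigma):
    """P(s^T > 0) implied by the tobit estimates: Phi(x'b / sigma)."""
    return norm.cdf(xb / sigma)

def cond_mean(xb, sigma):
    """E(s^T | s^T > 0) = x'b + sigma * phi(x'b/sigma) / Phi(x'b/sigma)."""
    z = xb / sigma
    return xb + sigma * norm.pdf(z) / norm.cdf(z)

# Rounded point estimates from Table 6.
b0_tob, b1_tob, s_tob = -2.232, 2.764, 4.38    # tobit: intercept, race, sigma
b0_tr, b1_tr, s_tr = 3.054, 0.817, 1.896       # Cragg truncated regression

for race, label in [(1.0, "Black"), (0.0, "White")]:
    xb_tob = b0_tob + b1_tob * race
    xb_tr = b0_tr + b1_tr * race
    print(label,
          round(prob_above_limit(xb_tob, s_tob), 3),   # tobit p(s^T > 0)
          round(cond_mean(xb_tob, s_tob), 3),          # tobit E(s^T | s^T > 0)
          round(cond_mean(xb_tr, s_tr), 3))            # Cragg E(s^T | s^T > 0)
```

The Cragg-based probabilities in Table 7 (.528 and .286) need no separate calculation here: with race as the only regressor, the probit component of the Cragg specification reproduces the observed participation proportions shown in Table 5.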

SUMMARY OF EMPIRICAL ANALYSIS

There are several important points to make about these results. First, by using the likelihood ratio test from a comparison of the tobit and Cragg specifications, it appears that the relationship between race and onset of offending is not proportional to the relationship between race and seriousness given that onset has occurred. In other words, our test of the tobit specification suggests that it is not consistent with the Philadelphia data.

Second, once we rejected the tobit proportionality hypothesis, we investigated the substantive differences between the tobit and Cragg specifications. This analysis revealed virtually no difference in the estimated probability distribution of our censoring variable within each race group across the two model specifications. Moreover, in both instances, the Black-White difference in the probability of participation was estimated to be about 0.24. We commented that these results are not particularly surprising because the majority of the information being contributed to the likelihood function for both Blacks and Whites is participation information rather than seriousness information. Thus, the tobit results are heavily weighted by the probit component of the likelihood function. An important implication of this result is that a comparison of tobit and probit coefficients will not necessarily say very much about differences between probit and truncated regression coefficients.

This becomes even clearer when we examine differences between the expected offense severity calculated via the tobit model and the truncated regression model. First, the expected offense seriousness scores are uniformly higher under the truncated regression model than they are under the tobit model.


Second, the difference between Blacks and Whites is overstated by about 23% in the tobit model compared to the Black-White difference in the Cragg specification. In this case, the Cragg specification turns out to be an important device for the development of valid inferences about between-race differences in offense participation and seriousness patterns.

DISCUSSION AND CONCLUSIONS

Tobit models have played a prominent role in social science research over the past two decades. In this article, we have revisited some of the assumptions on which the tobit estimator is based. This review shows that the validity of any implementation of a tobit estimator depends on whether the process that generates variation in censoring is proportional to the process that generates variation in the distribution of the dependent variable conditional on its being observed. When the tobit proportionality assumption is violated, the tobit estimator can be highly misleading. Specifically, parameter estimates from the tobit model will tend to be compromises between estimates that would be obtained from separate estimation of probit and truncated regression models (i.e., Cragg's specification). Whenever the differences between what would be obtained in the probit and truncated regression models are small, then we can expect the bias that is induced by the tobit estimator to be small. Indeed, when the tobit specification is consistent with the data, our simulations suggest that gains in efficiency over the Cragg specification can be quite important. Since we can obtain the tobit specification by simply imposing parameter constraints on the Cragg specification and since we are likely to encounter substantial bias with tobit when the proportionality assumption is not met, it seems unlikely that the use of the tobit model by itself could be justified.

There are, of course, a number of other potential pitfalls that confront researchers who use tobit models that we have not addressed in this article. Nonnormality and heteroscedasticity in the latent variable on which the tobit estimator is based, for example, appear to have particularly adverse consequences for the use of tobit estimators. As Greene (1997:967-72) has noted, these kinds of problems tend to have mostly efficiency implications for ordinary least squares estimators. With tobit estimators, they lead to substantial bias and inconsistency.


In addition, the tobit estimator assumes that the censoring mechanism and the continuous realizations of the observed outcome can factor into independent likelihood functions. If there is some type of sample selection process at work, however, this type of factorization may not make very much sense. In its place, a framework that allows for some type of correlation between the selection mechanism and the continuous outcome would be desirable, but the identification demands of these types of models are great (see, e.g., Little and Rubin 1987). In short, there are a number of complicated issues associated with tobit models, and we have addressed only one of those issues in this article.

Nevertheless, based on our review of the literature and the evidence presented in this article, we conclude that the tobit model should not be used until both probit and truncated regression estimates have been examined separately. Ideally, the likelihood ratio test discussed earlier in this article would be used to decide whether the tobit model is appropriate or whether the more complex two-equation Cragg specification is preferable. In closing, we note that until recently, the availability of software to estimate the truncated regression model may have presented obstacles to many researchers. This currently seems to be less of a problem since a number of major statistical software packages such as SAS, Limdep, and Stata now provide tools that can be easily used to estimate tobit, truncated regression, and probit models. We believe that social science research will benefit from increased use of these tools in the years ahead.

APPENDIX

DETAILS OF TOBIT AND CRAGG SPECIFICATIONS

Consider the case in which the outcome of interest is a standard normal random variable censored at zero. The uncensored version of the normal random variable has a mean of zero and a standard deviation of one. Our objective is to develop an inference about population parameters associated with this outcome on the basis of statistics computed within a representative sample of that population. Within this sample, there are two kinds of observations.


For those who are censored, the outcome of interest is not fully observed. We know that the censored individuals have scores on the dependent variable that do not exceed the censoring threshold, but we do not know anything else about their outcome. For those who have scores on the dependent variable that do exceed the censoring threshold, however, we are able to observe their actual scores on the dependent variable. In sum, we observe the following outcome:

$$
y_i =
\begin{cases}
y_i^* & \text{if } y_i^* > 0 \\
0 & \text{otherwise.}
\end{cases}
$$

For convenience, the censoring threshold is set to zero, but in general, this need not be the case. Next, assume that we are interested in studying the relationship between the outcome and a set of covariates that we denote as $x$. Specifically, we wish to estimate the parameters of the equation

$$
y_i^* = x_i'\beta + \sigma u_i
$$

for each of the $i = 1, 2, \ldots, N$ individuals, where the $u_i$ are independently drawn from the $N(0, 1)$ distribution and β is a vector of regression coefficients conformable for multiplication with $x_i$. Because the $y_i^*$ are observed only for uncensored cases, it is necessary to break the problem of obtaining these estimates into two parts. First, for each uncensored case, we need to calculate the probability that $y_i^*$ is above the censoring threshold:

$$
p(y_i > 0 \mid x_i; \beta, \sigma) = \Phi(x_i'\beta/\sigma),
$$

where $\Phi(\cdot)$ is the cumulative normal distribution function. Second, for each censored case, we need to calculate the probability that $y_i \le 0$ (i.e., the probability that $y_i^*$ is censored at zero):

$$
p(y_i = 0 \mid x_i; \beta, \sigma) = \Phi(-x_i'\beta/\sigma)
$$

(see, e.g., Greene 1997:970). A more general version of this formula that allows for censoring at other points besides zero is discussed by Long (1997:205). Under this version, we have

$$
p(y_i^* \le \tau \mid x_i; \beta, \sigma, \tau) = \Phi\!\left(\frac{\tau - x_i'\beta}{\sigma}\right).
$$


Third, for each uncensored case, the density of $y_i$ given that the score on the outcome is above the censoring threshold (fixed at zero) is calculated by

$$
f(y_i \mid y_i > 0, x_i; \beta, \sigma) = \left(\frac{1}{\Phi(x_i'\beta/\sigma)}\right)\left(\frac{1}{\sqrt{2\pi}\,\sigma}\right)\exp\!\left(-\frac{1}{2\sigma^2}(y_i - x_i'\beta)^2\right)
$$

(see, e.g., Johnson and Kotz 1970:81; Lin and Schmidt 1984:174). Under the more general version allowing for any censoring threshold, we have

$$
f(y_i \mid y_i > \tau, x_i; \beta, \sigma, \tau) = \frac{f(y_i^* \mid x_i; \beta, \sigma)}{\Pr(y_i^* > \tau)} = \frac{\frac{1}{\sigma}\,\phi\!\left(\frac{y_i^* - x_i'\beta}{\sigma}\right)}{1 - \Phi\!\left(\frac{\tau - x_i'\beta}{\sigma}\right)},
$$

where $\phi(\cdot)$ is the standard normal probability density function evaluated at $(y_i^* - x_i'\beta)/\sigma$ (as discussed by Long 1997:194) and τ is the general censoring threshold. Most of what follows refers to the special case where τ = 0, but this constraint can always be relaxed. With the censoring probabilities as well as the conditional density of the $y_i$ in hand, the log-likelihood function on the data (with censoring at zero) is written as

$$
\log_e[L(\beta, \sigma \mid y_i)] = \sum_{i=1}^{n_0} \log_e[\Phi(-x_i'\beta/\sigma)]
+ \sum_{i=n_0+1}^{N} \log_e[\Phi(x_i'\beta/\sigma)]
+ \sum_{i=n_0+1}^{N} \log_e\!\left[\left(\frac{1}{\Phi(x_i'\beta/\sigma)}\right)\left(\frac{1}{\sqrt{2\pi}\,\sigma}\right)\exp\!\left(-\frac{1}{2\sigma^2}(y_i - x_i'\beta)^2\right)\right],
$$


where the first summation is over all censored cases, the second and third summations are over all uncensored (observed) cases, and $\log_e[\cdot]$ is the natural logarithm. Maximizing the log-likelihood function from this equation yields the usual tobit maximum likelihood estimates of the parameters, β and σ. The validity of these estimates, however, depends crucially on the assumption that the processes generating the censoring event and the conditional density of $y_i$ are equal up to a constant of proportionality, σ. As Cragg (1971) and Lin and Schmidt (1984) have noted, this is a restrictive assumption. Given the form of the likelihood function above, however, it is an easy assumption to test. To do this, we obtain the probability distribution of the censoring variable by

$$
p(y_i > 0 \mid x_i; \beta_p) = \Phi(x_i'\beta_p)
$$

and

$$
p(y_i = 0 \mid x_i; \beta_p) = \Phi(-x_i'\beta_p),
$$

where $\beta_p$ is a set of probit regression coefficients that capture the effects of covariates on the probability that a case is above the censoring threshold of zero. We then rewrite the conditional density of the outcome variable given that a case is uncensored (with a threshold of zero) by

$$
f(y_i \mid y_i > 0, x_i; \beta_T, \sigma) = \left(\frac{1}{\Phi(x_i'\beta_T/\sigma)}\right)\left(\frac{1}{\sqrt{2\pi}\,\sigma}\right)\exp\!\left(-\frac{1}{2\sigma^2}(y_i - x_i'\beta_T)^2\right),
$$

where $\beta_T$ makes explicit the point that the parameter estimates are truncated-at-zero normal linear regression coefficients. Since the likelihood factors into separate components for the censoring event and the conditional density, we can, following Cragg (1971) and Lin and Schmidt (1984), rewrite the likelihood as

$$
\log_e[L(\beta_p, \beta_T, \sigma \mid y_i)] = \sum_{i=1}^{n_0} \log_e[\Phi(-x_i'\beta_p)]
+ \sum_{i=n_0+1}^{N} \log_e[\Phi(x_i'\beta_p)]
+ \sum_{i=n_0+1}^{N} \log_e\!\left[\left(\frac{1}{\Phi(x_i'\beta_T/\sigma)}\right)\left(\frac{1}{\sqrt{2\pi}\,\sigma}\right)\exp\!\left(-\frac{1}{2\sigma^2}(y_i - x_i'\beta_T)^2\right)\right],
$$

and, as Lin and Schmidt (1984:174) note, in the special case when $\beta_p = \beta_T/\sigma$, this specification reduces to the standard tobit likelihood function. In these kinds of situations, the more complicated Cragg (1971) specification would be unnecessary. In general, however, maximizing this likelihood function will produce consistent estimates of the population values of $\beta_p$ and $\beta_T$ regardless of whether the processes generating the censoring event and the conditional density of the outcome variable are equal or not.

Using the above framework, it is possible to formally assess the plausibility of the tobit assumption of proportional data-generating processes for the censoring event and the conditional density of $y_i$ given that $y_i > 0$. Three steps are required for this test:

1. Maximize the Cragg likelihood function subject to the constraint that $\beta_p = \beta_T/\sigma$.

2. Maximize the Cragg likelihood function allowing $\beta_p$ and $\beta_T$ to be estimated separately.

3. Obtain the test statistic by multiplying the difference between the log-likelihood values of Steps 1 and 2 by 2.0; refer this calculation to the chi-square distribution where the number of degrees of freedom is the difference between the number of parameter estimates in Steps 1 and 2. If the test statistic exceeds the critical value of the chi-square distribution, then reject the tobit model in favor of the Cragg specification and conclude that $\beta_p \neq \beta_T/\sigma$. (A sketch of this procedure in code appears below.)
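The sketch below shows one way these three steps might be carried out with generic numerical optimization. It is our own illustrative code rather than the authors' implementation: the function names and the optimizer choice are ours, Step 1 is handled by fitting the tobit likelihood directly (which is the constrained Cragg likelihood), and Step 2 exploits the fact that the unconstrained Cragg likelihood factors into a probit part and a truncated regression part.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import chi2, norm


def tobit_negloglik(params, y, X):
    """Negative tobit log-likelihood with censoring at zero.

    params = (beta, log sigma); X should include a constant column.
    """
    k = X.shape[1]
    beta, sigma = params[:k], np.exp(params[k])
    xb = X @ beta
    ll = np.where(y <= 0,
                  norm.logcdf(-xb / sigma),              # censored: P(y* <= 0)
                  norm.logpdf(y, loc=xb, scale=sigma))   # observed: normal density
    return -ll.sum()


def probit_negloglik(beta, d, X):
    """Negative probit log-likelihood for the censoring indicator d = 1(y > 0)."""
    xb = X @ beta
    return -(d * norm.logcdf(xb) + (1.0 - d) * norm.logcdf(-xb)).sum()


def truncreg_negloglik(params, y, X):
    """Negative truncated-at-zero normal regression log-likelihood (uses y > 0 only)."""
    k = X.shape[1]
    beta, sigma = params[:k], np.exp(params[k])
    xb = X @ beta
    ll = norm.logpdf(y, loc=xb, scale=sigma) - norm.logcdf(xb / sigma)
    return -ll.sum()


def lr_test_tobit_vs_cragg(y, X):
    """Steps 1-3: fit the constrained (tobit) and unconstrained (Cragg) models and test."""
    k = X.shape[1]
    d = (y > 0).astype(float)

    tobit = minimize(tobit_negloglik, np.zeros(k + 1), args=(y, X), method="BFGS")
    probit = minimize(probit_negloglik, np.zeros(k), args=(d, X), method="BFGS")
    trunc = minimize(truncreg_negloglik, np.zeros(k + 1),
                     args=(y[y > 0], X[y > 0]), method="BFGS")

    ll_tobit = -tobit.fun
    ll_cragg = -(probit.fun + trunc.fun)   # the Cragg likelihood factors into two parts
    lr = 2.0 * (ll_cragg - ll_tobit)
    df = k                                 # one restriction per element of beta_p
    return lr, df, chi2.sf(lr, df)
```

Given an outcome vector y (with zeros recorded for censored cases) and a design matrix X that includes a constant, lr_test_tobit_vs_cragg(y, X) returns the statistic, its degrees of freedom, and the p-value.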

Additional discussion of this test is provided by Greene (1997:970; see also Lin and Schmidt 1984). Within the framework of the Cragg likelihood function, the coefficients in $\beta_p$ and $\beta_T$ can be interpreted as probit and truncated normal linear regression coefficients, respectively.


Long (1997:194) showed that the expected value of the dependent variable conditional on the dependent variable not being censored at zero is given by

$$
E(y_i \mid y_i > 0, x_i; \beta, \sigma) = x_i'\beta + \sigma\left(\frac{\phi(x_i'\beta/\sigma)}{\Phi(x_i'\beta/\sigma)}\right), \qquad (1)
$$

where $\phi(\cdot)$ is the standard normal probability density function evaluated at $x_i'\beta/\sigma$. In addition, it is possible to calculate the fraction of the mean total response that is due to the response above the censoring limit of zero (see, e.g., McDonald and Moffitt 1980:319). To the extent that this number approaches zero, the tobit estimates will be most greatly informed by the censoring outcome; to the extent that this number approaches one, the tobit estimates will be most greatly informed by the variation in the outcome above the censoring limit. The calculation is given by

$$
\frac{\text{Response Variation Above Censoring Limit}}{\text{Total Response Variation}} = 1 - \frac{x_i'\beta}{\sigma} \times \frac{\phi(x_i'\beta/\sigma)}{\Phi(x_i'\beta/\sigma)} - \frac{\phi(x_i'\beta/\sigma)^2}{\Phi(x_i'\beta/\sigma)^2}. \qquad (2)
$$

Finally, in the case of the Cragg specification, the conditional expectation is

$$
E(y_i \mid y_i > 0, x_i; \beta_T, \sigma) = x_i'\beta_T + \sigma\left(\frac{\phi(x_i'\beta_T/\sigma)}{\Phi(x_i'\beta_T/\sigma)}\right), \qquad (3)
$$

where $\beta_T$ is the set of truncated-at-zero normal linear regression coefficients.
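A small sketch of equation (2), evaluated at the rounded tobit estimates reported in Table 6; to rounding, it reproduces the 26.7 percent and 39.1 percent figures cited in the case study. The function name is ours.

```python
from scipy.stats import norm

def fraction_above_limit(xb, sigma):
    """McDonald-Moffitt share of the total response due to variation above the limit."""
    z = xb / sigma
    mills = norm.pdf(z) / norm.cdf(z)          # inverse Mills ratio at z
    return 1.0 - z * mills - mills ** 2

# Tobit estimates from Table 6: intercept -2.232, race coefficient 2.764, sigma 4.38.
print(round(fraction_above_limit(-2.232 + 2.764, 4.38), 3))   # Blacks: about 0.391
print(round(fraction_above_limit(-2.232, 4.38), 3))           # Whites: about 0.267
```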

MONTE CARLO SIMULATION STUDY

The general framework for the simulation experiments is fairly simple. In all of our simulations, we generated 500 samples with 1,500 cases in each sample. As detailed below, our simulations were based on the acceptance-rejection method described by Fishman (1996:172) and Morgan (1984:95-107). In addition, two equations have to be defined for each simulation experiment. First, we define an equation that governs the censoring outcome as

$$
c_i^* = \alpha_p + \beta_p x_i + \gamma_p z_i + u_i,
$$

where $x_i$ and $z_i$ are both $N(0, 1)$ random variables and $\mathrm{cov}(x_i, z_i) = 0$. The disturbances, $u_i$, are also drawn from the $N(0, 1)$ distribution. The censoring rule is given by

$$
c_i =
\begin{cases}
1 & \text{if } c_i^* > 0 \\
0 & \text{otherwise.}
\end{cases}
$$

When $c_i = 1$, the case is treated as observed while cases with $c_i = 0$ are treated as censored. Within the subsample of individuals with $c_i = 1$, we have to set up a process to govern the variation in the partially observed dependent variable, $y_i$:

$$
w_i = \alpha_T + \beta_T x_i + \gamma_T z_i.
$$

Note the absence of a disturbance term associated with this equation. To introduce a random component to this function with known properties, we generate uniform random numbers, $U_i$, on the interval $[a, b]$, where $a$ is the lower limit of truncation (0, in this case) and $b$ is the maximum value that we would want our $y_i$ to attain. For each case, we then use both $w_i$ and $U_i$ to calculate

$$
q(w_i, U_i, \sigma) = \frac{\frac{1}{\sigma}\left(\frac{1}{\sqrt{2\pi}}\right)\exp\!\left(-\frac{1}{2}\left(\frac{U_i - w_i}{\sigma}\right)^2\right)}{1 - \Phi\!\left(-\frac{w_i}{\sigma}\right)}.
$$

Finally, for each case, we compare $q(\cdot)$ to a uniform random number on the [0, 1] interval. When the random number is less than or equal to $q(\cdot)$, then $y_i$ is set to equal $U_i$. Otherwise, the process is repeated until the [0, 1] uniform random number is less than or equal to $q(\cdot)$. We repeat this process for all of the observations that are observed, and we exclude those observations that are censored. Therefore, only those observations that are above the censoring threshold actually receive a value of $y_i$.
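For completeness, here is a sketch of this data-generating design, using the true parameter values of simulation 1 (Table 1). One substitution relative to the text: rather than the hand-coded acceptance-rejection step, the sketch draws the observed outcomes with scipy.stats.truncnorm, which samples from the same truncated normal distribution; the function name and sample layout are ours.

```python
import numpy as np
from scipy.stats import truncnorm

def simulate_sample(n, a_p, b_p, g_p, a_t, b_t, g_t, sigma, rng):
    """Generate one sample from the two-equation design described above.

    Censoring equation: c* = a_p + b_p*x + g_p*z + u, with u ~ N(0, 1); observed if c* > 0.
    Outcome equation:   y | observed ~ N(a_t + b_t*x + g_t*z, sigma^2) truncated at zero.
    Censored cases are recorded as y = 0.
    """
    x = rng.standard_normal(n)
    z = rng.standard_normal(n)
    c_star = a_p + b_p * x + g_p * z + rng.standard_normal(n)
    observed = c_star > 0

    w = a_t + b_t * x + g_t * z               # conditional mean for observed cases
    y = np.zeros(n)
    lower = -w[observed] / sigma              # standardized truncation point (zero)
    y[observed] = truncnorm.rvs(lower, np.inf, loc=w[observed], scale=sigma,
                                size=lower.size, random_state=rng)
    return y, np.column_stack([np.ones(n), x, z])

# One of the 500 samples for simulation 1: proportional processes, 1,500 cases.
rng = np.random.default_rng(2003)
y, X = simulate_sample(1500, 0.0, 0.2, -0.2, 0.0, 0.5, -0.5, 2.5, rng)
```

Each such sample can then be passed to the likelihood ratio test sketched earlier in the appendix, and the rejection rate tallied across the 500 replications.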


REFERENCES

Albonetti, Celesta. 1997. "Sentencing Under the Federal Sentencing Guidelines: Effects of Defendant Characteristics, Guilty Pleas, and Departures on Sentence Outcomes for Drug Offenses, 1991-1992." Law and Society Review 31:789-822.
Amemiya, Takeshi. 1984. "Tobit Models: A Survey." Journal of Econometrics 24:3-61.
Blumstein, Alfred and Jacqueline Cohen. 1987. "Characterizing Criminal Careers." Science 237:985-91.
Blumstein, Alfred, Jacqueline Cohen, Jeff Roth, and Christy Visher. 1986. Criminal Careers and "Career Criminals." Washington, DC: National Academy Press.
Cragg, John G. 1971. "Some Statistical Models for Limited Dependent Variables With Application to the Demand for Durable Goods." Econometrica 39:829-44.
Fair, Ray C. 1978. "A Theory of Extramarital Affairs." Journal of Political Economy 86:45-61.
Fishman, George S. 1996. Monte Carlo: Concepts, Algorithms, and Applications. New York: Springer-Verlag.
Goldstein, J. P. 1986. "The Effect of Motorcycle Helmet Use on the Probability of Fatality and the Severity of Head and Neck Injuries." Evaluation Review 10:355-75.
Gottfredson, Michael R. and Travis Hirschi. 1990. A General Theory of Crime. Stanford, CA: Stanford University Press.
Greene, William H. 1997. Econometric Analysis. 3d ed. Englewood Cliffs, NJ: Prentice Hall.
Johnson, Norman and Samuel Kotz. 1970. Univariate Continuous Distributions. New York: Wiley.
Keane, Carl, P. S. Maxim, and J. J. Teevan. 1993. "Drinking and Driving, Self-Control, and Gender." Journal of Research in Crime and Delinquency 30:30-46.
Klepper, W. and Daniel S. Nagin. 1989. "Tax Compliance and Perceptions of the Risk of Detection and Criminal Prosecution." Law and Society Review 23:209-40.
Lin, Tsai-Fen and Peter Schmidt. 1984. "A Test of the Tobit Specification Against an Alternative Suggested by Cragg." Review of Economics and Statistics 66:174-77.
Little, Roderick J. A. and Donald B. Rubin. 1987. Statistical Analysis With Missing Data. New York: Wiley.
Long, J. Scott. 1997. Regression Models for Categorical and Limited Dependent Variables. Thousand Oaks, CA: Sage.
Maddala, G. S. 1983. Limited-Dependent and Qualitative Variables in Econometrics. New York: Cambridge University Press.
McDonald, John F. and Robert A. Moffitt. 1980. "The Uses of Tobit Analysis." Review of Economics and Statistics 62:318-21.
Morgan, Byron J. T. 1984. Elements of Simulation. London: Chapman and Hall.
Nagin, Daniel S. and Douglas A. Smith. 1990. "Participation in and Frequency of Delinquent Behavior: A Test for Structural Differences." Journal of Quantitative Criminology 6:335-56.
Nagin, Daniel S. and Richard E. Tremblay. 1999. "Trajectories of Boys' Physical Aggression, Opposition, and Hyperactivity on the Path to Physically Violent and Nonviolent Juvenile Delinquency." Child Development 70:1181-96.
Paternoster, Raymond and Ruth Triplett. 1988. "Disaggregating Self-Reported Delinquency and Its Implications for Theory." Criminology 26:591-620.
Reece, William S. 1979. "Charitable Contributions: The New Evidence on Household Behavior." American Economic Review 69:142-51.
Rhodes, William T. 1986. "A Survival Model With Dependent Competing Events and Right-Hand Censoring: Probation and Parole as an Illustration." Journal of Quantitative Criminology 2:113-37.


Rhodes, William T. 1991. "Federal Criminal Sentencing: Some Measurement Issues With Application to Pre-Guideline Sentencing Disparity." Journal of Criminal Law and Criminology 81:1002-33.
Schmidt, Peter and Ann D. Witte. 1984. An Economic Analysis of Crime and Justice. Orlando, FL: Academic Press.
Sellin, Thorsten and Marvin D. Wolfgang. 1964. The Measurement of Delinquency. New York: Wiley.
Tobin, James. 1958. "Estimation of Relationships for Limited Dependent Variables." Econometrica 26:24-36.
Witte, Ann D. 1980. "Estimating the Economic Model of Crime With Individual Data." Quarterly Journal of Economics 94:57-84.
Wolfgang, Marvin D., Robert Figlio, and Thorsten Sellin. 1972. Delinquency in a Birth Cohort. Chicago: University of Chicago Press.

Douglas A. Smith is a professor in the Department of Criminology and Criminal Justice at the University of Maryland at College Park. His previous research, published in a variety of outlets including American Sociological Review and Criminology, has focused on the application of statistical methods for the analysis of categorical, censored, and limited outcome variables.

Robert Brame is an assistant professor in the College of Criminal Justice at the University of South Carolina. His current research interests relate to the study of counted outcome variables, missing data, and recidivism of criminal offenders. Recent publications have appeared in the Journal of Quantitative Criminology and Sociological Methods & Research.
