


Evaluating “Innovative” Programs for Homeless Persons: Case Study of an Unsuccessful Proposal

MICHAEL HENNESSY and CHRISTINE GRELLA

In January of 1990, the Department of Health and Human Services released a Request for Application (RFA) soliciting applications for research proposals that would implement and evaluate up to 18 “innovative” programs for homeless persons who had alcohol and/or other drug problems. This paper is the story of an unsuccessful research proposal that was submitted in response to this RFA. We begin with a brief discussion of the treatment theory of the County of Los Angeles Office of Alcohol Programs (OAP), the local administrative agency through which the research would have been administered and implemented. The paper then describes the content of the RFA and the elements of the evaluation research design and closes with a discussion of a number of issues including the effects of pre-determining outcome measures, the problems of appropriately evaluating theoretically based programs, and the contradictions between the legitimate needs for scientific generality and the necessity for adherence to program specific theories of service delivery.

SOCIAL MODEL RECOVERY PROGRAMS

The homeless with alcohol problems constitute a significant portion of the recipients of services funded by the Office of Alcohol Programs (OAP) in Los Angeles County.¹ As standing policy the OAP only funds programs that operate using a “Social Model” approach. Social Model recovery programs have their origin in the use of non-medical settings for detoxification which began in the 1960s, in contrast to the traditional medical detox program, which relies on inpatient treatment and medication (Sadd and Young, 1987; O’Briant and Lennard, 1973).

The Social Model emphasizes the relational and interpersonal nature of alcohol and other drug problems such that the client’s relationship with the surrounding environment is the locus for personal change (Wright and Manov, 1989; Clay et al., 1989; Wright, Mora and Hughes, 1990). This “relational” view of the etiology of alcohol problems assumes that clients recovering from alcohol or other drug problems can be of significant help to each other in maintaining sobriety and locates the authority of the program in the community defined by treatment status rather than the one defined by professional training. The responsibility for recovery thus extends beyond the stage of primary recovery to include the provision of a supportive environment after both detox and treatment.

As a major programmatic component of the Social Model programs, Alcohol Free Living Centers (AFLCs) were developed by the OAP to address the need for long term and low cost supportive sober housing (Manov and Beshai, 1986). The primary purpose of the AFLC is the creation of a positive environment that fosters peer-group expectations and norms that define, promote, and maintain recovery. Thus, AFLC house rules are oriented toward establishing a sober environment and promoting recovery, and personnel to maintain an AFLC are minimal: typically, AFLCs have few paid professional staff and no associated formal recovery program (Korenbaum and Burney, 1987).² As an emergent alternative to standard practice, the Social Model differs from medical and case management (Rubin, 1987) regimes in several ways (Mameus, 1988), as shown in the table below.

Practice Domain                     Medical Model       Case Management       Social Model
Treatment Orientation               Clinical            Referral              Self-help
Level of Institutionalization       High                Moderate              Low
Therapeutic Unit                    Clinician-Patient   Case Manager-Client   Peer Group-Resident
Type of Service Provided            Treatment           Linkage               Recovery
Post Primary Residential Recovery   None                Half-way House        AFLC
Basis of Authority                  Clinical Training   Resource Network      Experiential
                                    Knowledge           Knowledge             Knowledge
Cost Per Participant                High                High to Moderate      Low

THE REQUEST FOR APPLICATION (RFA)

The RFA stipulated that all research activities funded were to be handled under regulations concerning “cooperative agreements” between the funding agency and the agencies or institutions receiving the funds. The purposes of the cooperative agreement were to

• coordinate the collection, compilation, aggregation, and analysis of a core data set obtained through the use of a battery of instruments common across sites and thereby enable the preparation of reports that are national in scope, and

• convene meetings of a working group comprised of the Principal Investigator from each project and Federal staff for the purpose of facilitating evaluation research activities and discussing programmatic issues (Department of Health and Human Services, 1990, p. 5).

In practice, most of the major parameters of the research were pre-determined by RFA stipulations. For example, while true experiments were preferred and quasi-experimental designs were acceptable, all designs had to have a “comparison group” and include both process and outcome evaluation activities. There was also a specified time schedule for collection of quantitative process data on participants, including service delivery and client characteristic information, and requirements concerning the delivery of the data on a quarterly or semi-annual basis to Washington in a standardized format (Department of Health and Human Services, 1990, pp. 8-9).

The outcome evaluation was similarly pre-specified through the use of a core of mandated data collection instruments that were to be administered at least three times during the course of the project: at entry into treatment, at exit from treatment, and six months after exit from treatment. The majority of the mandated instruments were clinical in nature. Only one, developed by the New York State Psychiatric Institute, was designed to assess the particular problems of the homeless. These instruments comprised the set of mandated outcome data that had to be collected and delivered to Washington on a twice yearly basis. Of course, researchers were encouraged to use additional instruments appropriate to their particular population or agency programs as well (Department of Health and Human Services, 1990, pp. 10-11).
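As a reading aid only, the mandated schedule can be restated as a small data structure. This sketch is ours, in Python; the wave labels and layout are invented, not the RFA’s:

    # Hypothetical restatement of the RFA's mandated measurement schedule.
    # Labels are illustrative; the RFA specifies timing, not these names.
    MANDATED_WAVES = {
        "treatment_entry": "core instruments administered at entry into treatment",
        "treatment_exit": "core instruments administered at exit from treatment",
        "six_month_followup": "core instruments administered six months after exit",
    }

    DATA_DELIVERY = {
        "process_data": "quarterly or semi-annual delivery to Washington",
        "outcome_data": "twice yearly delivery, standardized format",
    }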

THE PROCESS OF CLIENT TREATMENT IN SOCIAL MODEL PROGRAMS

In preparing the proposal we worked with the OAP to identify existing Social Model programs that included fully functional AFLCs, assuming that operating programs would stand a better chance of funding in comparison to programs that were in the planning stages. Four agencies were recruited: a program for male recovering alcoholics, one that served Latino and Latina populations with drug and alcohol problems, one that served gays and lesbians with drug and alcohol problems, and one that served females only (straight or lesbian).

While each of the participating agencies had a different mix of resources and services, they all delivered treatment using a two stage service package. The process of client flow in a typical treatment program begins at initial referral to the program and an intake and assessment phase to screen clients (which might result in a possible referral to some alternative programs), followed by treatment at each program’s Primary Recovery Unit (PRU), a short term facility for detox and recovery. Upon PRU exit, the participant could return to the community or progress to long term residence in an alcohol and drug free AFLC, and ultimate program exit from the AFLC. Diversion from the complete Entry-Screening-PRU-AFLC sequence takes three forms: alternative referral, self selection to other programs or services, or dropping out of the treatment program at some point prior to exit from the AFLC.
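The sequence just described can be pictured as a simple state model. The Python sketch below is ours, purely to make the flow explicit; stage names follow the text, and the three diversion paths are collapsed into a single early exit:

    from enum import Enum, auto

    class Stage(Enum):
        """Stages of the Entry-Screening-PRU-AFLC sequence described above."""
        ENTRY = auto()      # initial referral to the program
        SCREENING = auto()  # intake and assessment (may refer out)
        PRU = auto()        # Primary Recovery Unit: short term detox and recovery
        AFLC = auto()       # long term alcohol and drug free residence
        EXIT = auto()       # return to community or ultimate program exit

    # Forward transitions; diversion (alternative referral, self selection,
    # or dropout) can short-circuit to EXIT from any stage.
    FORWARD = {
        Stage.ENTRY: Stage.SCREENING,
        Stage.SCREENING: Stage.PRU,
        Stage.PRU: Stage.AFLC,  # a client may instead return to the community
        Stage.AFLC: Stage.EXIT,
    }

    def next_stage(current: Stage, diverted: bool = False) -> Stage:
        """Advance one step through the sequence, or exit early on diversion."""
        return Stage.EXIT if diverted else FORWARD.get(current, Stage.EXIT)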


THE EVALUATION DESIGN

Given the process of treatment described above and the interest in Social Model program delivery, we proposed evaluation practices that allowed for both the “scientific rigor” of experimental approaches as well as for methods to investigate the dynamics of the program environment (Roberts-Gray and Scheirer, 1988; Moos, 1988; Clark, 1988). The proposed design used random assignment to AFLCs or alternative programs at the point of PRU exit. This randomization after PRU exit (as opposed to a more traditional randomization prior to program entrance) was a reflection of the chaotic nature of the referral process. Potential clients enter from a very broad range of institutional and non-institutional sources such as hospitals, police, program graduates, and self referrals, which prevented any consideration of randomization into participating agencies or other programs prior to PRU treatment and also presented problems in following up “comparison group” clients referred out of programs that were not part of the research.³
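A minimal sketch of the proposed assignment point, assuming a simple fifty-fifty allocation; the function, arm labels, and seed are ours for illustration, since the proposal did not prescribe a particular mechanism:

    import random

    def assign_at_pru_exit(client_id: str, rng: random.Random) -> str:
        """Randomize a PRU completer to an AFLC or an alternative program.

        Assigning at PRU exit, rather than at program entry, keeps the
        eligible pool well defined despite the chaotic referral process.
        """
        return "AFLC" if rng.random() < 0.5 else "alternative"

    rng = random.Random(1990)  # fixed seed so the assignment log is reproducible
    log = {cid: assign_at_pru_exit(cid, rng) for cid in ("c001", "c002", "c003")}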

In addition to the experimental analysis of AFLC membership, and because we were uncertain about our ability to actually implement random assignment to long term residences, we also proposed a quasi-experimental analysis as part of the data collection and analysis plan. This consisted of the collection of “baseline” data on all participants in the PRUs and AFLCs of the participating agencies during the start up period before the new services were implemented. The client population survey was therefore a census of “untreated” participants at the beginning of the study; these participants acted as their own control group as their progress through the programmatic elements (or program drop out) was monitored, and as a comparison for others entering and exiting later who received expanded services as part of the research funding.
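In data terms, this quasi-experimental fallback reduces to a cohort comparison keyed to enrollment period. A minimal sketch, with invented records and a single illustrative outcome variable:

    from statistics import mean

    # Illustrative records only: (cohort, outcome). "baseline" clients were
    # censused during start up, before expanded services; "expanded" clients
    # entered after the new services were implemented.
    records = [
        ("baseline", 0.42), ("baseline", 0.35), ("baseline", 0.50),
        ("expanded", 0.61), ("expanded", 0.58), ("expanded", 0.49),
    ]

    def cohort_mean(cohort: str) -> float:
        return mean(y for c, y in records if c == cohort)

    # The pre/post cohort contrast stands in for a randomized comparison.
    effect = cohort_mean("expanded") - cohort_mean("baseline")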

As part of the process evaluation, we proposed (1) a qualitative ethnographic study of the process used to screen, assess, and select program participants at each stage of treatment (program entry, PRU, and AFLC) and (2) an examination of the internal processes within AFLCs. Since not all needy persons can be served, particular referrals may present problems that are especially appropriate or inappropriate for an agency,⁴ or a particular referral may represent too great a strain on project resources even if the person is formally eligible (Hennessy, 1987). Program administrators invariably develop informal and formal “guidelines” that are used for screening and selection as well as treatment planning purposes (Scott, 1967; Hasenfeld, 1983; Austin, 1981). Unfortunately, each stage of this administrative process represents non-random selection and acts as a confounding influence on the quantitative outcome analysis if not adequately represented in the analysis (Berk, 1983; Rhodes, 1985).
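To make the confounding concern concrete, the following small simulation (ours, not the proposal’s) shows how screening that favors clients with better prognoses inflates a naive treated-versus-untreated contrast even when the true program effect is zero:

    import random

    rng = random.Random(0)
    TRUE_EFFECT = 0.0  # the program does nothing in this simulation

    treated, untreated = [], []
    for _ in range(10_000):
        prognosis = rng.gauss(0, 1)                   # unobserved by the analyst
        admitted = prognosis + rng.gauss(0, 1) > 0.5  # screening favors good prognoses
        outcome = prognosis + (TRUE_EFFECT if admitted else 0.0) + rng.gauss(0, 1)
        (treated if admitted else untreated).append(outcome)

    # Well above zero despite TRUE_EFFECT == 0: the selection process,
    # not the program, produces the observed difference.
    naive_estimate = sum(treated) / len(treated) - sum(untreated) / len(untreated)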

The qualitative study of AFLCs was included because they contrast with the clinical and case management approach to treatment, require the most time of the entire treatment process (up to one year), and theoretically do not require intensive, professional, and expensive support infrastructures for their maintenance. However, few studies of operating AFLCs have been made. Fortunately, the theoretical perspective of the Social Model underlying AFLCs had been clearly explicated (e.g., Borkman, 1983), so qualitative study of the dynamics of the operation of AFLCs could proceed in a directed way.

Finally, while we did agree to use the mandated instruments (as well as three other instruments designed specifically for Social Model settings) at the three data collection points required in the RFA, we stated that we would pre-test all the instruments during the start up phase to identify problems with the application of the primarily clinical instruments to clients in Social Model programs, which are actively non-clinical in focus and orientation. If these pre-tests⁵ demonstrated serious problems with some items, we proposed that they be altered through a process of collaboration with agency staff.

THE SITE VISIT

After the proposal was submitted, we were notified in the Summer of 1990 that we had made the “first cut” and a site visit team would spend one and one-half days in Los Angeles to visit each of the participating agencies and meet with the proposed evaluation team. On the first day the site team visited each treatment agency, toured the existing AFLCs, and talked with participants and program staff. All agencies appeared prepared and all were obviously well established and already implementing concrete and substantive programs.

Unfortunately, not all elements of the evaluation research plan were received so favorably. In further discussion with the agencies after the proposal was submitted, we discovered that, for most of them, no feasible randomization process from the PRU to non-AFLC housing alternatives was likely. This was partially due to the nature of the targeted populations served by three of the four agencies; there simply were no comparable long term residence programs operating apart from the one already selected for project participation, and ethical (as well as human subject) concerns prevented us from randomly assigning clients “back to the streets” versus long term AFLC residence.

Faced with the possible failure of our ability to randomize, we felt that the quasi-experimental comparisons we had built into the design were adequate given the selection process of all agencies that prevented identifying “comparable” clients in other programs as possible comparison group candidates. However, the site visit team did not agree, feeling that a design with clients acting as their own controls did not define a sufficient quasi-experiment, and our rejoinders concerning the severe selection biases resulting from some alternative approach using other populations of homeless persons were dismissed.

The other serious point of disagreement centered around the needs assessment we felt was necessary during the start up period to help program personnel design the new elements of their services (which focused on vocational training components at all four participating agencies). While we considered this small data collection effort during the first nine months a prudent activity and one that would help to demonstrate the utility of the research component in designing new service components, the site team saw little need for such formative evaluation work.


THE FINAL RESULT

After all the site visits were completed, the final decision not to fund the project was made in September. The written comments of the review committee, while commenting favorably on the use of a non-clinical model of recovery, the targeting of special populations, the proposed instrumentation and baseline measurement schedule, the process evaluation, and the qualitative and quantitative analysis plans, centered on four points which “reduced enthusiasm” for the project.

First, they noted that the characteristics of the control sites were not well described. In this regard, of course, their hesitancy was justified since we had subsequently discovered (and as the site visit had made perfectly clear) that there were no candidate programs for the control sites. Next, the committee was concerned about the sample sizes at each of the sites and the possible problems with pooling respondents from different agencies and programs to attain sufficient sample sizes for the detection of any reasonable effect. The committee believed that the delivered treatments were not homogeneous enough (not “sufficiently standardized” in their language) to justify pooling the respondents across projects.
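The committee’s power concern can be made concrete with the standard two-sample approximation, n per group ≈ 2(z(1 − α/2) + z(power))² / d² for a standardized effect size d. A minimal sketch of the arithmetic (ours, not the committee’s):

    from statistics import NormalDist

    def n_per_group(d: float, alpha: float = 0.05, power: float = 0.80) -> float:
        """Approximate per-group n to detect a standardized mean difference d."""
        z = NormalDist().inv_cdf
        return 2 * (z(1 - alpha / 2) + z(power)) ** 2 / d ** 2

    # A medium effect (d = 0.5) needs about 63 clients per arm; a small one
    # (d = 0.2) about 392, which is why pooling across the four agencies was
    # attractive despite their heterogeneous treatments.
    print(round(n_per_group(0.5)), round(n_per_group(0.2)))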

The review committee also objected to the fact that all of the agencies were “largely, although not entirely, already in existence, and a program model (the social model) that is likewise already operating at the intervention sites.” Proposed program changes and outreach activities were considered by the committee as “more on the order of rounding out existing services” rather than representing innovative treatment activities. Finally, the committee objected to any possibility that the mandated instruments should be changed “if they do not fit with social model treatment. This may compromise efforts to create a national data set with standard measures. Further a decision has been made to omit biological measures because they are seen as antithetical to the values of a social program.”

With the exception of the committee’s first point, which we had tried to anticipate by including two types of quasi-experimental analyses in the original proposal, we were surprised by their objections. Each of the three final problems that the committee identified (agency heterogeneity, the existence of robust operating programs, and a consideration of the appropriateness of the mandated outcome instruments, including their pre-test during the start up period) was, from our point of view, a strength of the original design, built into the proposal from the start.

Indeed, all three procedures are recommended evaluation practice when new programs are being assessed. First, site heterogeneity is usually seen as a virtue when there is uncertainty as to the efficacy of programs as well as limited knowledge of their actual operating principles (Hennessy and Hennessy, 1990); the lack of knowledge about the content and mechanics of Social Model programs motivated a large component of the qualitative study we wanted to conduct as part of the evaluation design. Next, the selection of operating programs, as opposed to those existing “on paper” only, is a luxury that few evaluators enjoy, and the committee’s implicit definition of “new programs” based on chronological age (as opposed to a definition using the amount of scientific scrutiny already expended as the index of “newness”) was unanticipated. Finally, the selection and testing of outcome measures appropriate for the content and theory of particular programs is consistent with the current interest in theoretically driven evaluations (Chen and Rossi, 1983; Lipsey, 1988; Light et al., 1990; Mark, 1990).⁶

DISCUSSION

This unsuccessful proposal reflects a number of tensions inherent in the practice of evaluation research. First, there is the usual aspiration on the part of the funding agencies to attain as much comparable information as possible about as many programs as feasible, a legitimate attempt to maximize external validity and to increase the efficiency of the total research budget. But the quest for maximum coverage of programs and minimum variability in measurement comes at a cost, especially when promising programs are not considered due to their “unstandardized” service delivery systems or their “idiosyncratic” theoretical basis (e.g., House, 1990).

Thus this experience highlights one conflict between the need to collect “comparable” data and the development of social service programs that are motivated by treatment or service provision theories that do not match the data collection focus of standardized instruments and may be inconsistent in theoretical bases among themselves. Of course, as long as no “theoretically based” programs existed, the call for evaluations based on program theoretic principles could be both favorably noted by theorists of evaluation (Shadish and Epstein, 1987) and safely ignored by practitioners. But the development of alternative theories of alcohol and drug abuse treatment delivery such as the Social Model makes possible empirical examinations of both program theory as well as the “ideological purity” of program implementation in specific agencies. Unfortunately, such research could also demonstrate that conventional outcome measurements might be inappropriate for use in assessing the long term effects of particular programs.

Note however that the desire for standardized instrumentation on the part of the funding agency was not the only restriction placed on the range of evaluation measurements. The Social Model itself constrained some aspects of “ideal” data collection, such as blood or breath tests for alcohol consumption as validity checks for self reports. Neither of these measures was proposed for the project since they were not consistent with the philosophy of the Social Model and actually would not have been permitted by OAP had we wanted to use them. Thus, just as the potential of theoretically driven evaluations could be achieved, the imposition of standardized measurements (motivated through a desire for scientific comparability) or limits on particular measures (motivated through a desire for theoretical purity) may prevent their implementation.⁷

Another issue implicitly raised is the necessity to collect comparable data in the first place, a requirement that is motivated through a desire to achieve high levels of external validity. Although it appears to be a cardinal rule of conventional evaluation practice, we question both the necessity for and the practicality of actually collecting standardized outcome measures across a broad range of programs, even if the pool of potential programs is restricted to those assumed to be consistent with approved measurement technologies. Overlooking the logistics of collecting data that are truly comparable, including the validity studies that would be necessary to demonstrate or define the dimensions of “comparability” for program process, program implementation, and program outcome measures (e.g., Abrahams et al., 1988), it is not clear in this instance how the goal of high external validity is achieved. The “core data set” to be constructed by DHHS will not approach anything like a representative (or even quota) sample of clinically oriented treatment programs for the homeless, if for no other reason than the small sample size. Thus, it is hard to see how reports that would be “national in scope”⁸ could be produced from any aggregation of research data collected on participants in these projects. In fact, under conditions of great uncertainty, limited resources, and small samples, theory based quota sampling of treatment programs may be a superior alternative to any conventional random sampling strategy (Louis, 1982, p. 17).

For evaluation researchers, most aspects of this debate are summarized in discussions of the relative benefits and drawbacks of evaluations designed using multi-site (Louis, 1982) and mega-buck “blockbuster” (Weiss, 1984, p. 265) approaches that emphasize standardization in data collection and research design to maximize external validity⁹ versus small, “local,” or “tailored” evaluations that de-emphasize external validity, have a more formative or program design and improvement focus (Cronbach, 1982), and use “non-standardized” or program specific outcome measures to assess long term impact.

But the discussion of blockbuster versus tailored evaluations transcends the practice of evaluation research; it encompasses two differing views of the capability of the scientific method and how social science in particular can be applied to the amelioration of social problems. For example, the blockbuster ideology assumes that specific research designs and methods can be mandated, appropriate outcome measures can be identified, and distinctions between acceptable “discretionary methodological decisions” (Berk, 1977) and procedures constituting “malpractice” (Lipsey, 1988) can be drawn prior to knowledge of program theory or purpose and, indeed, even before the particular evaluation object is identified. In contrast, the local or tailored approach makes few assumptions about appropriate methods (leaving this up to the skill of the evaluation practitioner), searches for outcome measures that reflect the reality of program theory, purpose, or actual functioning (no matter how idiosyncratic or “unstandardized”), and permits the evaluator great flexibility in responding to programmatic fluctuations and alterations.

In our opinion, selecting the first strategy over the second represents a misunderstanding of the application of the “scientific method” because it confuses the maintenance of scientific “objectivity” (which is important) with a distancing or abstraction of the scientific instrumentation used from the object of study. But objectivity is not the same as abstraction, and to confuse the two substitutes ritual (or in the most extreme cases, magic) for the activities of scientific inquiry (McCloskey, 1985).


More importantly, from a purely practical perspective, choosing the blockbuster approach ensures that truly innovative programs will never be effectively evaluated.

NOTES

1. During fiscal year 1987-1988, records indicate that there were 9,087 homeless participants in County funded alcohol related service programs, constituting 46.3% of all admissions. Some recent research on the homeless in Los Angeles are the reports of Farr et al. (1988), Koegel et al. (1988), and Koegel and Burnam (1987).

2. In fact, such programs would be considered contradictory to the principles of the Social Model by service providers. In addition, such programs instituted within an AFLC would require state licensing as a “treatment facility,” a status that the proponents of the Social Model would not desire.

3. We also could not be assured that informed consent to participate in the project could be attained at the PRU entrance stage since some referrals were still intoxicated upon arriving at the agencies.

4. For example, persons on psychotropic medication are not eligible for Social Model recovery programs.

5. Since none of our proposed analytic methods received any negative comment, they are not discussed here.

6. It is difficult to imagine that any standard grant proposal that proposed to utilize instruments designed for different client populations, did no pre-tests, and categorically refused to alter the measurement protocols under any circumstances would ever receive funding.

7. In this respect our experience recalls other similar controversies over the match between program theory and the identification of “appropriate” evaluation measures, such as the Follow Through evaluation conducted by Abt Associates in the 1970s.

8. Although one of the “non-scientific” evaluation criteria used by the funding agency is geographical location (Department of Health and Human Services, 1990, p. 19), we doubt that this defines their operationalization of “national in scope.”

9. Although Louis (1982, p. 13) claims that multi-site studies emphasize the collection of less standardized data, that is certainly not the case here.

REFERENCES

Abrahams, R., Greenberg, J., Gruenberg, L., and Lamb, S. (1988). Reliable assessment data in multisite programs: The social/HMO example. Evaluation Review, 12, 153-169.

Austin, C. (1981). Client assessment in context. Social Work Research and Abstracts, 17, 4-12.

Berk, R. (1977). Discretionary methodological decisions in applied research. Sociological Methods and Research, 5, 317-334.

Berk, R. (1983). An introduction to sample selection bias in sociological data. American Sociological Review, 48, 386-398.

Borkman, T. (1983). A Social-Experiential Model in Programs for Alcoholism Recovery: A Research Report on a New Treatment Design. Rockville, MD: U.S. Department of Health and Human Services, National Institute on Alcohol Abuse and Alcoholism. DHHS Publication No. 83-1259.

Chen, H. and Rossi, P. (1983). Evaluating with sense: The theory-driven approach. Evaluation Review, 7, 283-302.

Clark, T. (1988). Documentation as evaluation: Capturing context, process, obstacles, and success. Evaluation Practice, 9, 21-31.

Cronbach, L. (1982). Designing Evaluations of Educational and Social Programs. San Francisco: Jossey-Bass.

Department of Health and Human Services. (1990). Cooperative Agreements for Research Demonstration Projects on Alcohol and Other Drug Abuse Treatment for Homeless Persons. RFA AA-90-01. Washington, DC: Public Health Service.

Department of Housing and Urban Development. (1984). A Report to the Secretary on the Homeless and Emergency Shelters. Washington, DC: Office of Policy Development and Research.

Farr, R., Koegel, P., and Burnam, A. (1986). A Study of Homelessness and Mental Illness in the Skid Row Area of Los Angeles. Los Angeles: County Department of Mental Health.

Hasenfeld, Y. (1983). Human Service Organizations. Englewood Cliffs, NJ: Prentice-Hall.

Hennessy, C. (1987). Risks and resources: Service allocation decisions in a consolidated model of long term care. Journal of Applied Gerontology, 6, 139-155.

Hennessy, C. and Hennessy, M. (1990). Community-based long-term care for the elderly: Evaluation practice reconsidered. Medical Care Review, 47, 221-259.

House, E. (1990). Methodology and justice. In: K. Sirotnik (Ed.), Evaluation and Social Justice: Issues in Public Education, pp. 23-36. New Directions in Program Evaluation #45. San Francisco: Jossey-Bass.

Koegel, P. and Burnam, M. (1987). The Epidemiology of Alcohol Abuse and Dependence Among Homeless Individuals: Findings from the Inner-City of Los Angeles. Rockville, MD: National Institute on Alcohol Abuse and Alcoholism.

Koegel, P., Burnam, M., and Farr, R. (1988). The prevalence of specific psychiatric disorders among homeless individuals in the inner city of Los Angeles. Archives of General Psychiatry, 45, 1085-1092.

Korenbaum, S. and Burney, G. (1987). Program planning for alcohol-free living centers. Alcohol Health and Research World, 11, 68-73.

Light, R., Singer, J., and Willett, J. (1990). By Design. Cambridge: Harvard University Press.

Lipsey, M. (1988). Practice and malpractice in evaluation research. Evaluation Practice, 9, 5-24.

Louis, K. (1982). Multisite/multimethod studies. American Behavioral Scientist, 26, 6-22.

Mameus, O. (1988). Organization and management of the social model recovery program. Paper presented at the University of California, San Diego conference on The Future of California’s Social Model, April 8-10.

Mark, M. (1990). From program theory to tests of program theory. In: L. Bickman (Ed.), Advances in Program Theory, pp. 37-52. San Francisco: Jossey-Bass.

McCloskey, D. (1985). The Rhetoric of Economics. Madison: University of Wisconsin Press.

Moos, R. (1988). Assessing the program environment: Implications for program evaluation and design. In: K. Conrad and C. Roberts-Gray (Eds.), Evaluating Program Environments, pp. 7-24. San Francisco: Jossey-Bass.

O’Briant, R. and Lennard, H. (1973). Recovery from Alcoholism: A Social Treatment Model. Chicago: Thomas.

Rhodes, W. (1985). The adequacy of statistically derived prediction instruments in the face of sample selectivity. Evaluation Review, 9, 369-382.

Roberts-Gray, C. and Scheirer, A. (1988). Checking the congruence between a program and its organizational environment. In: K. Conrad and C. Roberts-Gray (Eds.), Evaluating Program Environments, pp. 63-82. San Francisco: Jossey-Bass.

Rubin, A. (1987). Case management. Encyclopedia of Social Work, 1, 59-72. Silver Spring, MD: National Association of Social Workers.

Sadd, S. and Young, D. (1987). Nonmedical treatment of indigent alcoholics: A review of recent research findings. Alcohol Health and Research World, 11, 48-49.

Scott, R. (1967). Selection of clients by social welfare agencies: The case of the blind. Social Problems, 14, 248-257.

Shadish, W. and Epstein, R. (1987). Patterns of program evaluation practice among members of the Evaluation Research Society and Evaluation Network. Evaluation Review, 11, 555-590.

Weiss, C. (1984). Toward the future of stakeholder approaches in evaluation. In: R. Conner, D. Altman, and C. Jackson (Eds.), Evaluation Studies Review Annual, pp. 255-268. Beverly Hills, CA: Sage.

Wright, A. and Manov, B. (1989). The NIAAA initiatives: Strategies for homeless populations: Los Angeles. In: Homelessness, Alcohol, and Other Drugs: Proceedings from a Conference Held in San Diego, CA, February 2-4, 1989, pp. 34-37. Rockville, MD: U.S. Department of Health and Human Services, Public Health Service, Alcohol, Drug Abuse, and Mental Health Administration, National Institute on Alcohol Abuse and Alcoholism.

Wright, A., Mora, J., and Hughes, L. (1990). The sober transitional housing and employment project: Strategies for long term sobriety, employment, and housing. Alcoholism Treatment Quarterly, 7, 47-56.