

JOURNAL OF THE EXPERIMENTAL ANALYSIS OF BEHAVIOR    1984, 42, 421-434    NUMBER 3 (NOVEMBER)

QUANTITATIVE ANALYSIS

JOHN A. NEVIN

UNIVERSITY OF NEW HAMPSHIRE

Quantitative analysis permits the isolation of invariant relations in the study of behavior. The parameters of these relations can serve as higher-order dependent variables in more extensive analyses. These points are illustrated by reference to quantitative descriptions of performance maintained by concurrent schedules, multiple schedules, and signal-detection procedures. Such quantitative descriptions of empirical data may be derived from mathematical theories, which in turn can lead to novel empirical analyses so long as their terms refer to behavioral and environmental events. Thus, quantitative analysis is an integral aspect of the experimental analysis of behavior.

Key words: concurrent schedules, multiple schedules, signal detection, generalized matching, sensitivity, bias, invariance, mathematical theory

The first quantitative, parametric analysis of performance on concurrent variable-interval schedules (VI VI) was published by Findley in 1958. The study is most often cited for its procedure: the use of an explicit changeover response to alternate between schedules. Findley varied one or both schedules, and reported the proportions of time spent by his pigeons working on one or the other of the alternatives, in relation to the scheduled interval between reinforcers on each alternative (see his Figures 3 and 4). When these data are reexpressed as ratios of times spent working on the two alternatives and are plotted in relation to the ratios of scheduled reinforcement rates on double-logarithmic axes, the linear functions portrayed in Figure 1 result.

This paper is dedicated to Don Hake for his many contributions to the experimental analysis of behavior - past, present, and future. Portions of this article were presented at the meeting of the Association for Behavior Analysis, Milwaukee, Wisconsin, May 1982, and at the Conference on Future Directions in the Experimental Analysis of Behavior, Morgantown, West Virginia, April 1983. Its preparation was supported in part by National Science Foundation Grant BNS 79-24082. I am indebted to many of the participants in the Conference for comments that influenced its contents, and to W. M. Baum for discussion. Reprints may be obtained from John A. Nevin, Department of Psychology, University of New Hampshire, Durham, New Hampshire 03824.

Since 1958, many studies of concurrent performances have reported data in the form of Figure 1, and the linearity of the relation between log ratios of reinforcement rates and times or responses has been confirmed repeatedly with several species and procedural variations (Baum, 1979). This paper will argue that the search for data transformations that reveal simple and general functional relations of this sort, and that lend themselves to mathematical expression, is an essential component of the experimental analysis of behavior now and for the future.

Analyses of concurrent performances and related topics will be used to exemplify this argument because of the extensive data base and theoretical literature in that area. Other kinds of experiments and quantitative formulations could serve as well for the purpose, and the present emphasis is in no way intended to negate their value.

ON INVARIANCE

The principal business of behavior analysis is to isolate and describe regularities in the interaction of behavior and environment. In the first instance, this involves the identification of variables that determine some aspect of behavior in complex situations. Once such a variable has been identified, systematic manipulation often yields data with great regularity.

[Figure 1 about here. Log-log plot of time-allocation ratios against reinforcement ratios for Birds 2, 5, and 6.]

Fig. 1. Ratios of times spent responding on two concurrent VI schedules as a function of the ratios of scheduled reinforcement rates, for three pigeons in Findley's (1958) switching-key experiment. Both axes are logarithmic.

If the numerical data and the variables affecting them can be transformed to provide summary accounts that are invariant with respect to other factors in the situation, then we have in fact achieved effective specifications of both behavior and the controlling environment.

Stevens (1951) introduced his argument for the centrality of the concept of invariance in science with a quotation from Keyser: "Invariance is changelessness in the midst of change, permanence in a world of flux, the persistence of configurations that remain the same despite the swirl and stress of countless hosts of curious transformations" (from Stevens, 1951, p. 19). Behavior is particularly beset by the swirl and stress of countless hosts of variables, and its invariances are to be cherished.

Invariances may be observed at several levels. First, qualitative invariances may be identified in patterns of responding. For example, cumulative records of performance on fixed-interval (FI) schedules of reinforcement are substantially similar across temporal parameters, reinforcers, and species, including human infants (Lowe, Beasty, & Bentall, 1983). Indeed, this invariance is so pervasive that its failure to occur routinely in adult humans has led to some intriguing studies. For example, Matthews, Shimoff, Catania, and Sagvolden (1977) attempted to duplicate some aspects of research with nonverbal organisms by omitting the usual instructions and arranging the equivalent of a consummatory response for their adult human subjects. However, they obtained only infrequent evidence of scalloping despite effective control by interval (as opposed to ratio) contingencies. In a variation of this approach, Buskist, Bennett, and Miller (1981) employed food as the reinforcer (rather than the usual points) and varied explicit instructional contingencies across groups of subjects. They obtained moderate scalloped patterns in some subjects under one of their instructional constraints. Although it is clear that much work is needed to identify the determiners of FI response patterns in adult humans, the point here is that the breakdown of an invariance can serve as an effective occasion for such research.

A second level of invariance involves quantitative relations. With reference to the example of FI performance, it is often found that the time of transition from pausing to responding bears a simple proportional relation to the length of the interval when it is varied within an experiment (e.g., Nevin, 1973; Schneider, 1969; but see Lowe, Harzem, & Spencer, 1979). This, then, is an example of an invariant relation across interval values within an experiment. Such a result gains in importance to the extent that a similar proportional relation describes the data of systematic replications with other species, responses, and reinforcers. If the constant of proportionality is itself constant across experiments, a quite high level of invariance - parameter invariance - is demonstrated. A yet higher level, the invariance of a parameter that enters into a number of different quantitative relations, has been achieved in physics (for example, Planck's constant and the speed of light), but for the present we may have to be content with invariances at the level illustrated in Figure 1.

THE GENERALIZED MATCHING LAW

By inspection, it appears that the functions of Figure 1 do not deviate systematically from linearity. Therefore we can describe the data with the equation

B1/B2 = c (r1/r2)^a, (1)

or, in the logarithmic form of Figure 1,

log (B1/B2) = a log (r1/r2) + log c, (2)

where B1 and B2 refer to behavior allocation to Alternatives 1 and 2 (measured, in this case, as time), and r1 and r2 refer to the alternative rates of reinforcement. The parameter a may be construed as the sensitivity of behavior ratios to reinforcement ratios, because reinforcement ratios have progressively larger effects on behavior ratios as a departs increasingly from 0. The parameter c may be construed as bias toward one alternative or the other, because it captures deviations of the behavior ratio from 1.0 when reinforcement ratios are equal (Baum, 1974). When both a and c are 1.0, Herrnstein's (1970) well known matching law results. For the data of Figure 1, a is approximately 0.7, 1.0, and 0.6 for Birds 2, 5, and 6, respectively, while c does not depart appreciably from 1.0. When a is less than 1.0, the result is termed undermatching, a fairly common outcome in studies of this sort.
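Fitting Equation 2 to data is a matter of simple linear regression on log-log coordinates. The sketch below is purely illustrative: the behavior and reinforcement values are hypothetical numbers of the sort plotted in Figure 1, not Findley's data, and all variable names are my own.

```python
import numpy as np

# Hypothetical session totals for two concurrent alternatives (not Findley's data).
B1 = np.array([120.0, 210.0, 300.0, 480.0, 650.0])  # time (or responses) on Alternative 1
B2 = np.array([480.0, 390.0, 300.0, 190.0, 110.0])  # time (or responses) on Alternative 2
r1 = np.array([10.0, 20.0, 30.0, 45.0, 60.0])       # reinforcers per hour on Alternative 1
r2 = np.array([60.0, 45.0, 30.0, 20.0, 10.0])       # reinforcers per hour on Alternative 2

# Equation 2: log(B1/B2) = a * log(r1/r2) + log c.
x = np.log10(r1 / r2)
y = np.log10(B1 / B2)

# Least-squares slope estimates sensitivity a; the intercept estimates log c.
a, log_c = np.polyfit(x, y, 1)
print(f"sensitivity a = {a:.2f}, bias c = {10 ** log_c:.2f}")
```

An estimate of a near 1.0 with c near 1.0 corresponds to strict matching; a value of a reliably below 1.0 is the undermatching just described.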

Several features of Equation 2 - known as the generalized matching law - deserve comment. First, why express behavior and reinforcement terms as log ratios? The answers are based on both convenience and logic. The logarithmic transformation yields rough linearity, and it is always easier to deal with linear than curvilinear relations because the function is readily characterized and systematic departures are easy to detect. Moreover, log ratios range from plus to minus infinity, so that relations between these variables will not be distorted by floor or ceiling effects. Finally, Prelec (1984), extending the arguments of Allen (1981, 1982) and Houston (1982), showed that choice data must conform to Equation 2 if two conditions are met. First, response ratios must remain constant when reinforcement rates are changed proportionally for all alternatives; and second, ratios of responses to two alternatives maintained by constant schedules must remain constant when the rate of reinforcement for a third alternative is changed. Failure of the data to conform to Equation 2 suggests that one or both of these conditions are violated. Given that these conditions are approximated in most studies of choice, the logically forced nature of the result may make it seem uninformative. However, the values of the parameters a and c are not forced, and thus may be informative.

This raises a second issue: In any real experiment, how does one know that a and c are not both affected by the reinforcement ratio itself, or by other confounding factors, varying simultaneously in opposite directions to cancel each other and produce spurious linearity? The answer lies in the interpretation of the parameters. If c is correctly identified as response bias, then varying some factor such as the relative effortfulness of the two responses should affect c but leave a constant. Conversely, varying some property of the alternative schedules such as their discriminability should affect a but leave c invariant.

HIGHER-ORDER DEPENDENT VARIABLES

Whatever the outcome of such experiments, they illustrate an important role of the parameters of mathematical expressions such as Equation 2. These parameters not only describe and summarize a given set of data, but also provide higher-order dependent variables, derived from data obtained within each of several sets of conditions and potentially related in an orderly way to some specifiable aspect of those conditions. For example, Todorov, Oliveira Castro, Hanna, Bittencourt de Sa, and Barreto (1983) have shown that a increases systematically with the length of training on each pair of a series of concurrent VI VI schedules, and decreases systematically as the subjects are exposed to more and more pairs of schedules. Miller, Saunders, and Bourland (1980) have shown that a decreases systematically as the stimuli correlated with the alternative schedules are made more similar, and Baum (1982a) has shown that a increases as the length and effortfulness of travel between alternatives are increased.

A further example of the potential value of higher-order dependent variables arises in the study of choice between qualitatively different reinforcers. Hursh (1978) studied monkeys on concurrent VI schedules that permitted them to obtain all their food and water within the experiment, and found that the ratio of responses on two food-producing levers was related to the ratio of food reinforcers in much the same way as shown in Figure 1, with a approximately 0.7 (estimated from Hursh's Figure 9). However, the ratio of food-lever responses to water-lever responses was inversely related to the ratio of food and water reinforcers, with a approximately -10 (estimated from Hursh's Figure 8). Rachlin, Kagel, and Battalio (1980) have suggested that a may serve as an index of the interchangeability or substitutability of the reinforcers, approaching 1.0 when they are completely substitutable and becoming strongly negative as they are made increasingly nonsubstitutable. Whether a will serve as an orderly index of substitutability may itself depend on whether the data are obtained in a closed or open economy (for further discussion, see Hursh, 1978, and in this issue). Whatever the outcome of these proposals, it will follow from the study of the higher-order dependent variable a (or a related analysis) in future research.

EXTENSIONS TO RELATED AREAS

Multiple Schedules

In Findley's (1958) experiment, both VI schedules were available simultaneously, each correlated with a distinctive stimulus, and the subjects could switch freely between them. As Catania (1966) noted, multiple schedules are similar to concurrent schedules of this sort except that the experimenter, rather than the subject, controls the alternation between components. Thus, Equation 2 might also fit multiple-schedule data.

Reynolds (1963b) used pigeons as subjects in the first parametric study of multiple VI VI schedules, with schedule components alternating every 3 min. He reported rates of key pecking in both components for 20 experimental conditions with various rates of food reinforcement arranged independently in the two components (see his Figures 1, 2, and 4). When these data are reexpressed as ratios of responses, averaged across subjects, and plotted on double-logarithmic axes in relation to the ratios of scheduled reinforcement rates, a linear function results, as shown in Figure 2. A number of subsequent studies with pigeons as subjects and food as the reinforcer have confirmed this relation (e.g., Lander & Irwin, 1968; Lobb & Davison, 1977). Data from other species and with other reinforcers are needed to ensure its generality, but at least provisionally it appears that the generalized matching law may describe performance on multiple schedules as well as that on concurrent schedules.

Although the value of a for multiple-schedule data is typically about 0.3 - decidedly lower than is typical for concurrent schedules - it is known that a depends on a number of factors that may be varied independently of the reinforcement ratio. For example, a has been shown to depend on the similarity of the component stimuli (White, Pipe, & McLean, 1984), as in concurrent schedules (see above). However, study of this higher-order dependent variable may suggest some differences as well as similarities between multiple and concurrent schedules. For example, Charman and Davison (1983) have shown that a increases systematically on multiple schedules as food deprivation is reduced, whereas McSweeney's (1975) data show that response ratios on concurrent schedules are unaffected by deprivation, and thus suggest that a is constant.

[Figure 2 about here. Log-log plot of response ratios against reinforcement ratios.]

Fig. 2. Ratios of key-peck rates in two components of a multiple VI VI schedule as a function of ratios of scheduled reinforcement rates. The data are geometric means for three pigeons, taken from Reynolds (1963b). Both axes are logarithmic.

McLean and White (1983) varied the rate of alternative reinforcement provided by a third schedule running concurrently with each of two multiple-schedule components, and found that a for multiple-schedule performance increased when the rate of concurrent reinforcement increased. By contrast, Davison (1982), Davison and Hunter (1976), and Reynolds (1963a) found that the rate of reinforcement for a third alternative had no effect on the relative distribution of responses between a pair of concurrent schedules (as required by Prelec's 1984 argument; see above). In general, a increases as overall levels of responding decrease in multiple schedules, but is relatively constant when overall levels of responding change in concurrent schedules. This difference may be ascribed to the special properties of simultaneous availability of concurrent schedules, and thus help us to understand the role of choice in schedule-controlled behavior.

Signal Detection

Given that the generalized matching law provides a good account of performance in both multiple and concurrent schedules, it is reasonable to consider its extension to their combination. As I pointed out some time ago (Nevin, 1969), the conventional yes-no signal-detection paradigm is such a combination. In discrete trials, the experimenter presents one of two stimuli, S1 or S2, as in multiple schedules. In the presence of either stimulus, two responses, B1 and B2, are available simultaneously as in concurrent schedules. Reinforcers are arranged for B1 given S1 and for B2 given S2. Varying the ratio of these reinforcers with the S1-S2 difference constant generates the isosensitivity curve, whereas varying the S1-S2 difference with the reinforcement schedules constant generates the isobias curve.

Davison and Tustin (1978) applied the generalized matching law to signal-detection performance by assuming that the behavior ratio, B1/B2, depends on the ratio of reinforcers, r1/r2, with S1 and S2 acting to bias responding toward B1 and B2, respectively. The resulting equations are:

On S1 trials,

log (B1/B2) = a log (r1/r2) + log c + log d. (3a)

On S2 trials,

log (B1/B2) = a log (r1/r2) + log c - log d. (3b)

These are just like Equation 2 for concurrent performance with the addition or subtraction of log d, the stimulus bias term. The parameter d measures the discriminability of S1 and S2, and may be estimated directly from the data (writing B1|S1 for B1 responses on S1 trials, and so on) by the expression

d = [(B1|S1 / B2|S1)(B2|S2 / B1|S2)]^(1/2), (4)

which is readily derived by subtracting Equation 3b from 3a and taking antilogs. Note that the reinforcement terms drop out in the derivation of this expression, so d should be invariant with respect to reinforcement ratios. Therefore, the relation between response ratios on S1 and S2 trials is the isosensitivity curve.

A comparable expression for overall response bias, b, may be derived by adding Equations 3a and 3b and taking antilogs:

b = [(B1|S1 / B2|S1)(B1|S2 / B2|S2)]^(1/2) = c (r1/r2)^a, (5)


or, in logarithmic form,

log b = a log (r1/r2) + log c. (6)

Note that the stimulus bias term is eliminated from Equation 5 and this expression, therefore, is the isobias curve (see McCarthy & Davison, 1981, for full discussion). Its logarithmic form, Equation 6, states that the combined behavior ratio - the geometric mean of the behavior ratios on S1 and S2 trials - will be related to the overall ratio of reinforcers according to the generalized matching law.
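The algebra behind Equations 4-6 is brief enough to display in full. The block below merely restates, in LaTeX notation, the subtraction and addition steps described above, with B_1|S_1 denoting B1 responses on S1 trials.

```latex
% Subtracting Equation 3b from 3a cancels the reinforcement and bias terms:
\log\frac{B_1|S_1}{B_2|S_1} - \log\frac{B_1|S_2}{B_2|S_2} = 2\log d
\quad\Longrightarrow\quad
d = \left[\frac{B_1|S_1}{B_2|S_1}\cdot\frac{B_2|S_2}{B_1|S_2}\right]^{1/2}. \tag{4}

% Adding Equations 3a and 3b cancels the stimulus term instead:
\log\frac{B_1|S_1}{B_2|S_1} + \log\frac{B_1|S_2}{B_2|S_2} = 2a\log\frac{r_1}{r_2} + 2\log c
\quad\Longrightarrow\quad
\log b = a\log\frac{r_1}{r_2} + \log c. \tag{6}
```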

To illustrate the independence of discriminability (d) and overall bias (b), I reanalyzed some data reported by Nevin, Olson, Mandell, and Yarensky (1975, Experiment 1). Two rats were trained to press one lever in the presence of a dim light (S1) and the other lever in the presence of a bright light (S2). S1 and S2 were presented equally often in discrete trials, and the probabilities of water reinforcement for B1 given S1 and for B2 given S2 were varied while the S1-S2 intensity difference was constant. The upper left panel of Figure 3 shows that, as required, d was little affected over a two-log-unit range of obtained reinforcer ratios. In a second procedure, the S1-S2 difference was varied while two constant pairs of reinforcement probabilities were arranged. As shown in the upper right panel of Figure 3, d varied systematically with the S1-S2 difference. The lower functions show that the relation between b and the obtained reinforcer ratio was the same at all stimulus differences explored, again as required by the Davison-Tustin account. These results have considerable generality across species and stimulus modalities (e.g., see McCarthy & Davison, 1980, and summary in Nevin, 1981, Fig. 5). This is the sort of analysis that is needed to test the interpretation of the parameters in a mathematical description of behavioral data (see above).
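In practice, d and b are computed from the response totals on each trial type. The counts below are made up for a single hypothetical condition (they are not the Nevin et al., 1975, data) and serve only to show the arithmetic of Equations 4 and 5.

```python
import math

# Hypothetical response totals from one condition of a yes-no detection procedure.
# B1_S1 means B1 responses on S1 trials, and so on.
B1_S1, B2_S1 = 180, 60   # S1 trials
B1_S2, B2_S2 = 70, 150   # S2 trials
r1, r2 = 45, 30          # obtained reinforcers for B1|S1 and B2|S2

# Equation 4: discriminability is the geometric mean of the two "correct/error" ratios.
d = math.sqrt((B1_S1 / B2_S1) * (B2_S2 / B1_S2))

# Equation 5: overall response bias is the geometric mean of the two B1/B2 ratios.
b = math.sqrt((B1_S1 / B2_S1) * (B1_S2 / B2_S2))

print(f"log d = {math.log10(d):.2f}, log b = {math.log10(b):.2f}, "
      f"log reinforcer ratio = {math.log10(r1 / r2):.2f}")
```

Plotting log b against the log reinforcer ratio across conditions, as in the lower panel of Figure 3, then estimates a and c exactly as for Equation 2.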

Because of its ability to distinguish stimulus discriminability (d), response bias (c), and reinforcement effects (a), the signal-detection paradigm lends itself to the analysis of many complex cases.

[Figure 3 about here. Upper panels plot d against the obtained reinforcement ratio and against the S2-S1 difference; the lower panel plots b against the obtained reinforcement ratio.]

Fig. 3. Upper panels: On the left, discriminability (d) as a function of obtained reinforcement ratios when stimulus differences were constant; on the right, d as a function of stimulus differences when reinforcement probabilities were constant in a signal-detection procedure (Nevin et al., 1975, Experiment 1). Lower panel: Bias (b) as a function of obtained reinforcement ratios for the data in the upper panels. The axes are logarithmic, and the right-hand set of functions has been displaced downward by one log unit for clarity.

For example, in social psychology, the analysis of helping may be advanced by regarding an observer's behavior as controlled jointly by three separable events (cf. Latane & Darley, 1970): Does a person require aid or not (d)? Does the situation make helping easy or difficult (c)? How important are the consequences of helping, or of ignoring the situation (a)? There is no reason to expect parameter invariance in such a setting: Discrimination of the need for help might very well depend on the consequences of helping or ignoring, as in a mutual survival situation. The point is simply that such questions could not be addressed unambiguously without the quantitative, analytic framework provided by Equations 3a and 3b, or similar relations. (For a review of applications of the signal-detection analysis in social psychology, see Martin & Rovira, 1981. For a wide-ranging review of applications in areas from recognition memory to military decisions, see Hutchinson, 1981.)

Virtually all applications outside the behavior-analytic literature have invoked the theory of signal detectability advanced by Swets, Tanner, and Birdsall (1961). According to this theory, sensory input (x) varies continuously and randomly about one mean value for the signal, and another for its absence. The organism is assumed to establish a decision criterion, c, and a rule - "respond B1 if x > c; respond B2 if x < c" - such that the expected value of each choice is maximized.
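The decision rule of this account is easy to render as a simulation. The sketch below assumes unit-variance normal distributions of the sensory input and simply fixes the criterion at an arbitrary value (in the theory, its placement would be chosen to maximize expected value); all names and numbers are hypothetical.

```python
import random

random.seed(1)

MU_S2, MU_S1 = 0.0, 1.5  # assumed means of the sensory input x on S2 and S1 trials
CRITERION = 0.75         # the decision criterion of the theory, fixed arbitrarily here
N_TRIALS = 10_000

def respond(x):
    """Criterion rule: emit B1 if x exceeds the criterion, otherwise B2."""
    return "B1" if x > CRITERION else "B2"

# Proportion of B1 responses on each trial type ("hits" and "false alarms").
p_B1_S1 = sum(respond(random.gauss(MU_S1, 1.0)) == "B1" for _ in range(N_TRIALS)) / N_TRIALS
p_B1_S2 = sum(respond(random.gauss(MU_S2, 1.0)) == "B1" for _ in range(N_TRIALS)) / N_TRIALS

print(f"P(B1 | S1) = {p_B1_S1:.2f}, P(B1 | S2) = {p_B1_S2:.2f}")
```

Sweeping the criterion across a range of values traces out an isosensitivity curve of the familiar form.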

Equations 3a and 3b account well for exactly the same data without invoking hypothetical private, mediating processes of this sort. Thus, the systematic extension of a simple quantitative formulation of performance on concurrent and multiple schedules permits contact with a wide range of traditional psychological problems, without the necessity of adopting a cognitive-process account.

Finally, it should be noted that the signal-detection paradigm is an instance of a conditional discrimination: If S1, reinforcement is available for B1, and if S2, for B2. This situation is very closely related to matching to sample: If red sample, reinforcement is available for pecking the red comparison, and likewise for other colors. In this situation, the sample may be construed as having a special instructional function, serving as a selector of discriminations (Cumming & Berryman, 1965). Carter and Werner (1978) conceptualize this selective function as a "rule" (e.g., if the sample is coded as red, peck the red comparison), although other cases suggest the acquisition of a matching concept (e.g., pick the comparison that is the same as the sample, for all possible samples). The conditional discrimination procedure has been used extensively by Sidman and his colleagues (e.g., Sidman & Tailby, 1982) to establish arbitrary stimulus-equivalence relations that may serve as the basis for extended classes which can in turn enter into more complex relations, approaching the complexity of natural language. Thus, the experimental analysis of behavior can make quantitative advances into the study of rules, concepts, and language-like relations - areas often seen as the preserve of cognitive psychology - in a way that is continuous with simple stimulus and schedule control (cf. Marr, Michael, and Shimp, in this issue).

FEEDBACK FUNCTIONS

The foregoing applications of the generalized matching law involve descriptive quantification at a level analogous to the gas laws of classical thermodynamics. The relations between pressure (P), volume (V), and temperature (T) may be summarized as PV = kT, where k is a gas-specific parameter. Note that P, V, and T may serve either as independent or as dependent variables. The law specifies how they must interact; it is not causal but descriptive of orderly interdependence. In related fashion, the generalized matching law describes the orderly interdependence of behavior and reinforcement, where neither is necessarily causal because reinforcers depend on prior behavior, and behavior is affected by prior reinforcers. By this definition, operant behavior is part of a continuous feedback system: Changes in behavior lead to changes in the environment, which produce further changes in behavior, and so forth, until a stable equilibrium is achieved. The generalized matching law describes such an equilibrium state and should be expressed as a relation between responses emitted and reinforcers obtained at equilibrium. Therefore, a complete understanding of behavior allocation must include the feedback process.

Neither Figure 1 nor Figure 2 was based on obtained reinforcers, because they were not reported. However, in both cases the obtained rates of reinforcement should have been close to the scheduled rates. Findley's (1958) VI schedules continued to run after a reinforcer was scheduled, so that some delay in obtaining one reinforcer (while responding on the alternative schedule, for example) would not delay the availability of the next. In Reynolds' (1963b) study, response rates were generally high enough so that reinforcers would have been obtained within a second or so of their availability, which would have a negligible impact on overall rate of reinforcement. Thus, there is little if any error in the reinforcement-rate ratios of Figures 1 and 2. The overwhelming majority of later studies have reported obtained reinforcers, and have confirmed the generalized matching law.

It is clear that obtained reinforcers depend both on the contingencies in force, as determined by the schedule of reinforcement, and on the rates and patterns of responding. This dependency may be captured by the feedback function, first proposed in quantitative form by Baum (1973). Heyman and Luce (1979), Nevin and Baum (1980), Prelec and Herrnstein (1978), Rachlin (1978), and Staddon and Motheral (1978) have explored various forms of the feedback function for interval schedules in simple and concurrent situations, and Prelec (1982) has provided a formal development of the topic.

Empirical feedback functions are typically based on data obtained through a series of conditions in which deprivation or alternative reinforcement is varied (e.g., Nevin & Baum, 1980, Figures 3, 4, 5). Thus, the schedule feedback function may not be relevant to stable performance under constant conditions. However, van Syckel (1983) has shown that behavior actually comes into contact with the feedback function during steady-state performance on both interval and ratio schedules. He divided each session into short periods, computed local response and reinforcement rates, and found that even after extended training there was sufficient variation in local response rates to result in a wide range of local reinforcement rates, and that the covariation of these rates was orderly. Thus, the feedback function is not merely a theoretical formality.

With the aid of modern computers it is possible to program arbitrary feedback functions interrelating any dimensions of behavior and environment that are of interest; we are no longer constrained to feedback relations derived from traditional interval and ratio schedules and their combination. Moreover, the feedback function itself can be changed in relation to shifts in behavior or the passage of time, as in adjusting schedules of reinforcement. Although it is not easy to predict the insights into behavioral processes that may follow from these possibilities, it seems certain that careful quantitative analyses will be required to maximize their potential.
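As one concrete example of the kind of simulation that such programming invites, the sketch below estimates an empirical feedback function for a single VI schedule by brute force: responses are generated randomly in time at a fixed rate, reinforcers are set up at exponentially distributed intervals, and the obtained reinforcer rate is tabulated against the response rate. The function name and parameter values are my own, and the sketch assumes the common arrangement in which the interval timer pauses while a set-up reinforcer awaits collection.

```python
import random

random.seed(2)

def obtained_rate(resp_per_min, vi_mean_s=60.0, session_s=36_000.0):
    """Return obtained reinforcers per hour on a simulated VI schedule
    (mean interval vi_mean_s seconds) when responses occur at random times
    at resp_per_min responses per minute."""
    t, reinforcers = 0.0, 0
    setup = random.expovariate(1.0 / vi_mean_s)  # time at which the next reinforcer is set up
    while True:
        t += random.expovariate(resp_per_min / 60.0)  # wait for the next response
        if t >= session_s:
            break
        if t >= setup:                                 # a reinforcer was waiting: collect it
            reinforcers += 1
            setup = t + random.expovariate(1.0 / vi_mean_s)  # next interval starts now
    return reinforcers * 3600.0 / session_s

for rate in (1, 2, 5, 10, 20, 60):
    print(f"{rate:3d} resp/min -> {obtained_rate(rate):5.1f} reinforcers/hr")
```

Plotting the obtained rate against the response rate gives the feedback function itself, and the same loop is easily modified to implement the arbitrary feedback relations just described.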

THEORETICAL DERIVATION

The achievement of a comprehensive quantitative description and the demonstration of parameter invariance under certain conditions are important advances in any area, but they seem not to be an adequate stopping point for many of us. Just as the gas laws could be derived from statistical thermodynamics, our quantitative regularities can be derived from assumptions about behavioral processes, stated in mathematical terms. The advantage of a mathematical theory, like a quantitative description, is that its terms are unambiguous; and "though it may be difficult to understand, it will not be easily misunderstood," as Skinner (1950) remarked of formal representations of behavioral data.

In recent years, the matching law for concurrent performances has been shown to follow from the assumption that reinforcers inhibit as well as excite behavior (Catania, 1973); from a model relating arousal to rate of reinforcement (Killeen, 1979); from a formal statement of Premack's relational principle of reinforcement (Donahoe, 1977); from an analogy to reversible chemical reactions in the steady state (Staddon, 1977); from a kinetic model of switches between alternatives (Myerson & Miezin, 1980); from the machinery of linear systems analysis (McDowell, 1980); from a momentary maximizing process that allocates each response to that alternative with the currently greater probability of reinforcement (Hinson & Staddon, 1983; Shimp, 1966; Staddon, Hinson, & Kram, 1981); from a global maximizing process that allocates responses over time so as to achieve the greatest possible total reinforcement (Baum, 1981; Rachlin, 1978; Staddon & Motheral, 1978); from a meliorating process whereby choice allocation shifts toward the alternative with the greater local rate of reinforcement (Herrnstein & Vaughan, 1980); and from difference equations adapted from the Rescorla-Wagner (1972) model of classical conditioning (Vaughan, 1982).

The generalized matching law (Equation 2) has been derived by assuming matching and then allowing subjective transformations on the values of the alternatives (Allen, 1981; see also Allen, 1982; Houston, 1982; Prelec, 1984), and by assuming imperfect substitutability between reinforcers within a global maximizing framework (Rachlin et al., 1980). It has also been derived from assumptions about maximizing the overall rate of reinforcement under certain schedule combinations (Houston & McNamara, 1981). A close approximation has been derived from assumptions about the burst-pause structure of behavior, together with a failure of discrimination between alternatives (Wearden, 1983). Computer simulation of a process based on all-or-none associations with a short-term forgetting function also generates simulation data that are well described by the generalized matching law (Shimp, 1984, and this issue), and there may well be other models for matching or generalized matching that I am not aware of. It appears, as Hinson and Staddon (1983) recently remarked, that "all roads lead to [generalized] matching."

Interestingly, a similar situation holds for the extension to signal detection discussed above. Isosensitivity and isobias curves that are empirically indistinguishable from Equations 3 and 4 may be derived from classical Thurstone-type signal-detection theory (Green & Swets, 1966); and expressions that are formally identical to Equations 4 and 5 may be derived as the asymptotic outcome of a linear learning process (Bush, Luce, & Rose, 1964) or from a matching process with stimulus generalization (Nevin, 1981). We seem to have clear evidence in support of the proposition that any description of data is compatible with an indefinitely large number of theories. We can eliminate some possibilities by careful reasoning and analysis, but uncounted others remain. The real question is whether this sort of exercise in theory development will lead to new insights into behavior.

I suggest that the answer to this question is yes. For example, Herrnstein's (1970) theoretical account of single-schedule performance in relation to rate of reinforcement invoked unmeasured, extraneous reinforcers in order to maintain consistency with the matching law for concurrent performances. Over a decade later, Davison (1982) arranged concurrent fixed-ratio schedules as explicit analogs to extraneous reinforcement, and obtained data that forced him to argue for a new version of the generalized matching law, in which sensitivity depended on the kind of schedule (ratio vs. interval) employed. McLean and White's (1983) study of multiple-schedule performances also employed an alternative schedule as an analog to extraneous reinforcement, and showed that multiple-schedule sensitivity increased with the rate of alternative reinforcement, unlike concurrent-schedule sensitivity (see above). As another example, Shimp's (1966) exploration of momentary maximizing in discrete-trial versions of concurrent VI VI led him to employ interdependent schedules and sequential data analyses not previously used in the study of schedule-controlled behavior. In a detailed theoretical analysis of optimal choice, Staddon et al. (1981) distinguished between discrete-trial and continuous choice procedures, and between independent and interdependent schedules in relation to momentary and molar maximizing, elaborating on Shimp's earlier approach. As a result, they were led to a new formulation of momentary maximizing in "clock space," which in turn permitted the identification and analysis of "hill-climbing" as a form of optimal choice (Hinson & Staddon, 1983). A number of other examples could be cited. Clearly, theoretical treatments can generate novel empirical analyses so long as the theoretical terms can be identified with aspects of behavior and environment.

CURRENT PROBLEMS AND FUTURE DEVELOPMENTS

Our field began with qualitative analyses of rates and patterns of responding in relation to contingencies of reinforcement, and the different effects of interval and ratio schedules were discovered quite early (Skinner, 1938). Nevertheless, Herrnstein's (1970) widely accepted law of effect has no place for this difference, and McDowell's (1980) intriguing linear-systems approach ignores response-reinforcer contingencies altogether. This is clearly an area in which these and related formulations need work (e.g., see Baum, 1981, and McLean & White, 1983, for alternative ways to accommodate the effects of ratio and interval schedules within a matching-law framework).

The different effects of ratio and interval contingencies can be modeled adequately by joining their respective feedback functions to assumptions of optimization or minimum deviation from a baseline condition (e.g., Baum, 1981; Rachlin, 1978; Staddon, 1979). These feedback models in turn speak to one of the most fundamental issues in our field: the nature of the reinforcement process. Should reinforcement be construed as strengthening the response that precedes it (or, more accurately, the class of which that response is a member)? Or should the increases in response rate that empirically define the process of reinforcement be construed instead as resulting from reallocation of behavior under various constraints (e.g., conservation - Allison, 1976; value matching - Mazur, 1975; or optimization - Rachlin, Battalio, Kagel, & Green, 1981)? My own work on behavioral momentum, showing that resistance to change is positively related to the rate of reinforcement, inclines me toward the traditional strengthening view (Nevin, 1974, 1979), but the notion of reallocation under constraint is a powerful new alternative.

A single model of the basic reinforcement process and of steady-state performance on various schedules would be a major achievement, but in my opinion the best developed of these attempts - Rachlin's optimality account - does not do well in predicting choices between interval and ratio schedules, or between terminal-link schedules in concurrent chains, which are closely related to concurrent schedules (see Baum & Nevin, 1981). The recent proliferation of models for concurrent chained-schedule performance (Davison, 1983; Fantino & Davison, 1983; Killeen, 1982) has led Davison (1984) to call for a halt in model-building and for renewed efforts at experimental analysis. Interestingly, his own analyses depend entirely on the descriptive use of the generalized matching law, and seek to identify conditions under which some of its parameters remain invariant - the analytic level at which the present article began.

It appears that we have developed a number of quantitative accounts of schedule performance, each of which does well with a substantial data set, but fails when extended to some related data. As an example, Herrnstein's (1970) insight that all action occurs in a context of alternatives led from the matching law for performance on concurrent schedules to his powerful and widely accepted statement of the quantitative law of effect for single schedules. However, the extension of his formulation to multiple schedules fails in a number of areas (see Williams, 1983, and Charman & Davison, 1983, for discussion), and the generalized matching law may do better. However, the latest application of the generalized matching law to multiple schedules, by McLean and White (1983), requires some ad hoc assumptions to deal adequately with behavioral contrast. My associates' and my efforts to quantify changes in behavior maintained by multiple schedules (Nevin, Mandell, & Atak, 1983) have been fairly successful in that area, but we have not been able to derive major aspects of steady-state performance from our formulation of resistance to change, and more work is needed (see Marr, 1984, for a promising start along these lines).

For the future, I look forward to progressive refinements of our quantitative accounts of steady-state performance, mergers of models developed in related areas, and extensions to behavioral transitions including acquisition and extinction. Integration of models developed for operant performance in the laboratory with those for foraging in the wild (e.g., Baum, 1982b; Fantino, Abarca, & Dunn, 1984; Lea, 1982; Staddon, 1980) is an exciting prospect. We seem poised on the brink of a major systematic unification, and clearly the quantitative analysis of behavior has brought us here.

COMMENT AND EVALUATION

An analysis of the contents of the Journal of the Experimental Analysis of Behavior (JEAB) since its founding in 1958 reveals a clear trend in the use of quantitative accounts. Figure 4 presents the proportion of all articles (excluding book reviews and notes on methodology or apparatus) that make use of mathematical expressions to describe data, or to derive predictions against which data may be compared. The trend is monotonically increasing and positively accelerated, and seems likely to continue (although it will surely reach an asymptote well below 1.0).

Some experimental analysts will regard the increasing use of mathematical descriptions with dismay.

[Figure 4 about here. Proportion of articles plotted against year, 1960-1980.]

Fig. 4. The proportion of all articles in the Journal of the Experimental Analysis of Behavior that employ quantitative descriptions of their data or that compare their data against predictions of mathematical accounts, in blocks of three years since the journal was founded.

Postexperimental transformation of the data to reveal order seems to be a different enterprise from laboratory manipulation of a variable to demonstrate control; a parameter of an equation, fitted to summary data from several experimental conditions conducted weeks or months apart, seems very remote from actual, ongoing behavior; and a function based on transformed data may conceal systematic effects that would be evident in the raw data. But this is nothing new in the analysis of behavior: The definition of the operant as a class of responses immediately obscures differences among instances of the class, and the very act of counting lever presses or key pecks puts the experimenter at a distance from the moment-to-moment actions of the organism. Moreover, the calculation of response rate or probability over any sample of time or trials also obscures variations in the pattern of responding within the sample under consideration. The issue is an old one: What is the appropriate level of analysis in the study of behavior? We have yet to articulate a better answer than that proposed by Skinner in 1938: We have found an appropriate level when the results are orderly and repeatable. The examples presented above show orderly and repeatable relations between response and reinforcement ratios, and higher-order dependent variables derived from these relations also behave in an orderly way. The possibility that molar relations of this sort may prove to be derivative from more local processes does nothing to diminish their value as ways to summarize and integrate data.

A related concern arises from the proliferation of mathematical theories in our field during recent years. Many JEAB readers would be aghast at the appearance of an article entitled, "Yet another way to derive the matching law," but I believe there is value in this sort of exercise so long as the theories from which our quantitative descriptions may be derived are themselves expressed in terms of behavioral and environmental events, as Skinner (1950) urged some time ago. If the parameters of our formulations are few relative to the number of data points described, and are clearly interpretable in relation to experimental variables, quantitative description and theory become ways of organizing and interrelating behavioral observations and identifying invariances that cut across diverse situations - the laws of behavior that our field is dedicated to finding.

REFERENCES

Allen, C. M. (1981). On the exponent in the "generalized" matching equation. Journal of the Experimental Analysis of Behavior, 35, 125-127.

Allen, C. M. (1982). Comment on Houston's arguments. Journal of the Experimental Analysis of Behavior, 38, 113-114.

Allison, J. (1976). Contrast, induction, facilitation, suppression, and conservation. Journal of the Experimental Analysis of Behavior, 25, 185-198.

Baum, W. M. (1973). The correlation-based law of effect. Journal of the Experimental Analysis of Behavior, 20, 137-153.

Baum, W. M. (1974). On two types of deviation from the matching law: Bias and undermatching. Journal of the Experimental Analysis of Behavior, 22, 231-242.

Baum, W. M. (1979). Matching, undermatching, and overmatching in studies of choice. Journal of the Experimental Analysis of Behavior, 32, 269-281.

Baum, W. M. (1981). Optimizing and the matching law as accounts of instrumental behavior. Journal of the Experimental Analysis of Behavior, 36, 387-403.

Baum, W. M. (1982a). Choice, changeover, and travel. Journal of the Experimental Analysis of Behavior, 38, 35-49.

Baum, W. M. (1982b). Instrumental behavior and foraging in the wild. In M. L. Commons, R. J. Herrnstein, & H. Rachlin (Eds.), Quantitative analyses of behavior: Vol. 2. Matching and maximizing accounts (pp. 227-240). Cambridge, MA: Ballinger.

Baum, W. M., & Nevin, J. A. (1981). Maximization theory: Some empirical problems. Behavioral and Brain Sciences, 4, 389-390.

Bush, R. R., Luce, R. D., & Rose, R. M. (1964). Learning models for psychophysics. In R. C. Atkinson (Ed.), Studies in mathematical psychology (pp. 201-217). Stanford, CA: Stanford University Press.

Buskist, W. F., Bennett, R. H., & Miller, H. L., Jr. (1981). Effects of instructional constraints on human fixed-interval performance. Journal of the Experimental Analysis of Behavior, 35, 217-225.

Carter, D. E., & Werner, T. J. (1978). Complex learning and information processing by pigeons: A critical analysis. Journal of the Experimental Analysis of Behavior, 29, 565-601.

Catania, A. C. (1966). Concurrent operants. In W. K. Honig (Ed.), Operant behavior: Areas of research and application (pp. 213-270). New York: Appleton-Century-Crofts.

Catania, A. C. (1973). Self-inhibiting effects of reinforcement. Journal of the Experimental Analysis of Behavior, 19, 517-526.

Charman, L., & Davison, M. (1983). On the effects of food deprivation and component reinforcer rates on multiple-schedule performance. Journal of the Experimental Analysis of Behavior, 40, 239-251.

Cumming, W. W., & Berryman, R. (1965). The complex discriminated operant: Studies of matching-to-sample and related problems. In D. I. Mostofsky (Ed.), Stimulus generalization. Stanford, CA: Stanford University Press.

Davison, M. (1982). Preference in concurrent variable-interval fixed-ratio schedules. Journal of the Experimental Analysis of Behavior, 37, 81-96.

Davison, M. (1983). Bias and sensitivity to reinforcement in a concurrent-chain schedule. Journal of the Experimental Analysis of Behavior, 40, 15-34.

Davison, M. (1984). The analysis of concurrent chain performance. In M. L. Commons, J. E. Mazur, J. A. Nevin, & H. Rachlin (Eds.), Quantitative analyses of behavior: Vol. 5. Reinforcement values: The effects of delay and intervening events. Cambridge, MA: Ballinger.

Davison, M. C., & Hunter, I. W. (1976). Performance on variable-interval schedules arranged singly and concurrently. Journal of the Experimental Analysis of Behavior, 25, 335-345.

Davison, M. C., & Tustin, R. D. (1978). The relation between the generalized matching law and signal-detection theory. Journal of the Experimental Analysis of Behavior, 29, 331-336.

Donahoe, J. W. (1977). Some implications of a relational principle of reinforcement. Journal of the Experimental Analysis of Behavior, 27, 341-350.

Fantino, E., Abarca, N., & Dunn, R. (1984). The delay-reduction hypothesis: Extensions to foraging and three-alternative choice. In M. L. Commons, J. E. Mazur, J. A. Nevin, & H. Rachlin (Eds.), Quantitative analyses of behavior: Vol. 5. Reinforcement values: The effects of delay and intervening events. Cambridge, MA: Ballinger.

Fantino, E., & Davison, M. (1983). Choice: Some quantitative relations. Journal of the Experimental Analysis of Behavior, 40, 1-13.

Findley, J. D. (1958). Preference and switching under concurrent scheduling. Journal of the Experimental Analysis of Behavior, 1, 123-144.

Green, D. M., & Swets, J. A. (1966). Signal detection theory and psychophysics. New York: Wiley.

Herrnstein, R. J. (1970). On the law of effect. Journal of the Experimental Analysis of Behavior, 13, 243-266.

Herrnstein, R. J., & Vaughan, W., Jr. (1980). Melioration and behavioral allocation. In J. E. R. Staddon (Ed.), Limits to action: The allocation of individual behavior (pp. 143-176). New York: Academic Press.

Heyman, G. M., & Luce, R. D. (1979). Operant matching is not a logical consequence of maximizing reinforcement rate. Animal Learning & Behavior, 7, 133-140.

Hinson, J. M., & Staddon, J. E. R. (1983). Matching, maximizing, and hill-climbing. Journal of the Experimental Analysis of Behavior, 40, 321-331.

Houston, A. I. (1982). The case against Allen's generalization of the matching law. Journal of the Experimental Analysis of Behavior, 38, 109-111.

Houston, A. I., & McNamara, J. (1981). How to maximize reward rate on two variable-interval paradigms. Journal of the Experimental Analysis of Behavior, 35, 367-396.


Hursh, S. R. (1978). The economics of daily consumption controlling food- and water-reinforced responding. Journal of the Experimental Analysis of Behavior, 29, 475-491.

Hutchinson, T. P. (1981). A review of some unusual applications of signal detection theory. Quality and Quantity, 15, 71-98.

Killeen, P. (1979). Arousal: Its genesis, modulation, and extinction. In M. D. Zeiler & P. Harzem (Eds.), Advances in analysis of behaviour: Vol. 1. Reinforcement and the organization of behaviour (pp. 31-78). Chichester, England: Wiley.

Killeen, P. (1982). Incentive theory: II. Models for choice. Journal of the Experimental Analysis of Behavior, 38, 217-232.

Lander, D. G., & Irwin, R. J. (1968). Multiple schedules: Effects of the distribution of reinforcements between components on the distribution of responses between components. Journal of the Experimental Analysis of Behavior, 11, 517-524.

Latane, B., & Darley, J. M. (1970). The unresponsive bystander: Why doesn't he help? New York: Appleton-Century-Crofts.

Lea, S. E. G. (1982). The mechanism of optimality in foraging. In M. L. Commons, R. J. Herrnstein, & H. Rachlin (Eds.), Quantitative analyses of behavior: Vol. 2. Matching and maximizing accounts (pp. 169-188). Cambridge, MA: Ballinger.

Lobb, B., & Davison, M. C. (1977). Multiple and concurrent schedule performance: Independence from concurrent and successive schedule contexts. Journal of the Experimental Analysis of Behavior, 28, 27-39.

Lowe, C. F., Beasty, A., & Bentall, R. P. (1983). The role of verbal behavior in human learning: Infant performance on fixed-interval schedules. Journal of the Experimental Analysis of Behavior, 39, 157-164.

Lowe, C. F., Harzem, P., & Spencer, P. T. (1979). Temporal control of behavior and the power law. Journal of the Experimental Analysis of Behavior, 31, 333-343.

Marr, M. J. (1984, May). On quantitative analysis of behavior. Paper presented at the meeting of the Association for Behavior Analysis, Nashville, TN.

Martin, W. W., & Rovira, M. (1981). Signal detection theory: Its implications for social psychology. Personality and Social Psychology Bulletin, 7, 232-239.

Matthews, B. A., Shimoff, E., Catania, A. C., & Sagvolden, T. (1977). Uninstructed human responding: Sensitivity to ratio and interval contingencies. Journal of the Experimental Analysis of Behavior, 27, 453-467.

Mazur, J. E. (1975). The matching law and quantifications related to Premack's principle. Journal of Experimental Psychology: Animal Behavior Processes, 1, 374-386.

McCarthy, D., & Davison, M. C. (1980). Independence of sensitivity to relative reinforcement rate and discriminability in signal detection. Journal of the Experimental Analysis of Behavior, 34, 273-284.

McCarthy, D., & Davison, M. (1981). Towards a behavioral theory of bias in signal detection. Perception and Psychophysics, 29, 371-382.

McDowell, J. J (1980). An analytic comparison of Herrnstein's equations and a multivariate rate equation. Journal of the Experimental Analysis of Behavior, 33, 397-408.

McLean, A. P., & White, K. G. (1983). Temporal constraint on choice: Sensitivity and bias in multiple schedules. Journal of the Experimental Analysis of Behavior, 39, 405-426.

McSweeney, F. K. (1975). Concurrent schedule responding as a function of body weight. Animal Learning & Behavior, 3, 264-270.

Miller, J. T., Saunders, S. S., & Bourland, G. (1980). The role of stimulus disparity in concurrently available schedules. Animal Learning & Behavior, 8, 635-641.

Myerson, J., & Miezin, F. M. (1980). The kinetics of choice: An operant systems analysis. Psychological Review, 87, 160-174.

Nevin, J. A. (1969). Signal detection theory and operant behavior: A review of David M. Green and John A. Swets' Signal Detection Theory and Psychophysics. Journal of the Experimental Analysis of Behavior, 12, 475-480.

Nevin, J. A. (1973). The maintenance of behavior. In J. A. Nevin & G. S. Reynolds (Eds.), The study of behavior (pp. 201-236). Glenview, IL: Scott, Foresman.

Nevin, J. A. (1974). Response strength in multiple schedules. Journal of the Experimental Analysis of Behavior, 21, 389-408.

Nevin, J. A. (1979). Reinforcement schedules and response strength. In M. D. Zeiler & P. Harzem (Eds.), Advances in analysis of behaviour: Vol. 1. Reinforcement and the organization of behaviour (pp. 117-158). Chichester, England: Wiley.

Nevin, J. A. (1981). Psychophysics and reinforcement schedules: An integration. In M. L. Commons & J. A. Nevin (Eds.), Quantitative analyses of behavior: Vol. 1. Discriminative properties of reinforcement schedules (pp. 3-27). Cambridge, MA: Ballinger.

Nevin, J. A., & Baum, W. M. (1980). Feedback functions for variable-interval reinforcement. Journal of the Experimental Analysis of Behavior, 34, 207-217.

Nevin, J. A., Mandell, C., & Atak, J. R. (1983). The analysis of behavioral momentum. Journal of the Experimental Analysis of Behavior, 39, 49-59.

Nevin, J. A., Olson, K., Mandell, C., & Yarensky, P. (1975). Differential reinforcement and signal detection. Journal of the Experimental Analysis of Behavior, 24, 355-367.

Prelec, D. (1982). Matching, maximizing, and the hyperbolic reinforcement feedback function. Psychological Review, 89, 189-230.

Prelec, D. (1984). The assumptions underlying the generalized matching law. Journal of the Experimental Analysis of Behavior, 41, 101-107.

Prelec, D., & Herrnstein, R. J. (1978). Feedback functions for reinforcement: A paradigmatic experiment. Animal Learning & Behavior, 6, 181-186.

Rachlin, H. (1978). A molar theory of reinforcement schedules. Journal of the Experimental Analysis of Behavior, 30, 345-360.

Rachlin, H., Battalio, R., Kagel, J., & Green, L. (1981). Maximization theory in behavioral psychology. Behavioral and Brain Sciences, 4, 371-417. (Includes commentary)

Rachlin, H., Kagel, J. H., & Battalio, R. C. (1980). Substitutability in time allocation. Psychological Review, 87, 355-374.

Rescorla, R. A., & Wagner, A. R. (1972). A theory of Pavlovian conditioning: Variations in the effectiveness of reinforcement and nonreinforcement. In A. H. Black & W. F. Prokasy (Eds.), Classical conditioning II: Current research and theory (pp. 64-99). New York: Appleton-Century-Crofts.

Reynolds, G. S. (1963a). On some determinants of choice in pigeons. Journal of the Experimental Analysis of Behavior, 6, 53-59.

Reynolds, G. S. (1963b). Some limitations on behavioral contrast and induction during successive discrimination. Journal of the Experimental Analysis of Behavior, 6, 131-139.

Schneider, B. A. (1969). A two-state analysis of fixed-interval responding in the pigeon. Journal of the Experimental Analysis of Behavior, 12, 677-687.

Shimp, C. P. (1966). Probabilistically reinforced choice behavior in pigeons. Journal of the Experimental Analysis of Behavior, 9, 443-455.

Shimp, C. P. (1984). Timing, learning, and forgetting. In J. Gibbon & L. Allan (Eds.), Timing and time perception (pp. 346-360). New York: New York Academy of Sciences.

Sidman, M., & Tailby, W. (1982). Conditional discrimination vs. matching to sample: An expansion of the testing paradigm. Journal of the Experimental Analysis of Behavior, 37, 5-22.

Skinner, B. F. (1938). The behavior of organisms. New York: Appleton-Century-Crofts.

Skinner, B. F. (1950). Are theories of learning necessary? Psychological Review, 57, 193-216.

Staddon, J. E. R. (1977). On Herrnstein's equation and related forms. Journal of the Experimental Analysis of Behavior, 28, 163-170.

Staddon, J. E. R. (1979). Operant behavior as adaptation to constraint. Journal of Experimental Psychology: General, 108, 48-67.

Staddon, J. E. R. (1980). Optimality analyses of operant behavior and their relation to optimal foraging. In J. E. R. Staddon (Ed.), Limits to action: The allocation of individual behavior (pp. 101-141). New York: Academic Press.

Staddon, J. E. R., Hinson, J. M., & Kram, R. (1981). Optimal choice. Journal of the Experimental Analysis of Behavior, 35, 397-412.

Staddon, J. E. R., & Motheral, S. (1978). On matching and maximizing in operant choice experiments. Psychological Review, 85, 436-444.

Stevens, S. S. (1951). Mathematics, measurement, and psychophysics. In S. S. Stevens (Ed.), Handbook of experimental psychology (pp. 1-49). New York: Wiley.

Swets, J. A., Tanner, W. P., & Birdsall, T. G. (1961). Decision processes in perception. Psychological Review, 68, 301-340.

Todorov, J. C., Oliveira Castro, J. M., Hanna, E. S., Bittencourt de Sa, M. C. N., & Barreto, M. Q. (1983). Choice, experience, and the generalized matching law. Journal of the Experimental Analysis of Behavior, 40, 99-111.

van Syckel, J. (1983). Variable-interval and variable-ratio feedback functions. Unpublished master's thesis, University of New Hampshire.

Vaughan, W., Jr. (1982). Choice and the Rescorla-Wagner model. In M. L. Commons, R. J. Herrnstein, & H. Rachlin (Eds.), Quantitative analyses of behavior: Vol. 2. Matching and maximizing accounts (pp. 263-279). Cambridge, MA: Ballinger.

Wearden, J. H. (1983). Undermatching and overmatching as deviations from the matching law. Journal of the Experimental Analysis of Behavior, 40, 333-340.

White, K. G., Pipe, M., & McLean, A. P. (1984). Stimulus and reinforcer relativity in multiple schedules: Local and dimensional effects on sensitivity to reinforcement. Journal of the Experimental Analysis of Behavior, 41, 69-81.

Williams, B. A. (1983). Another look at contrast in multiple schedules. Journal of the Experimental Analysis of Behavior, 39, 345-384.