



Date modified: October 28, 2013

UNIT 1

Operant Conditioning: a process by which behavior is modified by its consequences in the environment.

Consequences that increase the probability that a response will be repeated are called reinforcers. Consequences that decrease the probability that a response will be repeated are called punishers.

Prototype: Rat in a Skinner Box

Operant Response: Rat presses down on bar.
Change in Environment: Bar goes down a specified distance.
Consequence: Food pellet is presented.

Result: The rat presses the bar at a higher rate (rate = the number of responses in a period of time, e.g., responses per minute).

This increase in rate shows that the food pellet acted as a ________???

The term “operant” means that a response operates on and changes the environment. The environment, in turn, changes the behavior. Therefore, there is an interaction between behavior and environment.

Basic Issue in Behavior Theory: How Do We Define a Response?

The term operant refers to a class of actions that all have the same effect on the environment. It doesn’t matter if the rat presses the bar with the left paw, the right paw, or both paws; they’re all the same response. Historically, theorists have disagreed on how a response should be defined.

Molecular <-----------------------------------------------------> Molar
          Guthrie         Hull         Skinner         Tolman

Key point: reinforcers are defined by their effects on behavior, not by your intentions or how you think the subject feels.

Note: don’t use the term “reward.” It has no definition that is the same across situations.

Psych 302 Class Notes


Additional point on reinforcers: A response does not have to cause the reinforcer to occur to be affected by it; the reinforcer just has to occur right after the response. Superstitious behavior: behavior that is maintained by accidental reinforcement (reinforcement that the behavior does not actually produce).

Skinner (1948) showed this in pigeons by activating the grain hopper every 15 seconds regardless of what the bird was doing. In 6 of 8 cases, clearly defined responses were established (two observers agreed on the counting of these responses 100% of the time).

Examples:

Bird turned counter-clockwise around the cage 2 or 3 times between reinforcements.

Bird repeatedly stuck his head into one of the upper corners of the chamber.

“Two birds developed a pendulum motion of the head and body, in which the head was extended forward and swung from right to left with a sharp movement followed by a somewhat slower return. The body generally followed the movement and a few steps might be taken when it was extensive.”

Usually, a response was repeated 5 – 6 times between reinforcements.

Superstitions were most likely to develop with short intervals between reinforcements. Once the superstition was established, the interval could be gradually lengthened to 1 minute.

Relation of Operant Conditioning to Classical (Pavlovian) Conditioning

Classical conditioning is a process in which an association is formed between two stimuli that are presented one after the other. A response is learned through this process even though it has no consequences in the environment.

Repeatedly present: BELL -> FOOD

At first, the dog salivates only when food is placed in his mouth. Eventually, he also salivates when the bell is rung.

He gets the food whether or not he salivates when the bell is rung. This learned response has no effect on the presentation of the food, i.e., it has no consequences in the environment.

Classical conditioning is based on reflexes: a reflex is a simple, involuntary response to a stimulus (S -> R). Reflexes are not sensitive to their consequences.

Operant conditioning is based on operant responses, which are sensitive to their consequences (R -> S).

When explaining operant behavior, we emphasize stimuli that come after the behavior (consequences).

Law of Effect: behavior changes as the result of its consequences in the environment.

However, we also have to consider stimuli that come before the behavior (antecedents of the behavior). So the basic framework for analyzing behavior is as follows:

The ABCs of Behavior Analysis

Antecedent stimulus -> Behavior -> Consequence

Discriminative stimulus: a stimulus in the presence of which a response MAY be reinforced.

Delta stimulus: a stimulus in the presence of which a response is never reinforced.

Stimulus discrimination: a situation in which a response is more likely to occur in the presence of one stimulus than in the presence of another stimulus.

Stimulus discrimination is produced by repeatedly alternating between a discriminative stimulus and a delta stimulus or by repeatedly alternating between two discriminative stimuli.

Stimulus control: control of a response by a discriminative stimulus.

Relationship Between Operant Conditioning and Instrumental Conditioning

Operant and instrumental conditioning refer to the same learning process, but these terms represent very different research traditions, e.g., types of apparatus, types of procedures used, and ways of explaining behavior. See text, page 91 for a detailed comparison.

                      Operant                               Instrumental
Apparatus             Skinner box                           Mazes, runways
Procedures            Continuous access to response         Discrete trials
                      ("free-operant procedure")
Explanations          Descriptions of variables in the      Intervening variables
                      environment ("functional analysis")   (theoretical constructs)
Experimental designs  Single-subject designs                Group-statistical designs

Schedule of reinforcement: a rule that indicates when a response will be reinforced.

Basic Schedules:

Continuous Reinforcement (response reinforced every time)
Partial Reinforcement (response reinforced some of the time): FR, VR, FI, VI, FT, VT (see docs on 302.html page; FT and VT give reinforcement on a time schedule independently of the response)
Extinction (response never reinforced)

Fixed time: the interval between reinforcements is always the same. Variable time: the interval changes randomly between reinforcements.

Both schedules tend to produce superstitious behavior.

Note: there is a big difference between fixed-interval schedules and fixed-time schedules, and between variable-interval schedules and variable-time schedules, so be sure to keep the difference in mind when writing about FI and VI in your paper.

Each schedule (Continuous Reinforcement (CRF), FR, VR, FI, and VI) produces a typical rate and pattern of responding. (Do not copy or paraphrase the following descriptions in your paper. Wait a few minutes after reading and then summarize what you remember; then check back to see if it’s accurate and complete. Apply these descriptions to your examples of the schedules.)

CRF: the subject responds at a steady, moderate pace but soon slows and then stops because the reinforcer gradually loses its effectiveness due to satiation.

Satiation: a gradual decrease in the ability of a reinforcer to maintain behavior (i.e., keep it going) as the subject consumes more and more of it (uses it in some way, like eating it, drinking it, or interacting with it by observing or manipulating it, as with a video game). This process is shown by a gradual decrease in the rate of responding.

FR: after reinforcement the subject responds very rapidly and steadily until the next reinforcement. If the response requirement is relatively high (e.g., FR 50 rather than FR 5), there may be a short pause after reinforcement and then an abrupt shift to rapid, steady responding (post-reinforcement pause).

VR: same as FR but without any pauses after reinforcement, so overall it is a more powerful schedule. Less time is wasted pausing.

A = Term (Extinction)

B = Definition (decrease in responding when the response is no longer reinforced)

C = Example (rat stops bar pressing when it doesn’t produce food; child stops yelling when it doesn’t produce attention)


FI: after reinforcement there is no responding. Then, as time passes, there are more and more responses, and towards the end of the interval the subject responds at a very high, steady rate. After reinforcement, the pattern is repeated. On a cumulative record type of graph (described in online Powerpoint) the FI schedule produces a series of scallops (representing positive acceleration within each interval), so the pattern is known as the “FI scallop.”

VI: steady responding at a moderate rate. The pattern resembles the one on VR but it’s a lower rate because on VR the faster the subject responds, the faster it gets its next reinforcement, whereas on VI the subject cannot bring the next reinforcement any faster by responding faster.
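The VR-versus-VI rate difference described above can be sketched numerically. The following is my own illustrative toy model (the function names, the VR 20 ratio, and the VI 60-second value are assumptions, not from the notes): on a ratio schedule the reinforcement rate grows with the response rate, while on an interval schedule it is capped by the programmed interval.

```python
# Toy comparison of VR and VI feedback functions (illustrative only).

def reinforcers_per_minute_vr(response_rate, ratio=20):
    # VR 20: on average, one reinforcer per 20 responses, so the
    # reinforcement rate is proportional to the response rate.
    return response_rate / ratio

def reinforcers_per_minute_vi(response_rate, interval_s=60):
    # VI 60 s: at most one reinforcer per programmed interval,
    # no matter how fast the subject responds.
    scheduled_per_min = 60.0 / interval_s
    return min(scheduled_per_min, response_rate)

for rate in (10, 40):  # responses per minute
    print(rate,
          reinforcers_per_minute_vr(rate),
          reinforcers_per_minute_vi(rate))
# Quadrupling the response rate quadruples reinforcement on VR
# (0.5 -> 2.0) but leaves it unchanged on VI (1.0 -> 1.0), which is
# why VR sustains higher response rates.
```

This is only the "feedback function" of each schedule, not a model of the subject's behavior.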

Major Complex Schedules:

Concurrent schedules (e.g., Catania and Cutts, Concurrent VI EXT)
Multiple schedules (e.g., behavioral momentum studies, such as Multiple VI, VI+VT)
Chained schedules (used to create complex response sequences)
Avoidance conditioning, signaled or unsignaled (see PPT; used in Rescorla’s classical conditioning experiment)

Concurrent schedules: two or more schedules operate at the same time and the subject can obtain reinforcement from both schedules. For example, concurrent VI 15 seconds, VI 60 seconds. The subject makes responses on both schedules, but 4 times as many responses on VI 15 seconds as on VI 60 seconds because VI 15 gives reinforcement 4 times as often as VI 60 (4 per minute vs. 1 per minute). This match between the ratio of responses and the ratio of reinforcements is called the Matching Law.
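The Matching Law prediction for the concurrent VI 15 s / VI 60 s example can be computed directly. This helper is my own sketch, not from the course materials:

```python
# Sketch of the Matching Law prediction for concurrent VI schedules.

def predicted_response_ratio(vi_a_seconds, vi_b_seconds):
    """Ratio of responses on schedule A to responses on schedule B,
    predicted to equal the ratio of the reinforcement rates."""
    rft_a = 60.0 / vi_a_seconds  # reinforcements per minute on A
    rft_b = 60.0 / vi_b_seconds  # reinforcements per minute on B
    return rft_a / rft_b

# Concurrent VI 15 s, VI 60 s: 4 vs. 1 reinforcements per minute,
# so the subject should respond 4 times as often on the VI 15 schedule.
print(predicted_response_ratio(15, 60))  # -> 4.0
```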

Problem: Concurrent schedules usually produce superstitious behavior that prevents subjects from showing the matching relation between reinforcement ratios and response ratios. For example, the rat presses left a few times, then right, and gets reinforced on the right. Some extra responses on the left may occur because they are superstitious; they happened to occur just before reinforcement on the right.

Catania & Cutts used concurrent schedules to study human superstition…

Diagram: two buttons, each with a counter above it; one button operates on a VI schedule and the other on extinction (EXT).


College students could press either button to get as many points as they could. One button gave points on VI and the other gave no points (extinction). The students often pressed the extinction button even though it got them no points. This was superstitious behavior. It happened because sometimes they would press right a few times, then press left and get a point. Not only were the responses on the left reinforced, but so were the responses on the right and the response of changing over between buttons. They kept switching back and forth and making a lot of unnecessary responses on the right.

This is the kind of thing that happens with rats and pigeons. A procedure was developed for these species to eliminate superstitions:

Changeover delay (COD): ensures a minimum delay interval between the last response on one button and reinforcement for pressing the other button.

For example, instead of going:

R-R-R-R-L-Point (if VI was ready to give a point)

it went:

R-R-R-R-L-NO POINT-DELAY (such as 10 seconds)-L-Point

so there was a delay between the last R response and getting a point. This eliminated the superstitious behavior. It has the same effect with rats and pigeons.
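The COD rule above can be sketched as a simple timing check. This is illustrative logic only, not lab software; the 10-second COD and the response times follow the example:

```python
# Sketch of the changeover-delay (COD) rule.

def reinforce_allowed(other_key_press_times, press_time, cod=10.0):
    """A response may be reinforced only if at least `cod` seconds
    have passed since every response on the other key."""
    return all(press_time - t >= cod for t in other_key_press_times)

# R-R-R-R at t = 0..3 s on the right, then L at t = 4 s: blocked by the COD.
print(reinforce_allowed([0, 1, 2, 3], press_time=4))   # -> False
# Pressing L again at t = 14 s, after the 10-s delay has elapsed: allowed.
print(reinforce_allowed([0, 1, 2, 3], press_time=14))  # -> True
```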

In interviews, the students did not accurately describe the causes of their behavior. In the first part they said they had to press RIGHT a few times then LEFT but it didn’t always work. This was wrong. They didn’t have to press RIGHT at all.

In the second part they said the RIGHT button didn’t do anything but it did. It delayed their next point for pressing the left button. So it was the contingencies that controlled their behavior, not their “beliefs”. They could not describe the real reasons for their behavior.

Multiple schedules: two or more simpler schedules are presented one after the other, each of which is correlated with a distinctive stimulus. For example, the crosswalk at Nutwood and Commonwealth: mult CRF (no light) EXT (red light).

Mixed schedules: same as multiple but all schedules are presented with the same stimulus. Most crosswalks don’t have stimulus cues. It’s a mixed schedule, so you have to press unless you saw someone else press.

Multiple Schedules are used to study…

Behavioral momentum: the tendency of behavior to persist when conditions are imposed that normally cause it to slow down or stop.


Examples of these obstacles to responding are: extinction, satiation, punishment, and distracting stimuli (alternative sources of reinforcement, like hearing a TV program while writing a paper).

In physics: velocity x mass = momentum. Mass is something that gives an object weight.

In behavior analysis: rate of response x “mass” = behavioral momentum.

“Mass” is built up by increasing the rate of reinforcement in the presence of a stimulus.

Behavioral momentum is studied with multiple schedules of reinforcement: two or more simpler schedules are presented, one after the other, and while each schedule is in operation a different stimulus is present. E.g., mult CRF (GREEN) VI 1 minute (ORANGE).

After a while on this mult schedule, what happens if you switch to extinction in both components?

You don’t get the usual partial reinforcement effect (where resistance to extinction, the number of responses before stopping, is greater after partial reinforcement than after continuous reinforcement).

In multiple schedules you get a reversed partial reinforcement effect: resistance to extinction is greater after continuous reinforcement, CRF (GREEN), than after partial reinforcement, VI 1 minute (ORANGE).

Higher resistance to extinction (more responses before stopping) means more behavioral momentum, which depends on “mass.” Mass depends on rate of reinforcement, and continuous reinforcement gives a higher rate of reinforcement than partial reinforcement.
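The momentum analogy above can be sketched with toy numbers. Everything here (the proportionality assumption and the specific rates) is my own illustration, not data from the notes:

```python
# Toy sketch of the behavioral-momentum analogy.

def behavioral_momentum(response_rate, reinforcement_rate):
    # Assume "mass" is simply proportional to the reinforcement rate
    # obtained in the presence of the stimulus.
    mass = reinforcement_rate
    return response_rate * mass

# Same response rate in both components, but CRF (GREEN) delivers far
# more reinforcers per minute than VI 1 min (ORANGE), so it builds more
# "mass" and hence more momentum, that is, more resistance to extinction.
green = behavioral_momentum(response_rate=30, reinforcement_rate=30)
orange = behavioral_momentum(response_rate=30, reinforcement_rate=1)
print(green > orange)  # -> True
```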

See discussion of behavioral momentum at: http://psych.fullerton.edu/navarick/bm.doc

The original article is at: http://psych.fullerton.edu/navarick/bm.pdf

——————

Response chain: a set of responses that must be performed in a certain order to obtain reinforcement.


Each response produces a stimulus that produces the next response, and so on, until the end of the chain.

In laboratory situations, the final stimulus is a primary reinforcer. Each of the other stimuli is a secondary reinforcer for the response that produced it and a discriminative stimulus for the response that follows it.

R -> S -> R -> S -> R -> S

Video of 3-Response Chain: http://www.youtube.com/watch?v=XpbBgxvVJeM

Tone on -> Response 1: Rat deposits marble on platform -> Tone off -> Response 2: Pulls rod -> Light on above bar -> Response 3: Presses bar -> Food

Each stimulus is a secondary reinforcer for the preceding response and a discriminative stimulus for the next response:

For example, the light above the bar is a secondary reinforcer for pulling the rod and a discriminative stimulus for pressing the bar.

————————

Avoidance behavior: behavior that prevents the occurrence of a punishing event.

Signaled avoidance:

Warning signal ... no response ... punishment
or
Warning signal ... response ... no punishment

Bee example: Warning signal = buzzing around ear. Punishment = sting.

See Powerpoint online for more info on signaled avoidance.



Unsignaled avoidance:

No response ... punishment ... punishment ... punishment ... punishment
or
Response ... no punishment ... response ... no punishment ...

Faucet example: keep pressing knob to prevent water from shutting off.

More on this in discussion of Rescorla’s experiment on contingencies in classical conditioning.

Basic Operant Terms (Don’t copy or paraphrase any of these definitions in your paper. Describe all concepts in your own words.)

Primary reinforcer: a stimulus that reinforces behavior naturally and requires no training to become a reinforcer (e.g., food, water); likewise, a naturally punishing stimulus such as pain is a primary punisher.

Establishing operation (EO): a procedure that influences the effectiveness of a consequence as a reinforcer or punisher (e.g. food deprivation makes food a more effective reinforcer).

Secondary reinforcer: a stimulus that becomes a reinforcer by being paired with another reinforcer (either a primary or secondary reinforcer).

Extinction: withholding reinforcement after the response occurs; extinction results in a gradual slowing down of the response and finally the response stops. (Note: this is not the same as negative punishment.)

Negative punishment has the following pattern:
Positive reinforcer present -> response occurs -> positive reinforcer removed

Extinction has the following pattern:
Response occurs -> no reinforcer

Schedule of reinforcement: the contingency between a response and a reinforcer; a rule that specifies the conditions under which a response will be reinforced.

Satiation: repeatedly consuming a reinforcer temporarily reduces its ability to maintain behavior (as shown by a decrease in response rate).

Continuous reinforcement: a schedule in which a response is reinforced every time it occurs. Produces steady responding at a moderate rate but the responding quickly slows down due to satiation.


Partial reinforcement: a schedule in which some but not all occurrences of the response are reinforced. Results in slower satiation because there is more time (and more responses) between reinforcers.

The basic schedules of partial reinforcement are fixed ratio (FR), variable ratio (VR), fixed interval (FI), and variable interval (VI). Each schedule has a distinctive effect on the rate and pattern of responding (“pattern” refers to whether the rate of responding is constant or changes during the period leading up to the next reinforcement).

Fixed Time (FT) and Variable Time (VT) give reinforcement after an interval has elapsed regardless of responding. A response is not required to produce the reinforcement. These schedules result in a decrease in response rate. _________________

Reflex: a relatively simple, involuntary response to a stimulus. It consists of 3 stages:

1. Sensory: impulses go from sense organ to sensory nerve to central nervous system (CNS: spinal cord and brain).
2. Interneuron: impulses travel through a network of neurons in the spinal cord or brain.
3. Motor: impulses go from interneuron to motor nerve, which takes them to a muscle, gland, or organ.

Rene Descartes: 17th-century French philosopher and mathematician who originated the concept of the reflex and made it the basis for a theory of human behavior.

Psychological determinism: every human action has a cause that can be discovered and explained.

Descartes’ Dualism: there are two realities, the mental and the physical, and they function in fundamentally different ways…

Reflexes are a bodily function, a part of the physical world, and represent a portion of human behavior that is determined and involuntary. Reflexes are produced by stimuli in the environment.

Patellar (Knee-Jerk) Reflex


These reactions were said to be inborn.

Most human behavior is voluntary: it begins in the mind, not the environment. The principles of mental functioning may never be understood.

Therefore, according to Descartes, any behavior you observe that has been learned is being performed voluntarily and is the result of free will.

How the Concept of the Reflex Influenced the Early History of Psychology

Physiologist Ivan Pavlov showed that reflexes could be learned, for example,

BELL -> FOOD. Present these stimuli, one after the other, over and over, and eventually the dog salivates when you ring the bell much as he salivates when you put food in his mouth. Note that the dog gets the food whether or not he salivates when you ring the bell. Salivating in response to the bell has no consequences in terms of changing what happens in the situation.

Pavlov’s experiments suggested that Descartes’ concept of the reflex could be extended to all behavior.

Interpreting behavior in terms of reflexes is a form of environmental determinism, and it became the basis for the original version of Behaviorism as proposed by its founder, John B. Watson.

B. F. Skinner modified Watson’s approach by emphasizing the role that consequences play in determining behavior (the law of effect). Watson denied that there was such a thing as reward and punishment (it just looks that way) because in Pavlovian (Classical) Conditioning the behavior that subjects learn has no consequences.

Theories About What is Learned in Classical Conditioning

Stimulus-Response Theory (S-R): CS is associated with UR because CS occurs shortly before the UR.

CS -> US -> UR leads to CS -> CR

Pro: CR usually resembles UR.
Con: CR usually not identical to UR and often very different, e.g., drug tolerance studies. Also, studies of sensory preconditioning and US devaluation (below).
__________________________________________________

Stimulus-Stimulus Theory (S-S): Association forms between mental representations of the CS and US.

Pro: Studies of sensory preconditioning and US devaluation.


Sensory preconditioning: light -> tone, then tone -> food, gives light -> salivation. The light was never paired with a UR, as required by S-R theory.

US devaluation: tone -> food, then food -> poison, reduces the CR to the tone. S-R theory says the CR to the tone should not change because the tone was never followed by a weaker UR.

Con (for S-S theory): Does not always predict what response will be made; does not say how knowledge is translated into behavior.

_____________________________________________________

Preparatory Response Theory (R-S): CR makes the US either more rewarding (e.g., food) or less punishing (e.g., shock).

Pro: salivary CR makes food more palatable.
Con: giving extra reward for the CR has been found to reduce the CR.

____________________________________________________

No theory explains all the cases but each explains some, so we need all of them for a comprehensive theory. S-R and R-S help to explain how mental S-S associations get translated into behavior. ________________________

Also supporting S-S theory are experiments showing that a CS must be a useful source of information about whether or not the US will occur in order for the CS to be associated with the US.

Even if a stimulus precedes the US, and this happens over and over, it may not be associated with the US if it is a redundant stimulus, i.e., it adds no new information about the occurrence of the US. This is shown by experiments on blocking.

Compound CSs: CSs that consist of two or more elements.

BLOCKING: a situation in which a subject fails to learn an association between one element of a compound CS and a US because of prior learning of an association between the other element of the compound CS and the US.


Stages of training

                Stage 1                 Stage 2              Stage 3 (test for blocking)
Experimental:   Pair light with food    Light + tone         Tone presented by itself
                a number of times       paired with food
Control:        No training             Light + tone         Tone presented by itself
                                        paired with food

Results: Control group salivates in response to the tone but Experimental group doesn’t because for these subjects the light already predicts that food will occur (Stage 1 carries over to Stage 2) and the tone is redundant. So the light blocks the formation of an association between tone and food.

For an association to form between the CS and US, the subject must be “surprised” by the occurrence of the US. Surprise is measured by the amount remaining to be learned about whether or not the US will occur. This concept is the basis of the Rescorla-Wagner model, the prevailing framework for research on classical conditioning.
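The Rescorla-Wagner idea that learning is driven by surprise can be sketched in a few lines. This is the generic textbook form of the model, not the instructor's own code; the learning-rate values (alpha = 0.3, beta = 1.0, lambda = 1.0) and trial counts are my assumptions:

```python
# Minimal Rescorla-Wagner sketch of blocking.
# Delta V = alpha * beta * (lambda - V_total): learning is driven by
# "surprise," the difference between the maximum associative strength
# (lambda) and what all CSs present already predict (V_total).

def rw_trial(V, stimuli, lam, alpha=0.3, beta=1.0):
    v_total = sum(V[s] for s in stimuli)
    for s in stimuli:
        V[s] += alpha * beta * (lam - v_total)

# Experimental group: Stage 1 pairs light with food,
# Stage 2 pairs light + tone with food.
V_exp = {"light": 0.0, "tone": 0.0}
for _ in range(50):
    rw_trial(V_exp, ["light"], lam=1.0)          # Stage 1: light alone
for _ in range(50):
    rw_trial(V_exp, ["light", "tone"], lam=1.0)  # Stage 2: compound

# Control group: no Stage 1, only compound training.
V_ctl = {"light": 0.0, "tone": 0.0}
for _ in range(50):
    rw_trial(V_ctl, ["light", "tone"], lam=1.0)

print(V_exp["tone"])  # near 0: light already predicts food, tone is blocked
print(V_ctl["tone"])  # near 0.5: control group learns about the tone
```

Note how blocking falls out of the model with no extra machinery: in the experimental group the light already brings V_total to lambda, so there is no surprise left for the tone to absorb.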

………………………………………………………………………………………………………

UNIT 2

Overview: Two Theoretical Traditions in Research on Verbal Learning and Memory

Verbal learning: focus is on rate of learning (number of correct responses from trial to trial; number of trials to reach a criterion of mastery, like one recitation of the list without an error).

Memory: focus is on measures of retention as time passes after original learning (retention interval).

Both processes are often studied in the same experiment.

Associative Tradition

1. Originated with Ebbinghaus in the 1880s.
2. Applies principles of association proposed by Aristotle and the British Empiricist philosophers.
3. Basic principles involve temporal proximity between stimuli and repetition of practice trials.
4. KEY POINT: ASSOCIATIONS FORM AUTOMATICALLY BASED ON THESE TASK VARIABLES.


5. Often applies principles and concepts from classical and operant conditioning, which are associationistic in origin.
6. Methodology involves presenting lists of items to be memorized; it’s basically research on list-learning.
7. Items are often nonsense syllables to minimize effects of previous learning.

Cognitive Tradition

1. Originated with research in the 1930s by Frederic Bartlett on memory for paragraphs.
2. Emphasis is on naturalistic materials and effects of previously acquired knowledge.
3. Often studies a person’s strategies for mastering a task. ASSUMES LEARNING CANNOT BE EXPLAINED JUST IN TERMS OF THE TASK VARIABLES. The learner modifies the conditions of practice.
4. Learning involves forming mental representations of stimuli and seeing how they’re related. Strategies for learning involve perceiving or inventing relationships between these representations.

See online document on strategies for serial learning: http://psych.fullerton.edu/navarick/serstrat.doc

Additional Mnemonics for Serial Learning: pegword, method of loci (locations)

Suppose you have a serial list beginning: sunglasses, horse, penny.

Pegword mnemonic: First, you have to memorize a rhyme, “One is a bun, two is a shoe, three is a tree...”

The nouns are pegwords. Easy to visualize. You create a mental picture that shows the pegword and list word (i.e., the referents, the objects referred to) interacting. Probably a bizarre image.

One is a bun
Two is a shoe
Three is a tree
Four is a door
Five is a hive
Six is sticks
Seven is heaven
Eight is a gate


One is a bun: picture a hamburger bun wearing sunglasses. Two is a shoe: picture a horse wearing athletic shoes. Three is a tree: picture a tree on which the leaves are pennies.

During recall, recite the rhyme. Pegwords will trigger images of list words.

Method of loci: take a “mental walk” through a familiar area, noting landmarks, like entering the Humanities Building. First, you see the glass doors. Then you see the elevator doors. Then you see the inside of the elevators. Picture each location interacting with a list word, keeping the order of locations the same as the order of list words.

Glass doors: the handles are sunglasses.
Elevator doors: picture a horse standing in front of the door.
Inside the elevator: the doors open and you see a mountain of pennies!

Bizarre images help memory as long as they are not overused, in which case they become ordinary images. You need to create a mixture of ordinary and bizarre images.

For Free Recall, you can use the First-Letter or acronym technique. Form a word out of the 1st letters of the words you want to remember. Like the names of the 5 Great Lakes: Huron, Ontario, Michigan, Erie, Superior = HOMES.
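The first-letter technique is mechanical enough to express in a line of code. A small illustration of my own, using the Great Lakes example from the notes:

```python
# The first-letter (acronym) technique, applied mechanically.

lakes = ["Huron", "Ontario", "Michigan", "Erie", "Superior"]
acronym = "".join(name[0] for name in lakes)  # take each word's first letter
print(acronym)  # -> HOMES
```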

Mnemonics for Serial Learning: story construction, pegword system, method of loci.

Mnemonic for free recall or serial learning: first-letter (acronym) technique.

Mnemonics for paired-associate learning: Keyword method, face-name mnemonic.

The KEYWORD METHOD has 2 mediators between stimulus and response:

STIMULUS -> Keyword -> Mental picture -> RESPONSE

Bahnhof (German for “train station”) -> keyword “barn hops” -> mental picture of a barn hopping down the tracks in a train station -> “train station”

This is a weak link and requires repetitive practice. Also, a sentence works as well as a picture; it just has to show a relationship between the keyword and the English word.


FACE-NAME MNEMONIC (invented by Harry Lorayne, author and stage performer), similar to the keyword mnemonic:

STIMULUS -> Mental picture -> Substitute word for name -> RESPONSE

Big ears -> mental picture of carrots as earrings -> substitute word “carrot” -> “Garrett”

In keyword method, the keyword triggers the mental picture. In face-name mnemonic, the stimulus triggers the mental picture.

Mediated Transfer in the A-B, A-C Paradigm

In paired-associate learning, the A-B, A-C arrangement usually produces negative transfer.

However, it can also produce positive transfer if an intermediary list is learned between the A-B and A-C lists that establishes associations between Items B and C.

The following diagrams show how one set of associations can serve as a mediator and help us learn another set of associations. In the Experimental Group, Item B serves as a mediator between A and C. In the Control Group, Item X replaces B and it has no links to A and C. As a result, in Phase 3 the Experimental Group learns A-C faster than does the Control Group.

Chaining
Group     Phase 1 (France)   Phase 2 (Preparation*)   Phase 3 (Italy)
Exp       A-B                B-C                      A-C
Control   A-B                X-C                      A-C
Example:  Hello-Bonjour      Bonjour-Buongiorno       Hello-Buongiorno

Stimulus Equivalence
Exp       A-B                C-B                      A-C
Control   A-B                C-X                      A-C
Example:  Hello-Bonjour      Buongiorno-Bonjour       Hello-Buongiorno

Response Equivalence
Exp       B-A                B-C                      A-C
Control   B-A                X-C                      A-C
Example:  Bonjour-Hello      Bonjour-Buongiorno       Hello-Buongiorno
____________________________________________________________
*Preparation (mediation stage): associate the two foreign languages with each other.


Mediated transfer shows the advantages of thinking about information and establishing new associations by relating one idea to another. This process of relating one idea to another is called elaboration.

Elaboration is the key to improving learning and memory.

CONCEPT LEARNING

A concept is a category that we use to class together objects, situations, or events that have certain features in common.

There are two kinds of concepts.

Formal (artificial) concepts are defined by a specific rule, like the qualifications for voting or holding a driver’s license. A specific case either is or is not a member of the conceptual category.

Natural concepts are learned through everyday experience with a variety of examples and people may disagree on how well a specific case represents the concept, for example, the concept of furniture: Is a sofa an example of furniture? Is a floor lamp?

Or the concept of vehicle: Is a bus a vehicle? Is an elevator? A raft? A horse?

These all fit the dictionary.com definition: “any means in or by which someone travels or something is carried or conveyed; a means of conveyance or transport.” But they’re not all equally good (typical) examples.

List of Conceptual Rules (pairs of concepts are opposites, i.e., complementary)

Level 1 (refers to presence or absence of a single attribute):
Affirmation / Negation

Level 2 (refers to presence or absence of two attributes):
Conjunction / Disjunctive Absence
Disjunction / Conjunctive Absence
Exclusion / Implication

18

Level 3 (refer to presence or absence of pairs of attributes) Either/Or Both/Neither……………………………………………………………………………………………………UNIT 3

Memory Span Test

You’re given a list of digits or letters and immediately afterwards you’re asked to repeat those items in their original order. The longest list you can repeat without an error represents the span of your immediate memory. This memory span theoretically measures the capacity of the short-term store (short-term memory). Theoretically, the capacity limit of short-term memory applies to the number of units of information but not to the amount of information.

Chunk: a unit of information that’s composed of smaller units.

Bit: the smallest unit of information.

Chunking: grouping units of information in order to expand the amount of information held at one time in short-term memory.
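The idea can be shown in a short sketch. The digit string and the grouping into four-digit dates are made up for illustration; the point is only that regrouping leaves the amount of information the same while reducing the number of units.

```python
# Chunking: the same 12 digits held as 12 units vs. 3 familiar chunks.
digits = "149217761066"          # hypothetical string to remember

# Ungrouped: each digit is one unit in short-term memory.
ungrouped = list(digits)

# Grouped into familiar dates: each chunk counts as one unit.
chunks = [digits[i:i + 4] for i in range(0, len(digits), 4)]

print(len(ungrouped))  # 12 units
print(chunks)          # ['1492', '1776', '1066']
print(len(chunks))     # 3 units
```

Someone who recognizes 1492, 1776, and 1066 as dates holds three units instead of twelve, even though the amount of information is unchanged.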

EXECUTIVE SYSTEM: a cognitive process that controls other cognitive processes and allocates our attention to different aspects of a task. It usually comes into play when we need to adapt to new situations in which our habits and automatic actions would be ineffective.

Typical situations: planning and decision-making; correcting errors and trouble-shooting; overcoming strong habitual responses (preventing a response or stopping it once it has started).

Baddeley’s Model of Working Memory: an elaboration of Atkinson and Shiffrin’s concept of the short-term store (STS).

Working Memory has 2 functions: (1) to store speech-based and visual information; (2) to coordinate and manipulate the information currently being held in it.

Atkinson and Shiffrin’s STS stores speech-based information and carries out “control processes” necessary for entering information into long-term memory (like rehearsal and retrieving relevant information from long-term memory to encode new information, for example, chunking).


Central Executive (e.g., allocation of attention to concurrent tasks, retrieving relevant info from LTM), which supervises two subsystems:

    Visuospatial Sketchpad (visuo: images; spatial: maps)
    Phonological Loop (similar to STS; subvocal repetition)

EXPERIMENT BY WRIGHT ON MEMORY FOR SERIAL LISTS IN PIGEONS, MONKEYS, AND PEOPLE

Serial-Probe Recognition Task

The subject faced two small screens (5 inches by 4 inches), one above the other.

On each trial, a series of 4 color slides was projected on the top screen (1 second per slide for monkeys and people, 2 seconds per slide for pigeons). Pigeons and monkeys saw travel slides and the people saw abstract designs that were harder to remember.

Upper Screen: Slide 1 -> Slide 2 -> Slide 3 -> Slide 4 -> ……Delay….. ->

After a delay (0 – 10 seconds for pigeons, 0 – 30 seconds for monkeys, and 0 – 100 seconds for humans), a probe slide was projected on the lower screen that was either the SAME as one of previous four slides or DIFFERENT from all of the slides.

On a “Same” Trial: Lower Screen: Probe was the same as Slide 1, 2, 3, or 4.

On a “Different” Trial: Lower Screen: Probe was different from all four slides.

Each day, 10 Same Trials and 10 Different Trials were presented in a randomized order, with the probe delay held constant.

If the probe was the same as a slide just shown on the upper screen, the response for humans and monkeys was to push a lever to the right; for pigeons it was to peck a disk on the right side. If the probe was different, the response for humans and monkeys was to push a lever to the left; for pigeons it was to peck a disk on the left side.
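The daily session structure described above can be sketched as a simple simulation. This is an illustration only: the slide identifiers, pool size, and random seed are made up, and the real experiment used color slides rather than strings.

```python
import random

def make_session(slide_pool, n_same=10, n_diff=10, list_len=4, seed=0):
    """Build one day's trials: each trial is (slides, probe, answer).
    'same' probes repeat one of the list slides; 'different' probes
    come from outside the list. Trial order is randomized."""
    rng = random.Random(seed)
    trials = []
    for answer in ["same"] * n_same + ["different"] * n_diff:
        slides = rng.sample(slide_pool, list_len)
        if answer == "same":
            probe = rng.choice(slides)
        else:
            probe = rng.choice([s for s in slide_pool if s not in slides])
        trials.append((slides, probe, answer))
    rng.shuffle(trials)
    return trials

session = make_session([f"slide{i}" for i in range(20)])
print(len(session))  # 20 trials: 10 Same and 10 Different, shuffled
```

Note that the probe delay is held constant across a day's trials in the actual experiment, so it is not a per-trial parameter here.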


Reinforcers for correct responses were food for the monkeys and pigeons and just a tone for the humans.

Results

Looking at just the Same Trials, the basic question was: How did the percent of responses that were correct vary with the serial position of the slide that was probed?

In free recall experiments with humans, the serial position curve is U-shaped. The descending part is the primacy effect and the ascending part is the recency effect. Multistore theories of memory say that primacy represents retrieval from long-term memory and recency represents retrieval from short-term memory.

IF YOU CHECK THE GRAPHS ON PAGE 237 OF THE TEXT, you will see a U-shaped function for each species, implying that the U-shaped function has a biological basis.

BUT the U-shaped function occurred only at intermediate delays: 2 seconds for pigeons, 10 seconds for monkeys, and 25 seconds for humans. The text doesn’t say what happened at the other delays and what the researchers had to say about it. Here’s what happened:

At the shortest delays there was only a recency effect; at the longest delays there was only a primacy effect.

[Two graphs: % Correct plotted against Serial Position of Probed Slide, one for the shortest delays and one for the longest delays.]

So the U-shaped function was only a transition stage between the two other functions. There was no single function. The multistore model does not explain these complex findings.


Researchers’ Interpretation

The researchers argued that 2-factor interference theory handled the results the best.

In traditional interference studies, RI decreases over time and PI increases over time. 2-Factor theory says that learning the 2nd list extinguishes 1st-list responses. As time goes by, 1st-List responses spontaneously recover and increasingly compete with 2nd-list responses.

1. At the shortest delay, the end items interfered with memory for the beginning items (retroactive interference). (Comment: This could represent extinction.)

2. At the intermediate delays, memory for the beginning items improved. (Comment: This could represent spontaneous recovery.)

3. As the beginning items got stronger, they interfered more with memory for the end items. (proactive interference). (Comment: This could represent response competition.)

RETRIEVAL-INDUCED FORGETTING: BASIC PROCEDURES AND PRINCIPLES

(From Anderson, M. C. (2003). Rethinking interference theory: executive control and the mechanisms of forgetting. Journal of Memory and Language, 49, 415-445.)

Background

Classical interference theory said that forgetting occurs when we learn new information. In other words, forgetting is a side effect of storing new memories. The current view is that forgetting results from the process of retrieving information, not storing it.

To retrieve a piece of information we have to suppress other information that comes to mind in that situation. It’s like the Stroop Effect, where it takes longer to name the ink color of a word when the word spells the name of a different color (the word BLUE printed in red ink) than when it spells the name of the ink color itself. In the first case, the visual pattern (color + letters) produces competing responses because we have conflicting tendencies to both name and read the same stimulus. The problem has nothing to do with storing new memories; we can already name and read the stimulus. It’s a retrieval issue.

Basic problem: there are two competing responses associated with the same stimulus.

Stimulus: the word BLUE printed in red ink
Stronger (prepotent) response: “Blue” (reading the word)
Weaker response: “Red” (naming the ink color)

We must override the prepotent response to state the weaker response.

The situation is similar to the A-B, A-C procedure in studies of paired-associate learning. As discussed in Unit 2, this arrangement causes negative transfer when we learn List 2, the A-C list, because the B responses keep coming to mind and we have to suppress (inhibit) them. It also causes retroactive interference for the List 1 responses when we try to remember them right after learning List 2. As discussed in connection with the two-factor theory of interference, this suppression was clearly shown in the experiment by Barnes and Underwood (1959).

In cognitive theory, the mechanism that overrides the prepotent response is the Executive System that we previously discussed (see notes above). Overriding responses is one of its main functions. It comes into play both when we learn new information and when we attempt to retrieve information that is already in memory. According to M. C. Anderson, the process that the central executive uses to override competing information is inhibition. This is similar to classical interference theory.

Procedures and Basic Findings

Participants study a paired-associate list where the stimulus items are the names of conceptual categories like Fruits and Drinks and the response items are exemplars (positive instances) of those categories. A list can have several categories.


FRUIT-apple
DRINK-vodka
DRINK-gin
FRUIT-banana

After presentation of the list, participants practice retrieving some of the items in one of the categories. This is done by showing them the name of the category and the first letters of one of the words that went with it:

FRUIT-ap _____

Note that a distinction is now made between a practiced category (fruit) and an unpracticed category (drink).
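The cue format used in retrieval practice can be sketched as a small function. This is only an illustration of the cue format shown above; the two-letter stem length is an assumption (studies vary in how many letters they reveal).

```python
def practice_cue(category, word, stem_len=2):
    """Format a retrieval-practice cue: the category name plus the
    first letters of the target word, with the rest left blank."""
    return f"{category.upper()}-{word[:stem_len]} _____"

studied = [("fruit", "apple"), ("drink", "vodka"),
           ("drink", "gin"), ("fruit", "banana")]

# Practice retrieval of only some items from one category.
practiced = [("fruit", "apple")]
for category, word in practiced:
    print(practice_cue(category, word))  # FRUIT-ap _____
```

The unpracticed items (banana, and everything in the DRINK category) never appear as cues; the later test compares recall across these three item types.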

A later test shows that the probability of recalling apple goes up but the probability of recalling banana goes down in comparison to recall of words from the unpracticed category.

Typical recall percentages are shown below.

Within-Category Retrieval-Induced Forgetting:

Practiced Category: Fruits            Unpracticed Category: Drinks
Orange (practiced word): .73          Scotch: .50
Banana (unpracticed word): .38        Rum: .50

Cross-Category Retrieval-Induced Forgetting:


Experimental Group
Practiced Category: Red Objects       Unpracticed Category: Food
Blood (practiced word): .74           Radish: .22
Tomato (unpracticed word): .22        Bread: .36

Control Group (the Red Objects category is not studied)
Food
Radish: .39
Bread: .44

Note: All words are shown to participants in the same

In the Experimental Group, recall of radish was suppressed even though it was in an unpracticed category because participants on their own placed it in the Red Objects category. This shows that the same word can belong to different categories at the same time, and this allows interference to spread across categories.

Encoding-specificity principle: the chances of retrieving information are greater the more similar the situation during retrieval is to the situation during original learning.

Transfer-appropriate processing: retrieval is facilitated if the mental operations used to retrieve information are the same as those used to encode it.

Mental operations during encoding: generating associations, generating mental pictures, perceptual recognition.

Explicit test of memory: subject is asked to state previously learned information or to answer questions about it.


Implicit test of memory: a task that produces behavior that is potentially influenced by previously learned information but does not explicitly ask about that information.

Word fragment completion: subject is given some letters of a word while other letters are left blank, and subject is asked to complete the word.

Priming: the probability of retrieving the target information is increased if the subject was previously exposed to related information (the “prime”).

Repetition priming: The target was previously presented as part of another task (e.g., “table”) and acts as the prime in a test of implicit memory such as word fragment completion.

TAB______
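Word fragment completion can be sketched as a simple matching function. This is a toy illustration; the vocabulary list is made up, and the code only shows why a fragment admits several completions, which is what lets a prime tip the balance.

```python
def completes(fragment, word):
    """True if `word` fits `fragment`, where '_' marks a blank letter."""
    if len(word) != len(fragment):
        return False
    return all(f == "_" or f == w
               for f, w in zip(fragment.lower(), word.lower()))

vocabulary = ["table", "tabby", "cable", "tango"]
fragment = "tab__"

print([w for w in vocabulary if completes(fragment, w)])
# ['table', 'tabby']
```

Both "table" and "tabby" fit the fragment; repetition priming means that if "table" appeared in an earlier task, participants are more likely to produce that completion without being asked to remember anything.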

Picture fragment naming: another test of implicit memory. Weldon & Roediger found that subjects were more likely to name a fragmented (obscured) picture of an object if they had recently viewed a picture of it than if they had recently read a word naming it. However, they were more likely to complete a word fragment naming that object if they had recently seen the word than the picture.

SEE TEXT, PAGE 209

Interpretation: Transfer-appropriate processing is an example of the encoding specificity principle. When we encode information (put it into memory), part of the context that we encode is the mental operations that we used during studying. If those same mental operations are used during retrieval, the chances of retrieving the information that we encoded are improved.

[Diagram: results broken down by type of information studied (picture vs. word).]

Looking at pictures involves perceptual recognition processes. Reading words involves semantic processes (activating representations of information that are connected to words). Visual processing and semantic processing take place in different parts of the brain (visual: in occipital cortex in the back of the brain; semantic: information goes from occipital cortex to Wernicke's area in the temporal and parietal lobes). You won't be tested on brain anatomy; it's just to emphasize the different kinds of processing for pictures and words. On the left side of the diagram, above, perceptual recognition was used during retrieval, so the chances of completing the picture with the studied information were better if perceptual recognition was also used during studying.

On the right side of the diagram, semantic processes were used during retrieval so the chances of completing the word with the studied information were better if semantic processes were also used during studying.

……………………………………………………………………………………
RECONSTRUCTIVE MEMORY

There are two forms of retrieval: reproduction and reconstruction.

Reproduction: bringing information to mind exactly as it was originally stored (typical form of retrieval studied in the Ebbinghaus tradition of presenting lists of nonsense syllables or words and requiring exact recall).

Reconstruction: bringing to mind fragments of originally stored information plus invented material that fills in the gaps without recognizing that part of your recollection is fictitious.

The invented material draws upon previously stored schemas, conceptual frameworks that help us organize, understand, and behaviorally adapt to situations. These preconceptions tell us what should be there and they can lead to distortions in our recollections of what was there.

Bartlett’s Research

The concept of a schema was introduced by Frederic Bartlett in pioneering studies of memory for stories (the term schema itself came from Piaget). In his method of reproduction, participants would read a paragraph describing a complicated story with a lot of unusual details, and then they would come back after various periods of time and attempt to write down as much of the story as possible. They read the story twice, recalled it 15 minutes later, and then attempted to reproduce it after periods of weeks, months, and in some cases years. Bartlett’s methods of testing were not fully described and we don’t know what instructions he gave to the participants.

Bartlett wanted to get away from Ebbinghaus’s approach of using nonsense syllables to eliminate effects of personal interests and culture on memory. Bartlett maintained that we had to look at memory in the context of other influences on a person’s thinking.

Bartlett was mostly concerned with qualitative changes in memory over time. In the Ebbinghaus tradition, the focus is on quantitative changes. Shown below is one of Bartlett’s stories with some key parts highlighted because of the ways in which they were distorted (discussed below).

The War of the Ghosts

There were two young men down by the river Equlac who were hunting seals. It became foggy. The young men heard paddles and canoes. They said, "there must be a war party." They hid behind a log. Canoes were on the river, one of the canoes approached the young men. There were five men in the canoe. One of the men said "What do you think? We are going down the river to make war. Do you want to come along?" One of the young men said "I don't have arrows" "We have arrows in the canoe." The other young man said "I can't go. My relatives don't know where I am." He turns to the other "But you can go." So he goes with them. They go down the river to a town on the other side of Kalama. They go to war on the river. Many men were killed. Then one of the warriors said "The Indian has been shot. We should go." Then the young man thinks "I am with ghosts. But I do not feel sick." They go back to Equlac. The young man goes home and builds a fire. He tells his relatives "I accompanied the ghosts, many men were killed. They told me I had been shot but I don't feel sick." The sun rose and he fell. Then black came out of his mouth. His face became distorted. He was dead.

Basic Findings

1. As time went by, more and more details were omitted.

Here is what one participant wrote after 2 weeks about the part highlighted in blue:

So the young man went with them, and they fought the people, and many were killed on both sides. And then he heard shouting: “The Indian is wounded. Let us return.” And he heard the people say: “They are the ghosts.” He did not know he was wounded, and returned in Etishu (?). The people collected round him and bathed his wounds, and he said he had fought with the ghosts. Then he became quiet. But in the night he was convulsed, and something black came out of his mouth.

And the people cried: “He is dead.”


2. Details that were remembered were often simplified, and if they were strange to the participants (British college students), they were transformed into something familiar, so canoes became boats and hunting seals became fishing.

3. Long-term recollections were grossly distorted, preserving just one or two striking details.

4. In general, the reproductions reflected the interests, personality, and culture of the participants.

EVALUATING EYEWITNESS TESTIMONY

Bartlett's studies basically show that a person's recollections are highly susceptible to distortions and have to be evaluated cautiously. This is especially important in police interrogations and eyewitness testimony at trials. A lot of research has shown that recollections can be distorted in predictable ways by the kinds of questions that an interrogator asks. A pioneering researcher in this field is Elizabeth Loftus. Discussed below are findings from a study by Loftus and Palmer (1974).

Leading Questions

Participants watched a brief movie of a traffic accident and were then asked how fast the cars were going when they bumped, collided, contacted, hit, or smashed into each other.

Average speed estimate in miles per hour:

Contacted: 31.8
Hit: 34.0
Bumped: 38.1
Collided: 39.3
Smashed: 40.8
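The verb effect can be summarized in code using the numbers reported above. This is just a tabulation of the published means, not a reanalysis of the data.

```python
# Loftus & Palmer (1974): mean speed estimate (mph) by question verb.
estimates = {
    "contacted": 31.8,
    "hit": 34.0,
    "bumped": 38.1,
    "collided": 39.3,
    "smashed": 40.8,
}

# Listing the verbs from lowest to highest estimate shows that more
# violent-sounding verbs produced higher speed judgments.
for verb, mph in sorted(estimates.items(), key=lambda kv: kv[1]):
    print(f"{verb:>9}: {mph} mph")

spread = max(estimates.values()) - min(estimates.values())
print(f"Wording alone shifted estimates by {spread:.1f} mph")
```

The spread of 9.0 mph between "contacted" and "smashed" comes entirely from the wording of the question, since all participants saw the same film.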

The wording of the question affected participants' judgments of speed. These judgments could then be encoded and stored and later recalled in the distorted form at a trial.

Misinformation Paradigm


Immediately after viewing a clip of a traffic accident, questions are asked that are designed to insert false details into participants' internal representation of the experience. The theory is that the representation is still in a fragile state and is especially susceptible to distortion.

Participants viewed a 3-minute film: a car collided with a baby carriage pushed by a man.

Immediately afterwards: Questionnaire with 40 “filler” questions about things in the film.

Control Group: filler questions only.

Group D (“Direct Questions”): 40 filler questions + 5 “direct questions” about things not in the film, such as:

“Did you see a school bus in the film?”
“Did you see a barn in the film?”

Group F (“Questions with False Presuppositions”): 40 filler questions + 5 questions about things not in the film that falsely implied they were there:

“Did you see the children getting on the school bus?” (No school bus.)
“Did you see a station wagon parked in front of the barn?” (No barn.)

1 week later: all groups were tested with 20 questions about things in the film plus 5 questions about things not in the film previously referred to by the direct and false presupposition questions.

Average percentage of “yes” to questions about things not in the film:

Control: 8.4%

Direct (D): 15.6%

False Presupp (F): 29.2%

Memories can be distorted by thinking of objects that were not observed and then accidentally embedding those objects in the memories. This process would be a form of encoding failure.

We can only remember what we encode. An eyewitness could accurately and confidently testify about what they remember but still be wrong about what happened because they encoded the events inaccurately.


………………………………………………………………………………………………