
Intermediate SPSS (1) Hypothesis Testing and Inferential Statistics

Tutorial Goal: Building and testing hypotheses using inferential statistics in SPSS. This workshop covers parametric and nonparametric tests, concentrating on correlation, chi-square, and t-tests. Participants learn how to understand, analyze, and report results.

Ok, let's review a bit from our last workshop.

What is statistics?

First, what is statistics? "Statistics is the science and practice of developing knowledge through the use of empirical data expressed in quantitative form" (http://www.answers.com/topic/statistics-2?method=6). So, you are basically posing a question about something in the Social Sciences and using numbers to answer it.

Some examples of these questions are:

Do countries with stricter gun control laws have fewer deaths by firearms?

What are the best methods for teaching?

What factors cause a disease to spread from one place to another?

Do religious views and class affect opinions about euthanasia?

You can answer these questions by using numbers. For statistics, there are four levels of measurement for a variable. Which analyses you can use depends on the level of measurement of your variable. The four levels form the acronym NOIR:

(N)ominal

(O)rdinal

(I)nterval

(R)atio

Let’s talk about each one.

Nominal means that the number simply represents a category of objects. There is no measured difference among the objects or people. Some examples are giving states numbers (N.Y. 1, Connecticut 2, R.I. 3), assigning a number for gender (male 1, female 2), or designating college major (History 1, Business 2, Sociology 3). You are just assigning a number to something.

Ordinal means that a larger number for an object really is larger in some amount. This typically means rank. Some examples are 1st, 2nd, and 3rd places in a contest, or preferences for different movies. However, there is no exactly measured difference among the objects. We don't know definitively how much larger or better 1st is compared to 2nd. We just know 1st is somehow larger than 2nd.

Interval means, like Ordinal, that there is a rank for the objects or people, but there is also a measured distance between the ranks. Some examples are degrees Celsius or Fahrenheit. We know that the difference between 98 and 99 degrees is the difference in the amount of mercury in a thermometer. Also, the difference between 42 and 43 degrees is the same amount as between 98 and 99. However, there is no true zero, which would stand for a complete lack of the thing being measured. Zero degrees does not mean there is no mercury, for example.


Ratio means, like Interval, that there is a measurement for the ranking, but there is also a true zero. A true zero means a complete lack of the quality being measured. An example is income, where the difference between $10,000 and $11,000 is exactly known and zero means a complete lack of income.

These levels are very important and we will be discussing them more as we go on. Nominal and Ordinal are called Nonparametric Data, and Interval and Ratio are called Parametric Data. The statistical analyses that you can use depend on what level your data are. Specifically, if you can compute a meaningful mean from your data, then your data are parametric.

In this tutorial, we are interested in Inferential Statistics. This form of statistics tries to draw conclusions about a whole group from one sample of that group. So, we have two important concepts. First, population means the entire group of whatever you're studying. Second, a sample is a subset of the population. If you're trying to do research, studying a whole population is probably out of the question. A sample is easier to obtain, and you can use the sample to surmise how the whole population behaves. Of course, it has to be a random sample, which means that anyone or anything from the population has an equal chance of falling into the sample. If not, then you have bias, which means that the sample is not an accurate picture of the whole population.

Ok, now that we understand what a population and a sample are, we need to know what probability theory is. Probability Theory is "the branch of mathematics that studies the likelihood of occurrence of random events in order to predict the behavior of defined systems" (http://www.answers.com/probability+theory&r=67). So, we want to apply the theories of probability to our sample to infer what the whole population does. The best way to understand this is by looking at dice and how they behave.

If you rolled one die, what is the chance you’d get a five?

1/6



If you rolled two dice, what is the chance you’d get 2 fives?

1/36

Ok, now look at our sample space, which is the set of all possible outcomes. If you have two dice, the following chart has all 36 possible outcomes (http://www.edcollins.com/backgammon/diceprob.htm).

So, the more chances you have for an outcome, the higher the probability of getting that outcome. For example, out of all our possible outcomes, the probability of rolling a "7" is 6/36, or 1/6, whereas the probability of rolling a "2" is only 1/36.

A good graphic for this probability can be seen at a web site called Introduction to Probability Models. Here you can run a simulation of rolling two dice. The right panel below shows the result of the dice on the X axis and the number of times that result occurred on the Y axis. The first chart shows the result of rolling two dice ten times.

Result   Probability
2        1/36
3        2/36
4        3/36
5        4/36
6        5/36
7        6/36
8        5/36
9        4/36
10       3/36
11       2/36
12       1/36


Rolling two dice twenty times.

And finally, rolling two dice one hundred times.

(http://www.math.csusb.edu/faculty/stanton/m262/intro_prob_models/intro_prob_models.html)

You can see that the outcomes with more probability, the numbers 6, 7, and 8, build up more quickly. You can also see that this builds up into a bell-shaped curve. If a distribution is normal, you should see this kind of curve. So, the numbers with more probability are in the middle and those with low probability are on the extremes. This is what statistics is all about: seeing which numbers have a high probability of occurring and which don't.
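If you'd like to reproduce this experiment inside SPSS itself rather than on the web site, a short syntax job can simulate the rolls. This is only a sketch; the variable names (die1, die2, total) are made up for illustration.

* Simulate 100 rolls of two dice.
* RV.UNIFORM(1,7) draws a random number between 1 and 7; TRUNC keeps the
* whole part, so each die is a whole number from 1 to 6.
INPUT PROGRAM.
LOOP #i = 1 TO 100.
COMPUTE die1 = TRUNC(RV.UNIFORM(1,7)).
COMPUTE die2 = TRUNC(RV.UNIFORM(1,7)).
COMPUTE total = die1 + die2.
END CASE.
END LOOP.
END FILE.
END INPUT PROGRAM.
EXECUTE.
* A histogram of the totals should pile up around 7, just like the applet.
FREQUENCIES VARIABLES=total /HISTOGRAM.

Run it a few times with different loop counts (10, 20, 100) and you should see the bell shape emerge, just as in the charts above.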

Subsequently, two important ideas that come from the distribution of outcomes are central tendency and variance. Let's explore these essential ideas for a moment.


(Graphic from http://www.maximumiq.com/iq-tests-stats.php)

An IQ test is a perfect example of central tendency and variance. Your result on an IQ test is literally the comparison of your result with everybody else's who has taken the test. Millions of people take these tests. Very few people score low, and there are very few geniuses around who score high. The majority of us have average IQs. As seen in the graphic above, IQ results, when plotted out, have a normal distribution where the majority of results cluster in the middle, and lower and higher results are infrequent and become less frequent the farther they are from the center.

Central tendency is usually measured by the mean (all cases added up and then divided by the number of cases). So, a score of 100 on an IQ test is the mean. It's an "average" intelligence. Remember, the results of the majority of people bunch around 100. Variance is how far the scores fall from the mean. If most of the scores cluster around the mean, then there is low variance. The distribution looks like a bell curve, where most of the results are in the middle, taking the shape of a bell. If the variance is high, the curve in the middle is not as high and the results are more spread out. So, with statistics, we're trying to figure out if our numbers fall near the central tendency, which means that maybe there is nothing unusual about them, or if they fall farther away towards the extremes and are unusual. Remember our discussion of populations, samples and IQs. The average is 100, so if you take a sample from the population, you should expect the average IQ in that sample to be around 100. However, if the average IQ in that sample turns out to be 130, then statistically your sample is not average.

STOP! The difference between ordinal and interval is often slight, and sometimes you can get away with using parametric tests for ordinal data.

Ok, first, when doing statistics, you need to choose the right test. As we've talked about, there are two types of data: nonparametric and parametric. This makes a big difference in which tests we can perform. If our data are nominal or ordinal, there is no mean, so you do nonparametric tests.


There are some assumptions about data that you should be aware of. These also affect which test you choose.

Nonparametric:
Nominal/Ordinal data
Random sampling

Parametric:
Interval/Ratio data
Random sampling
Normal distribution
Equal variances of the scores in the populations that the samples come from.

Since parametric data carry more assumptions, parametric tests are considered more powerful when the assumptions are met. Powerful means that these tests are better at picking up differences in variables in the population. They are also fairly robust to violations of the assumptions, so if the assumptions are not completely met, you can still get accurate results. The only assumption that's nonnegotiable is the level of measurement.

When you choose a statistical analysis, you need to do two things: make a hypothesis and decide on significance. We are now going to talk about each one before we go on to the tests.

1. Making hypotheses is an essential part of every test. These hypotheses always deal with how the numbers of your sample relate to the numbers of the population. First, state a null hypothesis and then an alternative hypothesis.

The Null Hypothesis (H0) states that the numbers of your sample do not differ significantly from the numbers of the population. For example, you walk into any old restaurant and give an IQ test to 30 customers. The H0 says that the mean of their IQs should not differ significantly from the mean IQ of the population.

The Alternative Hypothesis (HA) states that the numbers of your sample do differ significantly from the numbers of the population. For example, we heard that the restaurant has intelligence-boosting spices in the food, so our HA is that the sample of 30 people from the restaurant has a mean IQ of 130, which is much higher than the population's mean IQ.

2. Significance shows us the likelihood that a particular result is due to chance. Remember back to our normal distribution and IQs. What are the chances that you randomly go to a restaurant, have a group of people take an IQ test, and get a mean of 130? Pretty slim. As you can see in the graph to the right, about 97% of people have IQs below 130. That's the concept behind significance. We are seeing what the likelihood is of getting a certain result.

P stands for significance. So, if you see a result reported as p < .05, it means that the likelihood that the result is due to chance is less than .05. When doing research, .05 is the likelihood of chance that you are willing to tolerate. It is conventional to set significance at .05, but it can be set at .01 or even lower, depending on how daring you want to be. The likelihood of chance is also called alpha. One-tailed or two-tailed refers to where you put this likelihood of chance on your distribution curve.


One-tailed tests are used if you have a directional hypothesis. Mainly, you put the .05 of chance in the direction of your alternative hypothesis. So, if you say you're going to find a sample with a mean of 130 when the population mean is 100, you put the whole .05 in the direction of the hypothesis, which is above the mean.

Two-tailed tests are used when you are not certain in which direction your alternative hypothesis goes. So, if you hypothesize that a sample mean is somehow different from the population's mean, in either a positive or a negative direction, then you split the alpha into two parts of .025 and place them at either end of the normal distribution.

After you have performed a test, you verbalize the result in a sentence. You also usually report five things: test result, degrees of freedom (df), sample size, significance, and one- or two-tailed.

1. Test Result: Each test has its own mathematical equation. For our purposes in SPSS, we do not need to know the exact mechanics of each equation. We will just discuss the big picture of each test and roughly what it's doing. Basically, for the analyses here, the higher the result, the better our chances of reaching significance and rejecting the H0. However, when reporting the result, you need to report the result of the equation. This will be pointed out in each of our tests.

2. Degrees of Freedom (df): The df is the number of values that are free to vary, which is the number of observations minus the number of constraints. This point is very technical and really doesn't affect your research; you just need to report it. You only need to report it for chi-square, t-tests, correlation and ANOVAs.

3. Number: Number of cases in your sample.

4. One- or two-tailed: Where you put your chance of randomness (only with parametric tests).

[Graphs: a one-tailed test places all .05 due to chance in one tail of the distribution; a two-tailed test places .025 due to chance in each tail.]


5. Significance: You need to report the level of significance that your result reached.

Of these five things, the test result and the significance are the most important. Basically, you need a test result high enough to reach significance. For example, if I were doing a chi-square with 2 df, I would need a test result (critical value) of at least 5.991 to reach significance. If you reach significance, you can reject the H0 and accept the HA. (You always talk about accepting or rejecting hypotheses, never proving them.) Don't worry, though. SPSS does all the math. You only need to understand and report the results.

df\area      .050       .025       .010       .005
1          3.84146    5.02389    6.63490    7.87944
2          5.99146    7.37776    9.21034   10.59663
3          7.81473    9.34840   11.34487   12.83816
4          9.48773   11.14329   13.27670   14.86026
5         11.07050   12.83250   15.08627   16.74960

(In the table, 5.99146 is the minimum critical value needed to reach significance at p = .05 with 2 df.)

So, with our restaurant and IQ example, the result would be reported as "The mean IQ of 130.76 for the 30 eaters at the restaurant was significantly higher than the national average IQ, t (29) = 20.650, p < .001, one-tailed."

In this lesson, you will be introduced to three of the major statistical tests: chi-square, correlation, and t-tests.

1. CHI-SQUARE: This test is nonparametric, so it is appropriate for nominal data. Chi-square (written χ², whose symbol you can find among the Greek letters in Microsoft Word) is used as a test of frequencies, mostly percentages and proportions. The null hypothesis is that the frequencies that fall into the categories do not differ from a distribution caused by chance. It assumes that randomness means an equal distribution among the categories.

There are two types of chi-square: goodness-of-fit and test-of-independence. Goodness-of-fit compares the frequencies of one variable against a hypothetical or known value; this test is not used very often (a syntax sketch for it appears below). Test-of-independence compares the frequencies of two or more variables and is the more commonly used test. Let's practice this.

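For reference, a goodness-of-fit chi-square takes a single categorical variable. This is only a sketch; var is a placeholder name, not one of our workshop variables.

* Goodness-of-fit chi-square testing whether the categories of var
* are equally distributed.
NPAR TESTS
  /CHISQUARE=var
  /EXPECTED=EQUAL.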


Our data are from the Pew Internet and American Life Project (http://www.pewinternet.org/index.asp), which collects survey data regarding people's Internet use. These data were collected after the presidential election in 2004, specifically to gather data on how people behaved politically and in terms of using media. We have three variables:

Polid: whether the participant considered him/herself a Democrat or a Republican.
Email: whether the participant signed up for online political alerts or emails.
Camp: whether the participant attends campaign rallies.

So, using chi-square, let's compare how people voted to how they behaved politically. For example, the 2004 election was unusual in that the Internet started to play a substantial role. Let's see if Democrats behaved differently, statistically, than Republicans. We'll compare a new political behavior, signing up for emails, to a traditional one, going to rallies. First, we need to set up our H0s and HAs:

Vote and Email
H0: Democrats and Republicans do not differ in signing up for online political alerts.
HA: Democrats signed up more for online political alerts than Republicans.

Vote and Camp
H0: Democrats and Republicans do not differ in going to campaign rallies.
HA: Democrats attended campaign rallies more often than Republicans.

Ok, let's start SPSS and import our data. First, please go to the Statistics Tutorial page (http://dl.lib.brown.edu/gateway/lrg.php?id=86&task=custom&contentid=931) and download and unzip the SPSSDATA for the exercise SPSS 2. Remember, to unzip, you right-click the zipped file and click on Extract All.

1. Left-click on Start from your Desktop and move your cursor over All Programs, which gives you a menu of all the programs.


2. Put your cursor over IBM SPSS Statistics 19, which brings up a pop-up menu. To start the program, left-click on IBM SPSS Statistics 19. (SPSS is available on the CIS computers at the Rockefeller Library under the Computational menu.)

3. You now receive the SPSS Data Editor window. Here you display your data and your variable information. In IBM SPSS Statistics 19, you are prompted to start by running a tutorial or opening an existing data set. Make sure Open an existing data source is selected and click OK.

4. In the Open File window, navigate to where you saved the SPSSDATA folder. Double-left-click on the Chi-Square file to open it.

All the data for this lesson are obtained through different sites at the Social Sciences Data Page at http://dl.lib.brown.edu/gateway/lrg.php?id=86&task=home. Or, if you can't find what you're looking for, contact me, Tom, at 3-7978 or [email protected].

The Word document worksheetspss2.doc in the folder is for this exercise. You use the worksheet to write out our hypotheses and results. Every time you open a new .sav file, the old one remains open, too. So, for our exercise, after we open a new data file, close the old one.


5. If you are not already there, please go to the Variable View in the Data Editor. We are interested in the second variable, email, which concerns the survey question "Have you ever signed up to receive email newsletters or other online alerts containing the latest news about politics or the election?" 1) In the Values column, click on the three-dots icon to bring up the Value Labels dialog. You can see that this is a dichotomous variable. There are two values: 1 for yes and 2 for no. 2) Click Cancel.

Now we can perform chi-squares in SPSS on our data.

1. In the SPSS Data Editor, go to the Analyze menu, and select Descriptive Statistics. Left-click on Crosstabs.



2. In the Crosstabs dialog, 1) select the variable Polid, click on the arrow and put it in the Column window. 2) Select the two variables Q26 and Q27a (our Email and Rally variables), click on the arrow and put them into the Rows window. Usually, you put the outcome you want to predict in the rows. So, you are setting up a matrix of variables compared to other variables. This is visualized in the results.

3. So far, we have just set up a crosstabulation of the variables. Now we have to select the actual chi-square analysis. 1) In the Crosstabs dialog, click on Statistics. 2) In the Crosstabs: Statistics dialog, select Chi-square. 3) Click on Continue, which brings us back to the Crosstabs dialog.



4. Back in the Crosstabs dialog, 1) click on Cells. 2) In the Crosstabs: Cell Display dialog, select Column in Percentages. We want to see what percentage of the people for a certain candidate did what. 3) Click Continue to go back to the Crosstabs dialog. Back in the Crosstabs dialog, 4) click OK to perform the analysis.
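If you prefer syntax to menus, the Paste button in the Crosstabs dialog will generate the exact command for your file. A sketch of what it produces, using the variable names above, looks like this:

* Crosstabulate the email and rally variables against party,
* requesting the chi-square test plus counts and column percentages.
CROSSTABS
  /TABLES=q26 q27a BY polid
  /STATISTICS=CHISQ
  /CELLS=COUNT COLUMN.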

In the Output window, you see in the table of contents (TOC) on the left that groups of results were created. The first is just the summary, which we can ignore. In the View on the right, scroll down to the chi-square results for our email variable.



Ok, for the first hypothesis about voting and online alerts, we do see differences in the numbers. Only 10.2% of Republicans have ever signed up, compared to 14.8% of Democrats. However, we have to see whether these numbers reached significance, so that we can accept our alternative hypothesis, or whether they arose by chance, so that we retain the null hypothesis. Scroll down a little to the Chi-Square Tests.

In the chi-square chart, we are interested in the first test result, Pearson Chi-Square. This is the most common chi-square result. Our significance was .041, so we reached significance and we can accept the alternative hypothesis that Democrats signed up more for online political alerts than Republicans.

Remember, as a rule of thumb, you need a minimum of 5 expected cases in each cell; otherwise chi-square shouldn't be performed. This can happen if your data set is small. SPSS flags the cell count for you at the bottom of the table.


Let's now look at our second hypothesis about voting and attending rallies. Scroll down to the next results. For this hypothesis, our numbers are similar. About 9.8% of Republicans have ever attended rallies, but 13.1% of Democrats have. However, the difference is not as large as before. Did we reach significance? Scroll down a little to the Chi-Square Tests.

We didn't. The significance is .124 and we need it below .05 to reject the null hypothesis that this is due to chance. Ok, this isn't far off, but statistics is a conservative science. You either obtain the necessary numbers or you don't. So, we need to retain the null hypothesis that there was no statistically significant difference in rally attendance.

Ok, let's report our results: χ²(df, N = sample number) = result, p (<,=) number

Vote and Email
A higher percentage of Democrats (14.8%) signed up for online political alerts than Republicans (10.2%), χ²(1, N = 860) = 4.162, p = .041.

Vote and Camp
There was no statistically significant difference in the percentage of Democrats (13.1%) and Republicans (9.8%) who go to campaign rallies, χ²(1, N = 860) = 2.370, p = .124.


Please note: for this workshop, statistical citations follow APA format.

Reporting significance can be a little tricky. Significance is never simply zero.
If the number is rounded off to .05 or .01, then use <, i.e., p < .05.
If it's not rounded off, use =, i.e., p = .213.
If you see .000, just report it as p < .001.

Please close all the results in the Output screen by clicking on the minus sign (-) next to Crosstabs in the table of contents (TOC), but leave the data window alone.

2. CORRELATION: This test shows the relationship between two variables. Specifically, we are looking to see if one variable varies with another. For parametric data, you perform Pearson's r, but if one of the variables is nonparametric or the assumptions are not met, then you can perform Spearman's ρ (pronounced rho).

With correlation, there are two ideal cases:

Perfect Positive Correlation: an increase in the measurement of variable X means an equal increase in the measurement of variable Y.

Perfect Negative Correlation: an increase in the measurement of variable X means an equal decrease in the measurement of variable Y.

[Two scatterplots: in a positive correlation, the points rise from the lower left to the upper right; in a negative correlation, the points fall from the upper left to the lower right.]


So, with correlation, we are trying to see if hypotheses like these are true:

Positive Correlation: “The more X, the more Y.”

Negative Correlation: “The more X, the less Y.”

When we are dealing with parametric correlation, we talk about the correlation coefficient, or r. This r shows the strength of the relationship between the two variables. The correlation coefficient can vary from 1 to -1.

When r = 1, there is a perfect positive relationship.
When r = 0, there is no relationship.
When r = -1, there is a perfect negative relationship.

But you can also see relationships with numbers in between. Cohen & Cohen (1983) describe the strength of relationships, either positive or negative, as follows:

When r is .1 to .3, the correlation is small.
When r is .3 to .5, the correlation is moderate.
When r is .5 and above, the correlation is strong.

Ok, say we have two variables: the price of a product, variable X, and its consumption, variable Y. Through collecting data and doing correlation, it was discovered that the higher the price, the less the product was consumed.

[Scatterplot: as PRICE increases, CONSUMPTION decreases.]

A nice example of correlation coefficients: http://noppa5.pc.helsinki.fi/koe/corr/cor7.html

However, be careful. Correlation does not necessarily mean causation. A study once found that the number of tornadoes increased with the number of cars on the road. As a joke, it was theorized that the rotation of the wheels caused all the tornadoes. Of course, it simply meant that more tornadoes were observed as car traffic increased.

Also, don't forget to visualize your data. Sometimes there isn't a linear relationship but a curvilinear one. For example, anxiety and test grades are related in a curvilinear way. If someone's anxiety in a test is low, they probably won't do well; they probably don't care much. As anxiety increases, test performance increases; they care and get nervous. However, if the person's anxiety gets to be too much, then performance decreases; the anxiety is getting the better of him/her. There are analyses for curvilinear relationships. Please see the help menu to explore that option.


[Scatterplot of ANXIETY against GRADE: grades rise with anxiety up to a point and then fall, forming a curvilinear, inverted-U pattern.]

Pearson’s r is a correlation test using parametric data.

Poverty has a huge effect on society and behavior. For a sociological study, let's see how poverty and murder rates correlate. We have two variables: poverty data from the census (http://factfinder.census.gov/home/saff/main.html?_lang=en) and murder rates from the Department of Justice (http://bjsdata.ojp.usdoj.gov/dataonline/). Let's import our data, establish our hypotheses and perform our analyses.

1. In SPSS Data Editor, go to the File menu, select Open and left-click Data.

2. In the Open File window, double left-click the file Pearson and open it.


3. You see our two variables for each state. We have the percentage of poverty and the number of murders per 100,000 people for each state. Both variables are parametric data.

Ok, now let's establish our hypotheses.

H0: There is no correlation between poverty and murder in the United States.
HA: The murder rate positively correlates with the poverty rate in the United States.

4. After establishing our hypotheses, we can perform Pearson's r in SPSS. In the Analyze menu, select Correlate and left-click Bivariate.

(Every time you open a data set, a new SPSS Data Editor opens. For this exercise, whenever we open a new data set, you can close the one you just worked on.)


5. In the Bivariate Correlations dialog, 1) select each variable, click the arrow and move them into the Variables box. 2) As you can see in the Correlation Coefficients box, Pearson is automatically selected. In the Test of Significance box, change it to One-tailed. Remember, we have made a directional hypothesis: we have hypothesized that murder positively correlates with poverty (goes in the direction above 0), and we therefore need one-tailed. 3) Click OK.
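In syntax, the same one-tailed Pearson correlation is a single command. This is a sketch assuming the two variables are named poverty and murder; check the real names in Variable View or use the dialog's Paste button.

* One-tailed Pearson correlation between poverty and murder.
* NOSIG flags significant coefficients with asterisks in the output.
CORRELATIONS
  /VARIABLES=poverty murder
  /PRINT=ONETAIL NOSIG.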

In the Output window, we have our results. Let's look more closely at the results we need to report.

1. Our Pearson Correlation is .503, so there is a strong correlation between poverty and murder. SPSS immediately flags significant results with asterisks. One asterisk means significant at .05 and two asterisks mean significant at .01.
2. Our significance is .000, so we can reject the null hypothesis.
3. Our N is 50, so we have 50 cases.

Let's report our results: r (sample number) = result, p (<,=) number

Poverty and Murder
There is a strong positive correlation between the poverty rate and the murder rate in the United States, r (50) = 0.503, p < .001, one-tailed.

Before we move on to the next analysis, let's visualize our data. This is a great way to understand data better. We are going to explore our data using a scatterplot, which is a graph of the points where our X variable meets our Y variable.

(When your variables are ordinal data, you should use the Spearman option in Correlation Coefficients.)
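In syntax, the Spearman version looks like the sketch below; var1 and var2 are placeholders for your ordinal variables, not names from our data sets.

* Spearman's rho for two ordinal variables (hypothetical names).
NONPAR CORR
  /VARIABLES=var1 var2
  /PRINT=SPEARMAN TWOTAIL NOSIG.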



1. You can make graphs from the Data Editor or the Output window. Since we're here, in the Output window, go to the Graphs menu, and left-click Chart Builder.

2. 1) In the Gallery tab, under Choose from:, select Scatterplot. 2) Double-left-click Simple Scatter.



3. 1) Drag the murder rate variable from the Variables field on the left onto the X axis in the Chart Preview. 2) Drag the poverty rate onto the Y axis.

4. Now you want to set up your labels, so you can label each case by the state name. 1) Click on the Groups/Point ID tab. 2) Check Point ID label. 3) Drag and drop the state variable into the Point ID box in the Chart Preview. 4) Click OK.
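Chart Builder's Paste button will write out the full GGRAPH command for you. An older, shorter syntax form that produces a comparable labeled scatterplot is sketched here, again assuming the variables are named murder, poverty and state:

* Simple scatterplot with murder on X and poverty on Y,
* using the state variable for case labels.
GRAPH
  /SCATTERPLOT(BIVAR)=murder WITH poverty BY state (NAME).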



In the Output window, you now have the scatterplot of our two variables. Just at first glance, you can see that the points loosely form a line going in the upper-right direction. Let's put in a best-fit line, which is the correlation line.

1. First we need to open up the Chart Editor. Double left-click on the graph to open up the Editor.


2. You now get the Chart Editor window. Let's put in the best-fit line. 1) Click on the Add Fit Line at Total icon. This gives you a graphic depiction of the correlation line. 2) The R Sq Linear is the r² (0.253), or the coefficient of determination. This says how much of the variance of one variable is explained by the other. 3) Close the Properties dialog that appears.

3. Automatically, every case is labeled, but you can also label cases individually. 1) In the scatterplot, left-click on a label to select the labels (they will take on a blue circle). 2) In the Elements pulldown, left-click on Hide Data Labels.



4. We now want to label cases individually. Back in the Chart Editor, 1) click on Point Id and your cursor turns into cross-hairs. 2) Left-click on the upper-left-most case and the state label Maryland appears. If you left-click again, the name will disappear. Can you find Rhode Island?

5. Close the Chart Editor window. In the Output window, close the graph and the correlations result with the minus sign in the TOC (don't close the window), and let's move on to the next analysis.

3. T-TESTS

T-tests are important tests for checking the difference between two groups using parametric data. What the test does is calculate the means of the two groups and see whether the difference between the two means is large relative to the variability within the groups.

(http://www.socialresearchmethods.net/kb/stat_t.htm)

(There are nonparametric versions of t-tests, such as the Mann-Whitney test. Use the Help menu to learn about them; a syntax sketch also appears below.)

Of the three options on the left, which one seems to have two distinct groups? The bottom one. Even though the difference in the means is the same in all three, the third pair has little overlap between the two groups.
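A minimal sketch of the Mann-Whitney alternative, assuming a grouping variable group coded 1 and 2 and an outcome variable score (both names hypothetical):

* Mann-Whitney U test comparing the distribution of score
* across the two groups coded 1 and 2.
NPAR TESTS
  /M-W=score BY group(1 2).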



There are three types of t-tests:

Single-sample t-test: This compares the mean of a sample group with a hypothetical mean or an already known mean.

Dependent-sample t-test: For this test, we test the difference in the means of two groups where each case is the same. For example, we have a new teaching methodology for a semester class. The test participants take a test before and after the course, and we compare the results to see if there is a difference in their performance.

Independent-sample t-test: For this test, we test the difference in the means of two groups where each case is not the same. For example, to test a drug, we have an experimental group and a control group. We administer the drug to the experimental group, and then we test the difference between the experimental group and the control group after a while.

Single-sample t-test

Let's study some historical economic data about California.

1. Import the Ttests file. When prompted to save the results for the regression data, click no.

2. We have census data on 57 Metropolitan Statistical Areas (MSAs) in California. The first variable, man90, is the percentage of the labor force in manufacturing in those areas. Let's assume we are doing an economics study to show that California's economy differed from the nation as a whole. For example, because California is more service-industry oriented, we want to show that there were fewer manufacturing jobs there than in the whole country.


Let's set up our hypotheses:

H0: The mean percentage of jobs in manufacturing in the MSAs in California is not statistically different from that of the nation as a whole.
HA: The mean percentage of jobs in manufacturing in the MSAs in California is lower than that of the nation as a whole.

3. In the Analyze menu, select Compare Means. Left-click One-Sample T Test.

4. In the One-Sample T Test dialog, 1) select man90, click the arrow and move it into the Test Variable(s) box. 2) In the Test Value field, type in the mean you want to compare your group with. In this case, it's 16.2 for the country. 3) Click OK.
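In syntax, this one-sample test is a single command; man90 and the test value 16.2 come straight from the steps above.

* One-sample t-test comparing man90 against the national figure of 16.2.
T-TEST
  /TESTVAL=16.2
  /VARIABLES=man90.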

5. Let’s look at our results.


A. In One-Sample Statistics, we see that the mean of the MSAs is lower, at 9.29%.
B. In One-Sample Test, we have the t result, which is very high and negative. The negative sign says that our mean is below the nation's mean.
C. df is the degrees of freedom.
D. Sig. is the significance, and .000 is very significant.


Report our results: t (df) = result, p (<,=) number

The percentage of jobs in California involved in manufacturing (M = 9.29%) was lower than the national percentage of jobs involved in manufacturing (M = 16.2%), t (56) = -2749.984, p < .00025, one-tailed.

Note: SPSS automatically does a two-tailed test, and it rounds to the third decimal, so a reported significance of .000 means less than .0005. If you want to report significance for a one-tailed test, halve that and state the significance as p < .00025.

Dependent-sample t-test

Also using these data, let's look at government jobs. In the 1980s, Ronald Reagan pushed to lower the number of federal jobs. Did he succeed in California? For each MSA in California, we have two variables: Fednum, which is the change in the number of federal employees from 1970 to 1990, and Statnum, which is the change in the number of state and local employees from 1970 to 1990. MSAs are our cases and they are the same in each group. Let's set up our hypotheses:

H0: There was no statistically significant difference between the change in the number of federal employees and the change in the number of state/local employees from 1970 to 1990.
HA: There was more growth in the number of state/local employees than federal employees from 1970 to 1990.

1. In the Analyze menu, select Compare Means. Left-click Paired-Samples T Test.

2. 1) Select the variable difference in number of federal employees (1970/1990) fednum. Click the arrow and move it into Pair 1, Variable1. 2) Select difference in number of state/local employees (1970/1990) statnum. Click the arrow and move it into Pair 1, Variable2. 3) Click OK.
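The syntax equivalent pairs the two variables directly:

* Paired-samples t-test on the change in federal vs. state/local employment.
T-TEST PAIRS=fednum WITH statnum (PAIRED).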



3. From our results, we need to look at two charts, Paired Samples Statistics and Paired Samples Test.

A. In Paired Samples Statistics, we can see the means. The mean growth in federal employees was only about 625, whereas state and local employment grew by about 11,318. But is it significant?

B. In the Paired Samples Test, we see the numbers we need to report. The t was -3.949 with 56 df, and it was significant at .000.

Report our results: t (df) = result, p (<,=) number

The growth in the number of state/local employees (M = 11,318) was greater than the growth in the number of federal employees (M = 625), t (56) = -3.949, p < .00025, one-tailed.

Please close all the results, but keep the Output window open.

Independent-sample t-test

We are now interested in doing a t-test where a case in one group doesn't necessarily match a case in the other group. For this example, we will be using data from the General Social Survey (GSS), found at http://www.norc.org/GSS+Website/, which includes U.S. survey data since 1972 on a large variety of questions, from abortion to zodiac signs. For sociological research, we want to see if age relates to belief in life after death. Please navigate to the Independentt file and open it.


We have two variables, age and postlife. Age is parametric data: each participant's age. Postlife is coded by belief in the afterlife: 1 means yes and 2 means no. So, we want to split the postlife variable into two groups, those who believe and those who don't. Then, using these two groups, we want to perform an independent t-test between the two groups on the age variable and see if the mean age is statistically different.

Let's establish our hypotheses:

H0: There is no difference in age between people who believe and people who do not believe in an afterlife.
HA: People who believe in the afterlife are older than those who do not.

1. In the Analyze menu, select Compare Means. Left-click Independent-Samples T Test.

2. In the Independent-Samples T Test dialog, 1) select age of respondent [age] and move it into the Test Variables box. 2) Select postlife and move it into the Grouping Variable. See how it becomes postlife(? ?). We are preparing to select the values within the variable that define our groups. 3) Click Define Groups.



3. In the Define Groups dialog, 1) in Group 1, type in the value 1, which means belief, and in Group 2, type in the value 2, which is the value for no belief. 2) Click Continue. Back in the Independent-Samples T Test dialog, notice how the values have been set in the Grouping Variable: postlife(1,2). 3) Click OK.
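The corresponding syntax, with the grouping values we just defined:

* Independent-samples t-test of age across the two postlife groups.
T-TEST GROUPS=postlife(1 2)
  /VARIABLES=age.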

Ok, let’s look at our results.


A. In Group Statistics, you see the means for the groups. The mean age for those who believe is 44.45, whereas the mean age for those who don't is 46.84.

B. In Independent Samples Test, you must first check whether equal variances can be assumed between the two groups. In Levene's Test, if the significance is p > .05, equal variances are assumed and you use the first line of results. If p < .05, you use the second line of results. So, in this case, we need to use the second line.

C. In t-test for Equality of Means, we have our results: the t is -2.091, with its df and a significance of .037. So, we reject the null hypothesis, but our directional hypothesis went in the other direction! Remember, divide the significance in half for one-tailed.


Now, let's report our results: t (df) = result, p (<,=) number

The age of people who did believe in the afterlife was slightly lower (M = 44.45 years) than that of those who did not (M = 46.84 years), t (417) = -2.091, p = .018, one-tailed.

Note: You can truncate the dfs.

This, however, is an interesting result. It shows the difference between a significant result and an important result. Even though statistically this is significant, meaning that it's not due to chance, it is probably not important. The difference in age is slight, barely 2 years, and I would wager not really relevant in any research.

STATISTICAL ANALYSIS REVIEW

Chi-square
  Data type: nonparametric
  Purpose: Goodness-of-fit compares the frequencies of one variable against a hypothetical value. Test-of-independence compares the frequencies of two or more variables.
  Reporting: χ²(df, N = sample number) = result, p (<,=) number

Correlation (Pearson's r)
  Data type: parametric
  Purpose: To show how two variables vary together.
  Reporting: r (sample number) = result, p (<,=) number

Spearman's rho
  Data type: nonparametric (all ordinal data)
  Purpose: To show how two variables vary together.
  Reporting: rho (sample number) = result, p (<,=) number

T-test
  Data type: parametric
  Purpose: Single-sample t-test compares the mean of a sample group with a hypothetical mean or an already known mean. Dependent-sample t-test tests the difference in the means of two groups where each case is the same. Independent-sample t-test tests the difference in the means of two groups where each case is not the same.
  Reporting: t (df) = result, p (<,=) number

Mann-Whitney
  Data type: nonparametric
  Purpose: Nonparametric alternative to the t-test.
  Reporting: Mann-Whitney U (df) = result, p < result

* Remember, you also need to mention in the text or in the statistical reporting whether your analysis was one- or two-tailed.


Online Statistics Help

Very good outline guides for SPSS by the Department of Psychology at the University of Nebraska

(http://www-class.unl.edu/psycrs/statpage/)

The California State University Social Sciences Research and Instructional Council Teaching Resources

Depository Home Page

(http://www.csubak.edu/ssric/welcome.htm)

Statistics at Texas A&M University

(http://www.stat.tamu.edu/spss.php)

Visualization of statistical analyses at Evotutor

(http://www.evotutor.org/Statistics/StatisticsA.html)

A New View of Statistics

(http://www.sportsci.org/resource/stats/contents.html)

HyperStat

(http://davidmlane.com/hyperstat/)

The Really Easy Statistics Site

(http://helios.bto.ed.ac.uk/bto/statistics/tress1.html)

Publishing Guide

American Psychological Association (2001). Publication Manual of the American Psychological Association (5th ed.). Washington, D.C.

Morgan, S. E., Reichert, T., & Harrison, T. R. From Numbers to Words. Boston: Allyn and Bacon.

Contact Info:

Tom Stieve

[email protected]

863-7978

© Thomas Stieve