Page 1

Chapter Fifteen

Data Analysis: Testing for Significant Differences

Copyright © 2006 McGraw-Hill/Irwin

Page 2

1. Understand how to prepare graphical presentations of data.

2. Calculate the mean, median, and mode as measures of central tendency.

3. Explain the range and standard deviation of a frequency distribution as measures of dispersion.

4. Understand the difference between independent and related samples.

Learning Objectives

Page 3

5. Explain hypothesis testing and assess potential error in its use.

6. Understand univariate and bivariate statistical tests.

7. Apply and interpret the results of the ANOVA and n-way ANOVA statistical methods.

Learning Objectives

Page 4

• Basic statistics and descriptive analysis–common to all marketing research projects

– Central tendency and dispersion

– t-distribution and associated confidence interval estimation

– Hypothesis testing

– Analysis of variance

Value of Testing for Differences in Data

Page 5

• Three Measures of Central Tendency–strengths and weaknesses

1. Mode–best measure for nominal data

2. Median–best measure for ordinal data

3. Mean–best measure for interval or ratio data

Calculate the mean, median, and mode as measures of central tendency

Measures of Central Tendency
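
As an illustration only (not part of the original slides), here is a minimal Python sketch of the three measures using the standard-library statistics module; the ratings data are hypothetical:

```python
from statistics import mean, median, mode

# Hypothetical satisfaction ratings on a 1-7 scale (illustrative data only)
ratings = [5, 6, 4, 7, 5, 6, 5, 3, 6, 5]

print("Mean:  ", mean(ratings))    # best for interval or ratio data
print("Median:", median(ratings))  # best for ordinal data
print("Mode:  ", mode(ratings))    # best for nominal data
```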

Page 6

Calculate the mean, median, and mode as measures of central tendency

Exhibit 15.8 (figure not reproduced)

Page 7

Calculate the mean, median, and mode as measures of central tendency

Exhibit 15.9 (figure not reproduced)

Page 8

• Measures of Central Tendency–cannot tell the whole story about a distribution of responses

• Measures of Dispersion–describe how close to the mean, or another measure of central tendency, the rest of the values in the distribution fall

• Range–the distance between the smallest and largest value in a set of responses

Explain the range and standard deviation of a frequency distribution as measures of dispersion

Measures of Dispersion

Page 9

• Standard Deviation–the average distance of the values in a distribution from the mean

– Deviation–the difference between a particular response and the distribution mean

– Average squared deviation–used as a measure of dispersion for a distribution

• Variance–the average of the squared deviations about the mean of a distribution of values

Explain the range and standard deviation of a frequency distribution as measures of dispersion

Measures of Dispersion
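
Again as an illustration only, a short Python sketch of the range, variance, and standard deviation for the same hypothetical ratings; note that pvariance and pstdev divide by n, matching the "average squared deviation" definition above, whereas the sample formulas (statistics.variance and statistics.stdev) divide by n - 1:

```python
from statistics import mean, pvariance, pstdev

# Hypothetical satisfaction ratings (same illustrative data as before)
ratings = [5, 6, 4, 7, 5, 6, 5, 3, 6, 5]

value_range = max(ratings) - min(ratings)          # range: largest minus smallest value
deviations = [x - mean(ratings) for x in ratings]  # deviation of each response from the mean
variance = pvariance(ratings)                      # average squared deviation about the mean
std_dev = pstdev(ratings)                          # square root of the variance

print(value_range, variance, std_dev)
```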

Page 10

Explain the range and standard deviation of a frequency distribution as measures of dispersion

Exhibit 15.10 (figure not reproduced)

Page 11

• Hypothesis–an empirically testable but as yet unproven statement developed in order to explain phenomena

– A researcher's preconceived notion of the relationships the captured data should show is, in effect, a hypothesis

Explain hypothesis testing and assess potential error in its use

Hypothesis Testing

Page 12

• Independent Samples–two or more groups of responses that are tested as though they may come from different populations

• Related Samples–two or more groups of responses that originated from the same population

• Paired Sample–the questions are independent, but the respondents are the same

– Paired samples t-test–used to test for differences in related samples

Understand the difference between independent and related samples

Hypothesis Testing
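
A hedged sketch of a paired samples t-test, assuming SciPy is available (the original chapter does not use Python); the before/after ratings are invented for illustration:

```python
from scipy import stats

# Hypothetical ratings from the SAME respondents on two related questions
before = [4, 5, 3, 6, 4, 5, 4, 3, 5, 4]
after  = [5, 6, 4, 6, 5, 6, 5, 4, 6, 5]

t_stat, p_value = stats.ttest_rel(before, after)   # paired samples t-test
print(f"t = {t_stat:.3f}, p = {p_value:.4f}")
# A small p-value (e.g., < .05) suggests a real difference between the related samples.
```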

Page 13

• First Step–develop the hypothesis that is to be tested

– Developed prior to the collection of data

– Developed as part of a research plan

– Make comparisons between two groups of respondents to determine if there are important differences between the groups

– Important considerations in hypothesis testing are:

• Magnitude of the difference between the means

• Size of the sample used to calculate the means

Explain hypothesis testing and assess potential error in its use

Hypothesis Testing

Page 14

• Null Hypothesis (H0)–a statement that asserts the status quo

• Alternative Hypothesis (H1)–a statement that is the opposite of the null hypothesis: the difference exists in reality, not simply due to random error

– Represents the condition desired

– Null hypothesis is accepted–there is no change to the status quo

– Null hypothesis is rejected–the alternative hypothesis is accepted and the conclusion is that there has been a change in opinions or actions

– Null hypothesis refers to a population parameter–not a sample statistic

Explain hypothesis testing and assess potential error in its use

Hypothesis Testing

Page 15

• Statistical Significance

– Inference Regarding a Population

– Type I Error–made by rejecting the null hypothesis when it is true; its probability is alpha (α)

• Level of Significance–commonly .10, .05, or .01

Explain hypothesis testing and assess potential error in its use

Hypothesis Testing

Page 16

• Type II Error–failing to reject the null hypothesis when the alternative hypothesis is true; its probability is beta (β).

– Unlike alpha (α), which is specified by the researcher, beta (β) depends on the actual population parameter.

– Type I and Type II errors–sample size can help control these errors

• The researcher can select an alpha (α) and a sample size that increase the power of the test (1 – β) and thereby reduce beta (β)

Explain hypothesis testing and assess potential error in its use

Hypothesis Testing
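
A rough simulation sketch (assuming NumPy and SciPy, which the original slides do not use) of how alpha and sample size relate to Type I error and power; all numbers are hypothetical:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
alpha, n, trials = 0.05, 30, 2000

# Type I error: both groups come from the SAME population, so the null is true
type1 = sum(stats.ttest_ind(rng.normal(50, 10, n), rng.normal(50, 10, n)).pvalue < alpha
            for _ in range(trials))

# Power (1 - beta): the group means really differ by 5 points, so the null is false
power = sum(stats.ttest_ind(rng.normal(50, 10, n), rng.normal(55, 10, n)).pvalue < alpha
            for _ in range(trials))

print("Estimated Type I error rate:", type1 / trials)   # should be close to alpha
print("Estimated power (1 - beta): ", power / trials)   # increases with sample size n
```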

Page 17

• Purpose of Inferential Statistics

– Sample

– Sample Statistics

– Population Parameter

• The actual population parameters are unknown since the cost to perform a census of the population is prohibitive

• Frequency Distribution

Analyzing Relationships of Sample Data

Page 18

• Univariate Tests of Significance–involve hypothesis testing using one variable at a time

• z-test–used when the sample size is greater than 30 and the population standard deviation is unknown

• t-test–used when the sample size is less than 30 and the population standard deviation is unknown, so the assumption of a normal (z) distribution is not valid

Analyzing Relationships of Sample Data

Understand univariate and bivariate statistical tests
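
A minimal one-sample t-test sketch with SciPy, as an illustration only; the ratings and the hypothesized mean of 4 are hypothetical:

```python
from scipy import stats

# Hypothetical sample of 12 ratings (n < 30 and sigma unknown, so a t-test is used)
ratings = [5, 6, 4, 7, 5, 6, 5, 3, 6, 5, 4, 6]

# H0: the population mean equals 4 (the scale midpoint); H1: it does not
t_stat, p_value = stats.ttest_1samp(ratings, popmean=4)
print(f"t = {t_stat:.3f}, p = {p_value:.4f}")
```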

Page 19

Exhibit 15.11 (figure not reproduced)

Page 20

• Bivariate Hypotheses–where more than one group is involved

• Null hypothesis–states that there is no difference between the group means:

µ1 = µ2 or µ1 - µ2 = 0

Analyzing Relationships of Sample Data

Understand univariate and bivariate statistical tests

Page 21

• Using the t-Test to Compare Two Means

– Both the univariate t-test and the bivariate t-test require interval or ratio data

• t-test–useful when the sample size is < 30 and the population standard deviation is unknown

• Bivariate t-test–assumes that the samples are drawn from populations with normal distributions and that the variances of the populations are equal

Analyzing Relationships of Sample Data

Understand univariate and bivariate statistical tests

Page 22

• t-test for differences between group means–calculated as the difference between the means divided by the variability of random means

– t-value–the ratio of the difference between the two sample means to the standard error of that difference

– t-test–provides a rational way of determining whether the difference between the two sample means occurred by chance

Analyzing Relationships of Sample Data

Understand univariate and bivariate statistical tests

Page 23

• The formula for calculating the t value is

t = (x̄1 – x̄2) / Sx̄1–x̄2

where x̄1 and x̄2 are the two sample means and Sx̄1–x̄2 is the standard error of the difference between the means

Analyzing Relationships of Sample Data

Understand univariate and bivariate statistical tests
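
A sketch that computes the t value from the formula above by hand (pooled-variance form) and checks it against SciPy's independent samples t-test; both groups are invented data:

```python
import numpy as np
from scipy import stats

group1 = np.array([5, 6, 4, 7, 5, 6, 5, 6])   # hypothetical ratings, group 1
group2 = np.array([4, 4, 5, 3, 4, 5, 4, 3])   # hypothetical ratings, group 2

# Standard error of the difference between the two sample means
n1, n2 = len(group1), len(group2)
pooled_var = ((n1 - 1) * group1.var(ddof=1) + (n2 - 1) * group2.var(ddof=1)) / (n1 + n2 - 2)
se_diff = np.sqrt(pooled_var * (1 / n1 + 1 / n2))

t_manual = (group1.mean() - group2.mean()) / se_diff     # the formula above
t_scipy, p_value = stats.ttest_ind(group1, group2)       # equal variances assumed by default
print(t_manual, t_scipy, p_value)
```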

Page 24

Exhibit 15.12 (figure not reproduced)

Page 25

Exhibit 15.14 (figure not reproduced)

Page 26

• Analysis of Variance (ANOVA)–statistical technique that determines if three or more means are statistically different from each other

• Multivariate Analysis of Variance (MANOVA)–multiple dependent variables can be analyzed together

Analyzing Relationships of Sample Data

Apply and interpret the results of the ANOVA and n-way ANOVA statistical methods

Page 27

• Requirements for ANOVA

– The dependent variable must be either interval or ratio scaled

– The independent variable must be categorical

• Null hypothesis for ANOVA–states that there is no difference between the groups–the null hypothesis would be

µ1 = µ2 = µ3

• ANOVA technique–focuses on the behavior of the variance within a set of data

• ANOVA–if the calculated variance between the groups is compared to the variance within the groups, a rational determination can be made as to whether the means are significantly different

Analyzing Relationships of Sample Data

Apply and interpret the results of the ANOVA and n-way ANOVA statistical methods
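
A minimal one-way ANOVA sketch with SciPy, testing H0: µ1 = µ2 = µ3 on hypothetical groups (illustration only):

```python
from scipy import stats

# Hypothetical interval-scaled ratings for three categories of one independent variable
group_a = [5, 6, 5, 7, 6, 5]
group_b = [4, 5, 4, 5, 4, 5]
group_c = [6, 7, 6, 7, 6, 6]

# H0: mu_a = mu_b = mu_c
f_stat, p_value = stats.f_oneway(group_a, group_b, group_c)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")
```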

Page 28

• Determining Statistical Significance in ANOVA

– F-test–used to statistically evaluate the differences between the group means in ANOVA

– Total variance–separated into between-group and within-group variance

Analyzing Relationships of Sample Data

Apply and interpret the results of the ANOVA and n-way ANOVA statistical methods

Page 29

• F ratio–the ratio of these two components of total variance (it follows the F distribution) and can be calculated as follows:

F ratio = Variance between groups / Variance within groups

• The larger the F ratio:

– the larger the difference in the variance between groups

– the more it implies significant differences between the groups

– the more likely the null hypothesis will be rejected

Analyzing Relationships of Sample Data

Apply and interpret the results of the ANOVA and n-way ANOVA statistical methods
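
A sketch of the between-group / within-group decomposition behind the F ratio, using NumPy on the same hypothetical groups; SciPy's F distribution supplies the p-value:

```python
import numpy as np
from scipy import stats

groups = [np.array([5, 6, 5, 7, 6, 5]),
          np.array([4, 5, 4, 5, 4, 5]),
          np.array([6, 7, 6, 7, 6, 6])]

k = len(groups)
n_total = sum(len(g) for g in groups)
grand_mean = np.concatenate(groups).mean()

ss_between = sum(len(g) * (g.mean() - grand_mean) ** 2 for g in groups)  # between-group variation
ss_within = sum(((g - g.mean()) ** 2).sum() for g in groups)             # within-group variation

ms_between = ss_between / (k - 1)         # variance between groups
ms_within = ss_within / (n_total - k)     # variance within groups
f_ratio = ms_between / ms_within

p_value = stats.f.sf(f_ratio, k - 1, n_total - k)   # upper tail of the F distribution
print(f"F = {f_ratio:.2f}, p = {p_value:.4f}")
```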

Page 30

• ANOVA–cannot identify which pairs of means are significantly different from each other

– Follow-up Tests–tests that flag the means that are statistically different from each other

• Scheffé

• Tukey, Duncan, and Dunn

Analyzing Relationships of Sample Data

Apply and interpret the results of the ANOVA and n-way ANOVA statistical methods
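
Of the follow-up tests listed above, Tukey's HSD is readily available in statsmodels; a hedged sketch on hypothetical data (Scheffé, Duncan, and Dunn are not shown):

```python
import numpy as np
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Hypothetical ratings with their group labels (three groups of six respondents)
ratings = np.array([5, 6, 5, 7, 6, 5, 4, 5, 4, 5, 4, 5, 6, 7, 6, 7, 6, 6])
labels  = np.array(["A"] * 6 + ["B"] * 6 + ["C"] * 6)

# Tukey HSD follow-up test: flags which pairs of group means differ significantly
print(pairwise_tukeyhsd(ratings, labels, alpha=0.05))
```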

Page 31

• n-Way ANOVA

– In a one-way ANOVA–only one independent variable

– For several independent variables–an n-way ANOVA would be used

– Use of experimental designs–provides different groups in a sample with different information to see how their responses change

Analyzing Relationships of Sample Data

Apply and interpret the results of the ANOVA and n-way ANOVA statistical methods
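
A sketch of a two-way (n-way) ANOVA using the statsmodels formula interface; the rating, ad, and region variables are hypothetical, not from the chapter:

```python
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

# Hypothetical data: ratings with two categorical independent variables (ad and region)
df = pd.DataFrame({
    "rating": [5, 6, 5, 7, 4, 5, 4, 5, 6, 7, 6, 6, 5, 4, 5, 4],
    "ad":     ["new", "new", "old", "old"] * 4,
    "region": ["east"] * 8 + ["west"] * 8,
})

# Two-way ANOVA with an interaction term between the two factors
model = ols("rating ~ C(ad) * C(region)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))
```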

Page 32

Exhibit 15.15 (figure not reproduced)

Page 33

Exhibit 15.16 (figure not reproduced)

Page 34

• MANOVA–designed to examine multiple dependent variables across single or multiple independent variables

– Statistical calculations for MANOVA–similar to those for n-way ANOVA and available in statistical software packages such as SPSS and SAS

Analyzing Relationships of Sample Data

Apply and interpret the results of the ANOVA and n-way ANOVA statistical methods
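
The chapter points to SPSS and SAS for MANOVA; as a rough equivalent sketch, statsmodels also provides a MANOVA class (the data below are hypothetical):

```python
import pandas as pd
from statsmodels.multivariate.manova import MANOVA

# Hypothetical data: two dependent variables analyzed across one categorical factor
df = pd.DataFrame({
    "satisfaction": [5, 6, 5, 7, 4, 5, 4, 5, 6, 7, 6, 6],
    "loyalty":      [4, 5, 5, 6, 3, 4, 4, 4, 6, 6, 5, 6],
    "segment":      ["A"] * 4 + ["B"] * 4 + ["C"] * 4,
})

manova = MANOVA.from_formula("satisfaction + loyalty ~ segment", data=df)
print(manova.mv_test())   # Wilks' lambda, Pillai's trace, and related statistics
```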

Page 35

• Perceptual Mapping–process that is used to develop maps showing the perceptions of respondents. The maps are visual representations of respondents’ perceptions of a company, product, service, brand, or any other object in two dimensions

– Has a vertical and a horizontal axis that are labeled with descriptive adjectives

– Development of the perceptual map–rankings, mean ratings, and multivariate techniques

Perceptual Mapping

Utilize perceptual mapping to simplify presentation of research findings
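
A minimal perceptual-map sketch with matplotlib, plotting hypothetical mean ratings of four brands on two adjective-labeled axes (illustration only, not the chapter's exhibits):

```python
import matplotlib.pyplot as plt

# Hypothetical mean ratings of four brands on two descriptive dimensions
brands = {"Brand A": (5.2, 3.1), "Brand B": (2.8, 4.5),
          "Brand C": (4.0, 4.2), "Brand D": (3.1, 2.4)}

fig, ax = plt.subplots()
for name, (quality, value) in brands.items():
    ax.scatter(quality, value)
    ax.annotate(name, (quality, value), textcoords="offset points", xytext=(5, 5))

ax.set_xlabel("Perceived quality (mean rating)")   # horizontal axis adjective
ax.set_ylabel("Perceived value (mean rating)")     # vertical axis adjective
ax.set_title("Perceptual map (hypothetical data)")
plt.show()
```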

Page 36

Exhibit 15.17 (figure not reproduced)

Page 37

Exhibit 15.18 (figure not reproduced)

Page 38

• Applications in Marketing Research

1. New-product development

2. Image measurements

3. Advertising

4. Distribution

Perceptual Mapping

Utilize perceptual mapping to simplify presentation of research findings

Page 39

• Value of Testing for Differences in Data

• Guidelines for Graphics

• Measures of Central Tendency

• Measures of Dispersion

• Hypothesis Testing

• Analyzing Relationships of Sample Data

• Perceptual Mapping

Summary