
Page 1: By Daniel Park and Zach Ney

By Daniel Park and Zach Ney

Significance Tests

Page 2: By Daniel Park and Zach Ney

• Significance tests are a procedure for comparing observed data with a claim (hypothesis).

• The null hypothesis is the hypothesis that there is no difference between the observed and the expected values.

• The alternative hypothesis is the hypothesis that there is a difference between the observed and the expected values.

• In a significance test we are trying to find evidence against the null hypothesis so that we can accept the alternative.

Logic of significance testing

Page 3: By Daniel Park and Zach Ney

• Step 1: State the parameter. For example, suppose we are testing Shaq's claim that he made 16/20, or 80 percent, of his free throws at the gym yesterday, given that when he tried again he made only 8/20. The parameter in this case is p, the true proportion of free throws that Shaq makes.

STEPS FOR MAKING A SIGNIFICANCE TEST

Page 4: By Daniel Park and Zach Ney

• Step 2: Define your null and alternative hypotheses.

• The alternative hypothesis can be one-sided or two-sided: one-sided means you take only one of the tail probabilities, while two-sided means you take both tails.

• In this case the null hypothesis is that the proportion of free throws made is 0.80, or H0: p = 0.8.

• The alternative here is that the proportion is less than 0.8, so this is a one-sided test: Ha: p < 0.8.

• If it were a two-sided test, the alternative would be Ha: p ≠ 0.8. (A short sketch of the one-sided versus two-sided tail areas follows this slide.)

STEPS FOR MAKING A SIGNIFICANCE TEST
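To make the one-sided versus two-sided distinction concrete, here is a minimal Python sketch (not from the original slides) that computes both tail areas for a single z value using the standard library's NormalDist; the value −1.79 is arbitrary and only for illustration.

```python
from statistics import NormalDist

z = -1.79  # a hypothetical test statistic, chosen only to illustrate the two tail areas

# One-sided alternative (Ha: p < p0): take only the lower tail.
p_one_sided = NormalDist().cdf(z)

# Two-sided alternative (Ha: p != p0): take both tails, i.e. double the smaller tail.
p_two_sided = 2 * NormalDist().cdf(-abs(z))

print(f"one-sided p-value: {p_one_sided:.4f}")  # about 0.037
print(f"two-sided p-value: {p_two_sided:.4f}")  # about 0.073
```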

Page 5: By Daniel Park and Zach Ney

• Step 3: Check your conditions.

• A. The data need to come from an SRS. The problem should state whether or not it is an SRS.

• B. The sampling distribution needs to be normal or approximately normal. This is checked with the following rules: for proportions, n·p0 ≥ 10 and n·(1 − p0) ≥ 10; for means, n ≥ 30, or n ≥ 15 if there are no outliers or strong skewness.

• C. The observations need to be independent. This is the 10% rule: the sample should be less than 10% of the population. You do not need to check this condition when the observations are independent by nature, such as births; knowing the gender of one child at birth does not affect the gender of the next baby born. (A sketch of these checks for the free-throw numbers follows this slide.)

STEPS FOR MAKING A SIGNIFICANCE TEST
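A minimal sketch of these condition checks with the free-throw numbers (n = 20, p0 = 0.80); the population size is purely hypothetical. Note that with these numbers n(1 − p0) is only 4, so the large-counts check would actually fail for such a small sample.

```python
n, p0 = 20, 0.80                      # sample size and claimed proportion from the free-throw example

# Large-counts (normality) condition for a proportion: n*p0 >= 10 and n*(1 - p0) >= 10.
expected_makes = n * p0               # 16 expected makes
expected_misses = n * (1 - p0)        # 4 expected misses
print("n*p0 >= 10:", expected_makes >= 10)        # True
print("n*(1-p0) >= 10:", expected_misses >= 10)   # False: the check fails for this small sample

# 10% condition for independence: the sample should be under 10% of the population.
population = 1000                     # hypothetical population size, only for illustration
print("10% condition:", n <= 0.10 * population)   # True
```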

Page 6: By Daniel Park and Zach Ney

• Step 4: Now you need to calculate the actual test. The first way is to do it by hand.

• Test statistic = (statistic − parameter) / (standard deviation of the statistic).

• For proportions: z = (p̂ − p0) / √(p0(1 − p0)/n).

• For means: t = (x̄ − μ0) / (s/√n).

• Once you have a z or t value, use the tables in the back of the book or your calculator to find the p-value. On the calculator it is normalcdf(min, max) for z and tcdf(min, max, df) for t.

• This p-value tells you whether or not you can reject the null. If the p-value is smaller than your significance level, you reject the null and accept the alternative. The ways you would state this are below. (A worked sketch of the calculation for the free-throw example follows this slide.)

• When P is less than the significance level: "We have sufficient evidence to reject the null and can conclude the alternative."

• When P is greater: "We do not have sufficient evidence to reject the null and cannot conclude the alternative."

STEPS FOR MAKING A SIGNIFICANCE TEST
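Here is a minimal Python sketch of the by-hand calculation for the free-throw example (p̂ = 8/20 tested against p0 = 0.80). NormalDist().cdf stands in for the calculator's normalcdf; the code is an illustration, not part of the original slides.

```python
from math import sqrt
from statistics import NormalDist

# Free-throw example from the slides: claimed p0 = 0.80, the new attempt makes 8 of 20.
p0, n, made = 0.80, 20, 8
p_hat = made / n

# Test statistic: z = (p_hat - p0) / sqrt(p0 * (1 - p0) / n)
z = (p_hat - p0) / sqrt(p0 * (1 - p0) / n)

# One-sided alternative Ha: p < 0.80, so the p-value is the lower-tail area,
# the same area normalcdf would give on the calculator.
p_value = NormalDist().cdf(z)

print(f"z = {z:.2f}, p-value = {p_value:.7f}")  # roughly z = -4.47, p-value = 0.000004
```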

Page 7: By Daniel Park and Zach Ney

• When P is less than the significance level: "We have sufficient evidence to reject the null and can conclude the alternative."

• When P is greater: "We do not have sufficient evidence to reject the null and cannot conclude the alternative."

• For example, on the basketball problem, if p is less than .05 we would say: "We have sufficient evidence to reject the null and can conclude that his free-throw percentage is indeed less than 80%."

• If p is greater than .05, we would say that we do not have sufficient evidence to reject the null and cannot conclude that the percentage is less than 80%. (A small sketch of this decision step follows this slide.)

CONCLUSION
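A tiny sketch of the decision step; conclude is a hypothetical helper name, and the small p-value plugged in below is the one from the free-throw calculation earlier.

```python
def conclude(p_value, alpha=0.05):
    # Decision rule from the slides: reject H0 only when the p-value is below the significance level.
    if p_value < alpha:
        return "Sufficient evidence to reject the null; conclude the alternative."
    return "Not enough evidence to reject the null; cannot conclude the alternative."

print(conclude(0.000004))  # free-throw example: reject H0, his percentage is below 80%
print(conclude(0.12))      # a hypothetical larger p-value: fail to reject H0
```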

Page 8: By Daniel Park and Zach Ney

Extras

• You can perform these tests in other situations as well, such as significance tests for comparing two proportions and so on. In those cases you follow the same basic four steps, as in the sketch below.
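As a rough sketch of how the same steps carry over, here is a pooled two-proportion z test; all counts are made up for illustration and do not come from the slides.

```python
from math import sqrt
from statistics import NormalDist

# Hypothetical data: group 1 makes 55 of 80 attempts, group 2 makes 35 of 70.
x1, n1 = 55, 80
x2, n2 = 35, 70
p1, p2 = x1 / n1, x2 / n2

# Pool the two samples to estimate the common proportion assumed under H0: p1 = p2.
p_pool = (x1 + x2) / (n1 + n2)
se = sqrt(p_pool * (1 - p_pool) * (1 / n1 + 1 / n2))

z = (p1 - p2) / se
p_value = 2 * NormalDist().cdf(-abs(z))   # two-sided alternative Ha: p1 != p2

print(f"z = {z:.2f}, two-sided p-value = {p_value:.4f}")  # roughly z = 2.34, p = 0.019
```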

Page 9: By Daniel Park and Zach Ney

Types of Errors

• Type I Error: Reject H0 when H0 is actually true.
  – α is the probability of making a Type I Error.

• Type II Error: Fail to reject H0 when H0 is false.
  – β is the probability of making a Type II Error.
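One way to see what α means is a small simulation: when H0 is really true, the test should commit a Type I error in roughly α of repeated samples. A sketch, assuming a one-proportion z test and a larger hypothetical sample size (n = 100) so the normal approximation is reasonable.

```python
import random
from math import sqrt
from statistics import NormalDist

def one_prop_p_value(made, n, p0):
    # One-sided (lower-tail) p-value for H0: p = p0 versus Ha: p < p0.
    p_hat = made / n
    z = (p_hat - p0) / sqrt(p0 * (1 - p0) / n)
    return NormalDist().cdf(z)

random.seed(1)
alpha, p0, n = 0.05, 0.80, 100     # hypothetical setup: H0 is true, p really is 0.80
trials, rejections = 10_000, 0

for _ in range(trials):
    made = sum(random.random() < p0 for _ in range(n))   # simulate n free throws with p = 0.80
    if one_prop_p_value(made, n, p0) < alpha:
        rejections += 1                                   # rejecting a true H0 is a Type I error

print("simulated Type I error rate:", rejections / trials)   # should land near alpha = 0.05
```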

Page 10: By Daniel Park and Zach Ney

Power of the Test

• Power is the probability that H0 was correctly rejected

• Power = 1 − β.
  – Higher values of power are better; lower values suggest a higher chance of a Type II Error.
  – A larger sample size gives higher power.
  – The standard desired level of power is 80%.
  – A larger α also increases power.
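A sketch of a power calculation for the one-sided free-throw test, assuming a hypothetical true proportion of 0.60; it shows power rising with the sample size, in line with the bullets above.

```python
from math import sqrt
from statistics import NormalDist

def power_one_prop(p0, p_true, n, alpha=0.05):
    # Power of the one-sided test H0: p = p0 vs Ha: p < p0 when the true proportion is p_true.
    z_alpha = NormalDist().inv_cdf(alpha)                 # lower-tail critical z (about -1.645 for alpha = 0.05)
    cutoff = p0 + z_alpha * sqrt(p0 * (1 - p0) / n)       # reject H0 whenever p_hat falls below this cutoff
    z = (cutoff - p_true) / sqrt(p_true * (1 - p_true) / n)
    return NormalDist().cdf(z)                            # probability of rejecting when p = p_true

# Hypothetical true proportion 0.60: power climbs toward 1 as n grows, and 1 - power is beta.
for n in (20, 50, 100):
    print(n, round(power_one_prop(0.80, 0.60, n), 3))
```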

Page 11: By Daniel Park and Zach Ney
Page 12: By Daniel Park and Zach Ney

Sources

• Chapters 6 and 7 notes
• Chapters 6 and 7 from the AP Statistics book