A Primer on Mathematical Statistics and Univariate Distributions; The Normal Distribution;
The GLM with the Normal Distribution
PSYC 943 (930): Fundamentals of Multivariate Modeling
Lecture 4: September 5, 2012
Today's Class
• The building blocks: the basics of mathematical statistics:
  - Random variables: definitions and types
  - Univariate distributions: general terminology; the univariate normal (aka Gaussian); other popular (continuous) univariate distributions
  - Types of distributions: marginal, conditional, and joint
  - Expected values: means, variances, and the algebra of expectations
  - Linear combinations of random variables
• The finished product:
  - How the GLM fits within statistics
  - The GLM with the normal distribution
  - The statistical assumptions of the GLM
  - How to assess these assumptions
RANDOM VARIABLES AND STATISTICAL DISTRIBUTIONS
Random Variables
Random: situations in which the certainty of the outcome is unknown and is at least in part due to chance
+ Variable: a value that may change given the scope of a given problem or set of operations
= Random Variable: a variable whose outcome depends on chance (possible values might represent the possible outcomes of a yet-to-be-performed experiment)

Today we will denote a random variable with a lowercase letter: x
Types of Random Variables
• Random variables come in different types:
1. Continuous
   Examples: x represents the height of a person, drawn at random; Y_p (the outcome/DV in a GLM)
2. Discrete (also called categorical, generally)
   Example: x represents the gender of a person, drawn at random
3. Mixture of continuous and discrete
   Example: x represents response time (if between 0 and 45 seconds), and 0 otherwise
Key Features of Random Variables
• Each random variable is described by a probability density/mass function (PDF), f(x), that indicates the relative frequency of occurrence. A PDF is a mathematical function that gives a rough picture of the distribution from which a random variable is drawn
• The type of random variable dictates the name and nature of these functions:
  Continuous random variables:
  - f(x) is called a probability density function
  - The area under the curve must equal 1 (found by calculus: integration)
  - The height of the curve (the function value f(x)) can be any positive number and reflects the relative likelihood of an observation occurring
  Discrete random variables:
  - f(x) is called a probability mass function
  - The sum across all values must equal 1
  - The function value f(x) is a probability (so must range from 0 to 1)
Other Key Terms
• The sample space is the set of all values that a random variable x can take:
  - The sample space for a random variable x from a normal distribution (x ~ N(μ_x, σ²_x)) is (−∞, ∞): all real numbers
  - The sample space for a random variable x representing the outcome of a coin flip is {H, T}
  - The sample space for a random variable x representing the outcome of a roll of a die is {1, 2, 3, 4, 5, 6}
• When using generalized models (discussed next week), the trick is to pick a distribution with a sample space that matches the range of values obtainable by the data
Uses of Distributions in Data Analysis
• Statistical models make distributional assumptions about various parameters and/or parts of data
• These assumptions govern: how models are estimated; how inferences are made; how missing data may be imputed
• If data do not follow an assumed distribution, inferences may be inaccurate (sometimes a very big problem, other times not so much)
• Therefore, it can be helpful to check distributional assumptions prior to (or while) running statistical analyses. We will do this at the end of class
CONTINUOUS UNIVARIATE DISTRIBUTIONS
Continuous Univariate Distributions
• To demonstrate how continuous distributions work and look, we will discuss three: the uniform distribution, the normal distribution, and the chi-square distribution
• Each is described by a set of parameters, which we will later see are what give us our inferences when we analyze data
• What we then do is put constraints on those parameters based on hypothesized effects in data
Uniform Distribution
• The uniform distribution is shown to help set up how continuous distributions work
• For a continuous random variable x that ranges from (a, b), the uniform probability density function is:

f(x) = 1 / (b − a)

• The uniform distribution has two parameters:
  a: the lower limit
  b: the upper limit
• Notation: x ~ U(a, b)
More on the Uniform Distribution
• To demonstrate how PDFs work, we will try a few values:

  x     a    b    f(x)
  .5    0    1    1/(1 − 0) = 1
  .75   0    1    1/(1 − 0) = 1
  15    0    20   1/(20 − 0) = .05
  15    10   20   1/(20 − 10) = .1

• The uniform PDF has the feature that all values of x are equally likely across the sample space of the distribution; therefore, you do not see x in the PDF f(x)
• The mean of the uniform distribution is (1/2)(a + b)
• The variance of the uniform distribution is (1/12)(b − a)²
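These densities are easy to reproduce with software; here is a minimal SAS sketch (the data set name is just for illustration; SAS's PDF function takes the uniform's lower and upper limits as extra arguments):

data uniform_demo;
  input x a b;                      /* each row: a value of x and its (a, b) limits */
  fx = pdf('UNIFORM', x, a, b);     /* density f(x) = 1/(b - a) for a <= x <= b */
  datalines;
.5 0 1
.75 0 1
15 0 20
15 10 20
;
run;

proc print data=uniform_demo; run;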
Univariate Normal Distribution

• For a continuous random variable x (ranging from −∞ to ∞), the univariate normal distribution function is:

f(x) = (1 / √(2πσ²_x)) exp(−(x − μ_x)² / (2σ²_x))

• The shape of the distribution is governed by two parameters: the mean μ_x and the variance σ²_x
  These parameters are called sufficient statistics (they contain all the information about the distribution)
• The skewness (lean) and kurtosis (peakedness) are fixed
• Standard notation for normal distributions is x ~ N(μ_x, σ²_x)
  Read as: "x follows a normal distribution with a mean μ_x and a variance σ²_x"
• Linear combinations of random variables following normal distributions result in a random variable that is normally distributed
  You'll see this later with respect to the GLM…
Univariate Normal Distribution
[Figure: normal density curves. f(x) gives the height of the curve (relative frequency) for any value of x, μ_x, and σ²_x]
More of the Univariate Normal Distribution
• To demonstrate how the normal distribution works, we will try a few values:

  x     μ_x   σ²_x   f(x)
  .5    0     1      0.352
  .75   0     1      0.301
  .5    0     25     0.079
  .75   −2    1      0.009
  −2    −2    1      0.399

• The values of f(x) were obtained by using Excel (the "=normdist()" function); most statistics packages have a normal distribution function
• The mean of the normal distribution is μ_x
• The variance of the normal distribution is σ²_x
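A minimal SAS sketch that reproduces these values (note that SAS's PDF function takes the standard deviation, not the variance, so we pass the square root):

data normal_demo;
  input x mu var;
  fx = pdf('NORMAL', x, mu, sqrt(var));   /* normal density at x */
  datalines;
.5 0 1
.75 0 1
.5 0 25
.75 -2 1
-2 -2 1
;
run;

proc print data=normal_demo; run;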
Chi-Square Distribution
• Another frequently used univariate distribution is the chi-square distribution:
  - The sampling distribution of the variance follows a chi-square distribution
  - Likelihood ratios follow a chi-square distribution
• For a continuous random variable x (ranging from 0 to ∞), the chi-square distribution is given by:

f(x) = (1 / (2^(ν/2) Γ(ν/2))) x^(ν/2 − 1) exp(−x/2)

• Γ(·) is called the gamma function
• The chi-square distribution is governed by one parameter: ν (the degrees of freedom)
  The mean is equal to ν; the variance is equal to 2ν
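Again, a quick sketch of how this density could be evaluated in SAS (the degrees of freedom value is just for illustration):

data chisq_demo;
  nu = 3;                        /* degrees of freedom */
  do x = 1 to 10;
    fx = pdf('CHISQ', x, nu);    /* chi-square density at x */
    output;
  end;
run;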
(Univariate) Chi-Square Distribution
[Figure: chi-square density curve, f(x) plotted against x]
MARGINAL, JOINT, AND CONDITIONAL DISTRIBUTIONS
Moving from One to Multiple Random Variables

• When more than one random variable is present, there are several different types of statistical distributions
• We will first consider two discrete random variables:
  - x is the outcome of the flip of a penny (H_p, T_p): f(x = H_p) = .5; f(x = T_p) = .5
  - z is the outcome of the flip of a dime (H_d, T_d): f(z = H_d) = .5; f(z = T_d) = .5
• We will consider the following distributions:
  - Marginal distribution: the distribution of one variable only (either f(x) or f(z))
  - Joint distribution: f(x, z), the distribution of both variables (both x and z)
  - Conditional distribution: the distribution of one variable, conditional on values of the other:
    f(x|z): the distribution of x given z
    f(z|x): the distribution of z given x
Marginal Distributions
• Marginal distributions are what we have worked with exclusively up to this point: they represent the distribution of one variable by itself
  - Continuous univariate distributions: uniform, normal, chi-square
  - Categorical distributions in our example: the flip of a penny, f(x); the flip of a dime, f(z)
Joint Distributions

• Joint distributions describe the distribution of more than one variable, simultaneously: representations of multiple variables collected together
• Commonly, the joint distribution function is denoted with all random variables separated by commas
  In our example, f(x, z) is the joint distribution of the outcome of flipping both a penny and a dime
  As both are discrete, the joint distribution has four possible values: f(x = H_p, z = H_d), f(x = H_p, z = T_d), f(x = T_p, z = H_d), f(x = T_p, z = T_d)
• Joint distributions are multivariate distributions
  We will cover the continuous versions of these in a few weeks (the multivariate normal distribution)
  For our purposes, we will use joint distributions to introduce two topics:
  - Joint distributions of independent variables
  - Joint likelihoods (next class): used in maximum likelihood estimation
Joint Distributions of Independent Random Variables

• Random variables are said to be independent if the occurrence of one event makes the occurrence of another event neither more nor less probable
  For joint distributions, this means: f(x, z) = f(x)f(z)
• In our example, flipping a penny and flipping a dime are independent, so we can complete the following table of their joint distribution (cells: joint; last column and row: marginals):

             z = H_d                z = T_d                Marginal (Penny)
  x = H_p    f(x = H_p, z = H_d)    f(x = H_p, z = T_d)    f(x = H_p)
  x = T_p    f(x = T_p, z = H_d)    f(x = T_p, z = T_d)    f(x = T_p)
  Marginal   f(z = H_d)             f(z = T_d)
  (Dime)
Joint Distributions of Independent Random Variables

• Because the coin flips are independent, this becomes:

             z = H_d                z = T_d                Marginal (Penny)
  x = H_p    f(x = H_p)f(z = H_d)   f(x = H_p)f(z = T_d)   f(x = H_p)
  x = T_p    f(x = T_p)f(z = H_d)   f(x = T_p)f(z = T_d)   f(x = T_p)
  Marginal   f(z = H_d)             f(z = T_d)
  (Dime)

• Then, with numbers:

             z = H_d   z = T_d   Marginal (Penny)
  x = H_p    .25       .25       .5
  x = T_p    .25       .25       .5
  Marginal   .5        .5
  (Dime)
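The product rule is easy to demonstrate in a small SAS data step (a sketch; the value labels are arbitrary):

data joint_coins;
  fx = 0.5;  fz = 0.5;      /* marginal probabilities for penny (x) and dime (z) */
  do x = 'Hp', 'Tp';
    do z = 'Hd', 'Td';
      fxz = fx * fz;        /* independence: f(x,z) = f(x)f(z) = .25 per cell */
      output;
    end;
  end;
run;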
Marginalizing Across a Joint Distribution
• If you had a joint distribution, f(x, z), but wanted the marginal distribution of either variable (f(x) or f(z)), you would have to marginalize across one dimension of the joint distribution
• For categorical random variables, marginalize = sum across z:

f(x) = Σ_z f(x, z)

  For example, f(x = H_p) = f(x = H_p, z = H_d) + f(x = H_p, z = T_d) = .5
• For continuous random variables, marginalize = integrate across z:

f(x) = ∫_z f(x, z) dz

  No integration needed from you, just a conceptual understanding (here, the integral = an eraser!)
Conditional Distributions
• For two random variables x and z, a conditional distribution is written as f(z|x): the distribution of z given x
• The conditional distribution is also equal to the joint distribution divided by the marginal distribution of the conditioning random variable:

f(z|x) = f(z, x) / f(x)

• Conditional distributions are found everywhere in statistics
  As we will see, the general linear model uses the conditional distribution of the dependent variable (where the independent variables are the conditioning variables)
Conditional Distributions
• For discrete random variables, the conditional distribution is fairly easy to show. Using the joint distribution from before:

             z = H_d   z = T_d   Marginal (Penny)
  x = H_p    .25       .25       .5
  x = T_p    .25       .25       .5
  Marginal   .5        .5
  (Dime)

  the conditional distribution f(z | x = H_p) is:

f(z = H_d | x = H_p) = f(z = H_d, x = H_p) / f(x = H_p) = .25 / .5 = .5
f(z = T_d | x = H_p) = f(z = T_d, x = H_p) / f(x = H_p) = .25 / .5 = .5

• We will show a continuous conditional distribution with the GLM in a few slides
EXPECTED VALUES AND THE ALGEBRA OF EXPECTATIONS
Expected Values
• Expected values are statistics taken over the sample space of a random variable: they are essentially weighted averages
• The weights used in computing this average correspond to the probabilities (for a discrete random variable) or to the densities (for a continuous random variable)
• Notation: the expected value is represented by E(x)
  The actual statistic that is being weighted by the PDF is put into the parentheses where x is now
• Expected values allow us to understand what a statistical model implies about data, for instance: how a GLM specifies the (conditional) mean and variance of a DV
Expected Value Calculation
• For discrete random variables, the expected value is found by:

E(x) = Σ_x x · f(x)

• For example, the expected value of a roll of a die is:

E(x) = 1(1/6) + 2(1/6) + 3(1/6) + 4(1/6) + 5(1/6) + 6(1/6) = 3.5
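To make the weighted average concrete, here is a tiny SAS sketch of the same calculation:

data die_mean;
  do x = 1 to 6;
    ex + x * (1/6);   /* accumulate x * f(x); sums to 3.5 */
  end;
  put ex=;            /* writes ex=3.5 to the log */
run;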
• For continuous random variables, the expected value is found by:

E(x) = ∫_x x · f(x) dx

• We won't be calculating theoretical expected values with calculus… we use them only to see how models imply things about our data
Variance and Covariance… As Expected Values

• A distribution's theoretical variance can also be written as an expected value:

V(x) = E[(x − E(x))²] = E[(x − μ_x)²]

  This formula will help us understand the predictions made by GLMs and how they correspond to the statistical parameters we interpret
• For a roll of a die, the theoretical variance is:

V(x) = E[(x − 3.5)²] = (1/6)(1 − 3.5)² + (1/6)(2 − 3.5)² + (1/6)(3 − 3.5)² + (1/6)(4 − 3.5)² + (1/6)(5 − 3.5)² + (1/6)(6 − 3.5)² = 2.92

  Likewise, the SD is then √2.92 = 1.71
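Extending the sketch above from the mean to the variance (again, illustrative SAS):

data die_var;
  mu = 3.5;                       /* E(x) from the previous slide */
  do x = 1 to 6;
    vx + (x - mu)**2 * (1/6);     /* accumulate (x - mu)^2 * f(x); sums to 2.92 */
  end;
  sd = sqrt(vx);                  /* 1.71 */
  put vx= sd=;
run;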
• Likewise, for a pair of random variables x and z, the covariance can be found from their joint distribution:

Cov(x, z) = E(xz) − E(x)E(z) = E(xz) − μ_x μ_z
LINEAR COMBINATIONS OF RANDOM VARIABLES
Linear Combinations of Random Variables
A linear combination is an expression constructed from a set of terms by multiplying each term by a constant and then adding the results:

x = a₁v₁ + a₂v₂ + ⋯ + a_n v_n

The linear regression equation is a linear combination

• More generally, linear combinations of random variables have specific implications for the mean, variance, and possibly covariance of the new random variable
• As such, there are predictable ways in which the means, variances, and covariances change. These rules are called the algebra of expectations
• To guide us through this process, we will use the descriptive statistics from the height/weight/gender example from our 1st class
Descriptive Statistics for Height/Weight Data

  Variable   Mean    SD       Variance
  Height     67.9    7.44     55.358
  Weight     183.4   56.383   3,179.095
  Female     0.5     0.513    0.263

Correlation/covariance matrix (below diagonal: correlation; diagonal: variance; above diagonal: covariance):

            Height   Weight      Female
  Height    55.358   334.832     −2.263
  Weight    .798     3,179.095   −27.632
  Female    −.593    −.955       .263
Algebra of Expectations
Here are some properties of expected values (true for any type of random variable), where x and z are random variables and a and b are constants:

Sums of constants:
  E(x + a) = E(x) + a
  V(x + a) = V(x)
  Cov(x + a, z) = Cov(x, z)

Products of constants:
  E(ax) = aE(x)
  V(ax) = a²V(x)
  Cov(ax, bz) = ab Cov(x, z)

Sums of random variables:
  E(ax + bz) = aE(x) + bE(z)
  V(ax + bz) = a²V(x) + b²V(z) + 2ab Cov(x, z)
Examples for Algebra of Expectations

• Imagine you wanted to convert weight from pounds to kilograms (where 1 pound = 0.453 kg):

Weight_kg = .453 Weight_lb

• The mean (expected value) of weight in kg:

E(Weight_kg) = E(.453 Weight_lb) = .453 E(Weight_lb) = .453 μ_Weight_lb = .453 × 183.4 = 83.08 kg

• The variance of weight in kg:

V(Weight_kg) = V(.453 Weight_lb) = .453² V(Weight_lb) = .453² × 3,179.095 = 652.38 kg²

• The covariance of weight in kg with height in inches:

Cov(Weight_kg, Height) = Cov(.453 Weight_lb, Height) = .453 Cov(Weight_lb, Height) = .453 × 334.832 = 151.68 kg·inches
Don't Take My Word For It…

[Slide shows SAS syntax for transforming weight in a DATA step, SAS syntax for marginal descriptive statistics and covariances, and the resulting SAS output]
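The original code appeared only as an image; a minimal reconstruction of what such steps could look like (the data set and variable names here are assumptions):

data work.hw2;
  set work.heightweight;          /* assumed input data set */
  weightkg = .453 * weightlb;     /* convert pounds to kilograms */
run;

proc means data=work.hw2 mean std var;
  var heightin weightlb weightkg; /* marginal descriptive statistics */
run;

proc corr data=work.hw2 cov;      /* COV adds the covariance matrix to the correlations */
  var heightin weightlb weightkg;
run;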
Where We Use This… The Dreaded ESTIMATE Statement

• The ESTIMATE statement in SAS computes the expected value and standard error (square root of variance) for a new random variable
  - The new random variable is a linear combination of the original model parameters (the fixed effects)
  - The original model parameters are considered "random" here as their sampling distribution is used (assuming normal errors and a large N)

Estimate = 1 · β_experience + 1 · β_G2*experience

• Where:
  - β_experience has mean β̂_experience and variance SE(β̂_experience)²
  - β_G2*experience has mean β̂_G2*experience and variance SE(β̂_G2*experience)²
  - There exists a covariance between β̂_experience and β̂_G2*experience; we'll call this Cov(β̂_experience, β̂_G2*experience)
More ESTIMATE Statement Fun

• So… if the estimates are β̂_experience = −.385 (SE = .296) and β̂_G2*experience = −.631 (SE = .391), and Cov(β̂_experience, β̂_G2*experience) = −.08756, what is:

E(Estimate) = E(1 · β_experience + 1 · β_G2*experience)
            = 1 · E(β_experience) + 1 · E(β_G2*experience) = −.385 − .631 = −1.016

V(Estimate) = V(1 · β_experience + 1 · β_G2*experience)
            = 1² V(β_experience) + 1² V(β_G2*experience) + 2 · 1 · 1 · Cov(β_experience, β_G2*experience)
            = .296² + .391² − 2 × .08756 = .0653

SE(Estimate) = √V(Estimate) = .257
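You can verify this arithmetic with a small SAS data step using the numbers above (a sketch, not the ESTIMATE statement itself):

data estimate_demo;
  b1 = -.385;  se1 = .296;    /* estimate and SE for experience */
  b2 = -.631;  se2 = .391;    /* estimate and SE for G2*experience */
  cov12 = -.08756;            /* covariance of the two estimates */
  est    = 1*b1 + 1*b2;                   /* expected value: -1.016 */
  varest = se1**2 + se2**2 + 2*cov12;     /* variance: about .0654 */
  seest  = sqrt(varest);                  /* SE: about .256 from these rounded inputs */
  put est= varest= seest=;
run;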
THE GENERAL LINEAR MODEL WITH WHAT WE HAVE LEARNED TODAY
The General Linear Model, Revisited
• The general linear model for predicting Y from X and Z:

Y_p = β₀ + β₁X_p + β₂Z_p + β₃X_p Z_p + e_p

• In terms of random variables, under the GLM:
  - e_p is considered random: e_p ~ N(0, σ²_e)
  - Y_p is dependent on the linear combination of X_p, Z_p, and e_p
• The GLM provides a model for the conditional distribution of the dependent variable, where the conditioning variables are the independent variables: f(Y_p | X_p, Z_p)
  - There are no assumptions made about X_p and Z_p: they are constants
  - The regression slopes β₀, β₁, β₂, β₃ are constants that are said to be fixed at their values (hence, called fixed effects)
Combining the GLM with Expectations
• Using the algebra of expectations for predicting Y from X and Z:
  The expected value (mean) of f(Y_p | X_p, Z_p):

Ŷ_p = E(Y_p) = E(β₀ + β₁X_p + β₂Z_p + β₃X_p Z_p + e_p)
             = β₀ + β₁X_p + β₂Z_p + β₃X_p Z_p + E(e_p)
             = β₀ + β₁X_p + β₂Z_p + β₃X_p Z_p

  (the βs and predictors are constants; e_p is a random variable with E(e_p) = 0)

  The variance of f(Y_p | X_p, Z_p):

V(Y_p) = V(β₀ + β₁X_p + β₂Z_p + β₃X_p Z_p + e_p) = V(e_p) = σ²_e
Distribution of f(Y_p | X_p, Z_p)

• We just found the mean (expected value) and variance implied by the GLM for the conditional distribution of Y_p given X_p and Z_p
• The next question: what is the distribution of f(Y_p | X_p, Z_p)?
• Linear combinations of random variables that are normally distributed result in variables that are normally distributed
• Because e_p ~ N(0, σ²_e) is the only random term in the GLM, the resulting conditional distribution of Y_p is normally distributed:

Y_p ~ N(β₀ + β₁X_p + β₂Z_p + β₃X_p Z_p, σ²_e)

  - Model for the means: from the fixed effects; literally gives the mean of f(Y_p | X_p, Z_p)
  - Model for the variances: from the random effects; gives the variance of f(Y_p | X_p, Z_p)
Examining What This Means in the Context of Data
• If you recall from our first lecture, the final model we decided to interpret was Model 5:

Y_p = β₀ + β₁(H_p − H̄) + β₂F_p + β₃(H_p − H̄)F_p + e_p, where e_p ~ N(0, σ²_e)

• [Slide shows the parameter estimates from the SAS output]
Picturing the GLM with Distributions
The distributional assumptions of the GLM are the reason why we do not need to worry about whether our dependent variable is normally distributed
  - Our dependent variable should be conditionally normal
  - We can check this assumption by checking our assumption about the residuals: e_p ~ N(0, σ²_e)
More Pictures of the GLM
• Treating our estimated values of the slopes β₀, β₁, β₂, β₃ and the residual variance σ²_e as the true values*, we can now see what the theoretical* distribution of f(Weight_p | Height_p, Female_p) looks like for a given set of predictors

*Note: these distributions change when sample estimates are used (think standard error of the prediction)
Behind the Pictures…

• To emphasize the point that PDFs provide the height of the line, here is the normal PDF (with numbers) that produced those plots:

f(Y_p | H_p, F_p) = (1 / √(2πσ²_e)) exp(−(Y_p − Ŷ_p)² / (2σ²_e))

= (1 / √(2πσ²_e)) exp(−(Y_p − [β₀ + β₁(H_p − H̄) + β₂F_p + β₃(H_p − H̄)F_p])² / (2σ²_e))

= (1 / √(2π · 4.73)) exp(−(Y_p − [222.18 + 3.19(H_p − H̄) − 82.27F_p − 1.09(H_p − H̄)F_p])² / (2 · 4.73))

  (the bracketed term is the model for the means; σ²_e = 4.73 is the model for the variance)

• The plots were created using the following values for the predictors: H̄ = 67.9
  - Left plot: H_p = 62; F_p = 1
  - Right plot: H_p = 76; F_p = 0
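A sketch of how those curve heights could be computed in SAS, using the numbers above (treating the estimates as true values, as the slide does):

data glm_density;
  hbar = 67.9;  s2e = 4.73;      /* mean height and residual variance from the slide */
  hp = 62;  fp = 1;              /* the left plot's predictor values */
  yhat = 222.18 + 3.19*(hp - hbar) - 82.27*fp - 1.09*(hp - hbar)*fp;
  do y = yhat - 8 to yhat + 8 by .5;
    fy = pdf('NORMAL', y, yhat, sqrt(s2e));   /* height of the conditional density */
    output;
  end;
run;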
ASSESSING UNIVARIATE NORMALITY IN SAS
Assessing Univariate Normality in SAS
• The assumption of normally distributed residuals permeates the GLM
  Good news: of all the distributional assumptions, this seems to be the least damaging to violate. GLMs are robust to violations of normality
• Methods exist to examine residuals from an analysis and thereby determine the adequacy of a model:
  - Graphical methods: quantile-quantile plots (from PROC GLM)
  - Hypothesis tests (from PROC UNIVARIATE)
• Both approaches have problems:
  - Graphical methods do not determine how much deviation is attributable to chance
  - Hypothesis tests become overly sensitive to small deviations when sample size is large (they have great power)
• To emphasize how distributions work, we will briefly discuss both
Assessing Distributional Assumptions Graphically
• A useful tool to evaluate the plausibility of a distributional assumption is the quantile versus quantile plot (more commonly called a Q-Q plot)
• A Q-Q plot is formed by comparing the observed quantiles of a variable with those of a known statistical distribution
  - A quantile is the particular ordering of a given observation
  - In our data, a person with a height of 71 is the 39th tallest person (out of 50)
  - This corresponds to the person being at the (39 − .5)/50 = .77, or 77th, percentile of the distribution (taller than 77% of the distribution)
  - The Q-Q plot then converts the percentile to a quantile using the sample mean and variance
  - A quantile is the value of an observation at a given percentile (here, the 77th)
• If the data deviate from a straight line, the data are not likely to follow that theoretical distribution
Q-Q Plots of GLM Residuals
• When you turn on ODS Graphics, SAS PROC GLM will provide Q-Q plots for you with one command (the PLOTS= option)
• If the residuals are normally distributed, they will fall on the line
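A minimal sketch of the syntax (the data set and model come from the running example, so the names are assumptions):

ods graphics on;
proc glm data=work.heightweight plots=diagnostics;
  model weight = heightc female heightc*female / solution;   /* Q-Q plot appears in the diagnostics panel */
run;
quit;
ods graphics off;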
Example Q-Q Plot of Non-Normal Data
Hypothesis Tests for Normality
• Additionally, using PROC UNIVARIATE, you can receive up to four hypothesis tests for testing H₀: the data come from a normal distribution
• [Slide shows syntax for saving residuals in PROC GLM, using the new data set in PROC UNIVARIATE, and the PROC UNIVARIATE output]
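That code was shown as an image; a hedged reconstruction of the two steps it describes (names are assumptions):

proc glm data=work.heightweight;
  model weight = heightc female heightc*female / solution;
  output out=work.resids residual=ehat;     /* save residuals to a new data set */
run;
quit;

proc univariate data=work.resids normal;    /* NORMAL requests the normality tests */
  var ehat;
  qqplot ehat / normal(mu=est sigma=est);   /* Q-Q plot against an estimated normal */
run;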
• If a given test is significant, it is saying that your data do not come from a normal distribution
• In practice, the tests will give diverging information quite frequently: the best way to evaluate normality is to consider both plots and tests (approximate = good)
WRAPPING UP
Wrapping Up Today's Class
• Today was an introduction to mathematical statistics as a way to understand the implications statistical models make about data
• Although many of these topics do not seem directly relevant, they help provide insights that untrained analysts may not easily attain. They also help you to understand when and when not to use a model!
• We will use many of these same tools in our next class: estimation of GLMs, then (least squares) and now (maximum likelihood)