Inferential Statistics for b and r
Prerequisites
Sampling Distribution of r, Confidence Interval for r
Learning Objectives
- State the assumptions that inferential statistics in regression are based upon
- Identify heteroscedasticity in a scatterplot
- Compute the standard error of a slope
- Test a slope for significance
- Construct a confidence interval on a slope
- Test a correlation for significance
- Construct a confidence interval on a correlation
This section shows how to conduct significance
tests and compute confidence intervals for the regression slope
and Pearson's correlation. As you will see, if the regression
slope is significantly different from zero, then the correlation
coefficient is also significantly different from zero.
Assumptions
Although no assumptions were needed to determine the best-fitting straight line, assumptions are made in the calculation of inferential statistics. Naturally, these assumptions refer to the population, not the sample.
- Linearity: The relationship between the two variables is linear.
- Homoscedasticity: The variance around the regression line is the same for all values of X. A clear violation of this assumption is shown in Figure 1. Notice that the predictions for students with high high-school GPAs are very good, whereas the predictions for students with low high-school GPAs are not. In other words, the points for students with high high-school GPAs are close to the regression line, whereas the points for students with low high-school GPAs are not.
- Normality: The errors of prediction are distributed normally. This means that the distributions of deviations from the regression line are normally distributed. It does not mean that X or Y is normally distributed.
Significance Test for the Slope (b)
Recall the general formula for a t test:

t = (statistic - hypothesized value) / (estimated standard error of the statistic)

As applied here, the statistic is the sample value of the slope (b) and the hypothesized value is 0. The degrees of freedom for this test are:

df = N - 2

where N is the number of pairs of scores.
The estimated standard error of b is computed using the following formula:

sb = sest / √SSX

where sb is the estimated standard error of b and sest is the standard error of the estimate. SSX is the sum of squared deviations of X from the mean of X. It is calculated as

SSX = Σ(X - Mx)²

where Mx is the mean of X. As shown previously, the standard error of the estimate can be calculated as

sest = √[Σ(Y - Y')² / (N - 2)]
These formulas are illustrated with the data shown in Table 1. These data are reproduced from the introductory section. The column X has the values of the predictor variable and the column Y has the values of the criterion variable. The third column, x, contains the differences between the column X and the mean of X. The fourth column, x², is the square of the x column. The fifth column, y, contains the differences between the column Y and the mean of Y. The last column, y², is simply the square of the y column.
Table 1. Example data.

  X       Y       x      x²      y       y²
 1.00    1.00   -2.00    4     -1.06   1.1236
 2.00    2.00   -1.00    1     -0.06   0.0036
 3.00    1.30    0.00    0     -0.76   0.5776
 4.00    3.75    1.00    1      1.69   2.8561
 5.00    2.25    2.00    4      0.19   0.0361
sum
15.00   10.30    0.00   10.00   0.00   4.5970
The computation of the standard error of the estimate (sest) for these data is shown in the section on the standard error of the estimate. It is equal to 0.964.

sest = 0.964

SSX is the sum of squared deviations from the mean of X. It is therefore equal to the sum of the x² column, which is 10.

SSX = 10.00
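Both values can be verified directly from the Table 1 data. The sketch below (Python; the variable names are mine, not from the text) recomputes the slope, sest, and SSX from scratch:

```python
import math

# Example data from Table 1
X = [1.0, 2.0, 3.0, 4.0, 5.0]
Y = [1.00, 2.00, 1.30, 3.75, 2.25]
N = len(X)

mx = sum(X) / N
my = sum(Y) / N

# SSX: sum of squared deviations of X from its mean
SSX = sum((x - mx) ** 2 for x in X)

# least-squares slope and intercept
b = sum((x - mx) * (y - my) for x, y in zip(X, Y)) / SSX
a = my - b * mx

# standard error of the estimate: sqrt(sum of squared residuals / (N - 2))
sse = sum((y - (a + b * x)) ** 2 for x, y in zip(X, Y))
s_est = math.sqrt(sse / (N - 2))

print(b, SSX, s_est)   # 0.425, 10.0, about 0.964
```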
We now have all the information needed to compute the standard error of b:

sb = 0.964 / √10.00 = 0.305

As shown previously, the slope (b) is 0.425. Therefore,

t = 0.425 / 0.305 = 1.39

df = N - 2 = 5 - 2 = 3.

The p value for a two-tailed test is 0.26. Therefore, the slope is not significantly different from 0.
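As a quick check of this arithmetic, the sketch below recomputes t and the two-tailed p value in Python. It avoids a statistics library by using the closed-form t CDF, which holds only for df = 3:

```python
import math

b, s_b, N = 0.425, 0.305, 5     # slope, its standard error, number of pairs
t = b / s_b                     # about 1.39
df = N - 2                      # 3

# Student's t CDF has a closed form when df = 3:
# F(t) = 1/2 + (1/pi) * [ (t/sqrt(3)) / (1 + t^2/3) + atan(t/sqrt(3)) ]
u = abs(t) / math.sqrt(3)
cdf = 0.5 + (u / (1 + u ** 2) + math.atan(u)) / math.pi
p = 2 * (1 - cdf)               # about 0.26
```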
Confidence Interval for the Slope
The method for computing a confidence interval for the population slope is very similar to the methods for computing other confidence intervals. For the 95% confidence interval, the formula is:

lower limit: b - (t.95)(sb)
upper limit: b + (t.95)(sb)

where t.95 is the value of t to use for the 95% confidence interval.
The values of t to be used in a confidence interval
can be looked up in a table of the t distribution. A small version
of such a table is shown in Table 2. The first column, df, stands
for degrees of freedom.
You can also use the "inverse t distribution" calculator to find the t values to use in a confidence interval.
Applying these formulas to the example data,

lower limit: 0.425 - (3.182)(0.305) = -0.55
upper limit: 0.425 + (3.182)(0.305) = 1.40
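The same interval in Python (3.182 being the t.95 value for df = 3 from the t table):

```python
b, s_b = 0.425, 0.305
t_95 = 3.182                 # t for a 95% CI with df = 3

lower = b - t_95 * s_b       # about -0.55
upper = b + t_95 * s_b       # about  1.40
```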
Significance Test for the Correlation
The formula for a significance test of Pearson's correlation is shown below:

t = r√(N - 2) / √(1 - r²)

where N is the number of pairs of scores. For the example data,

t = 0.627√3 / √(1 - 0.627²) = 1.39

Notice that this is the same t value obtained in the t test of b. As in that test, the degrees of freedom are N - 2 = 3.
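A one-line Python check of this test statistic, using the example r of 0.627:

```python
import math

r, N = 0.627, 5
t = r * math.sqrt(N - 2) / math.sqrt(1 - r ** 2)   # about 1.39, matching the slope test
```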
Confidence Interval for the Correlation
There are several steps in computing a confidence interval on ρ (the population value of Pearson's r). Recall from the chapter on sampling distributions that:
- The sampling distribution of Pearson's r is skewed.
- Fisher's z' transformation of r is normally distributed.
- z' = 0.5 ln[(1 + r)/(1 - r)].
- z' has a standard error of 1/√(N - 3).
The calculation of the confidence interval involves the following steps:
- Convert r to z'. For our example data, the r of 0.627 is transformed to a z' of 0.736. This can be done using the formula above or the r to z' calculator.
- Find the standard error of z' (sz'). For our example, N = 5, so sz' = 1/√(5 - 3) = 0.707.
- Compute the confidence interval in terms of z' using the formulas

lower limit = z' - (z.95)(sz')
upper limit = z' + (z.95)(sz')

For the example,

lower limit = 0.736 - (1.96)(0.707) = -0.650
upper limit = 0.736 + (1.96)(0.707) = 2.122

- Convert the interval for z' back to Pearson's correlation. This can be done with the r to z' calculator. For the example,

lower limit = -0.57
upper limit = 0.97

The interval is so wide because the sample size is so small.
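The four steps above fit in a few lines of Python. One detail not stated in the text: the inverse of the z' transformation is the hyperbolic tangent, since z' = artanh(r):

```python
import math

r, N = 0.627, 5

z = 0.5 * math.log((1 + r) / (1 - r))   # Fisher z', about 0.736 (equals math.atanh(r))
sz = 1 / math.sqrt(N - 3)               # standard error of z', about 0.707

lo_z = z - 1.96 * sz                    # about -0.650
hi_z = z + 1.96 * sz                    # about  2.122

# convert the z' limits back to correlations
lo_r = math.tanh(lo_z)                  # about -0.57
hi_r = math.tanh(hi_z)                  # about  0.97
```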
