Inferential Statistics for b and r

Author(s)

David M. Lane

Prerequisites

Sampling Distribution of r, Confidence Interval for r

Learning Objectives
  1. State the assumptions that inferential statistics in regression are based upon
  2. Identify heteroscedasticity in a scatterplot
  3. Compute the standard error of a slope
  4. Test a slope for significance
  5. Construct a confidence interval on a slope
  6. Test a correlation for significance
  7. Construct a confidence interval on a correlation

This section shows how to conduct significance tests and compute confidence intervals for the regression slope and Pearson's correlation. As you will see, if the regression slope is significantly different from zero, then the correlation coefficient is also significantly different from zero.

Assumptions

Although no assumptions were needed to determine the best-fitting straight line, assumptions are made in the calculation of inferential statistics. Naturally, these assumptions refer to the population, not the sample.

  1. Linearity: The relationship between the two variables is linear.
  2. Homoscedasticity: The variance around the regression line is the same for all values of X. (A way to spot violations of this assumption in a plot is sketched after this list.)
  3. The errors of prediction are distributed normally. This means that the distributions of deviations from the regression line are normally distributed. It does not mean that X or Y is normally distributed.
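One informal way to check the homoscedasticity assumption is to plot the deviations from the regression line against X and look for a spread that changes with X. The sketch below is only an illustration with made-up data (it assumes Python with NumPy and matplotlib, which the text itself does not use); the data are generated so that the error variance grows with X, producing the fanning-out pattern that signals heteroscedasticity.

```python
import numpy as np
import matplotlib.pyplot as plt

# Hypothetical data (not from the text) whose error variance grows with X,
# so the relationship is heteroscedastic by construction.
rng = np.random.default_rng(0)
X = np.linspace(1, 10, 100)
Y = 2 + 0.5 * X + rng.normal(scale=0.3 * X)

# Fit the least-squares line and examine the deviations from it.
b, a = np.polyfit(X, Y, 1)          # slope, intercept
residuals = Y - (a + b * X)

plt.scatter(X, residuals)
plt.axhline(0, color="gray")
plt.xlabel("X")
plt.ylabel("Residual (deviation from the regression line)")
plt.title("Residuals that fan out as X increases suggest heteroscedasticity")
plt.show()
```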

Significance Test for the Slope (b)

Recall the general formula for a t test:

t = (statistic - hypothesized value)/(estimated standard error of the statistic)

As applied here, the statistic is the sample value of the slope (b) and the hypothesized value is 0. The degrees of freedom for this test are:

df = N-2

where N is the number of pairs of scores.

The estimated standard error of b is computed using the following formula:

sb = sest/√SSX

where sb is the estimated standard error of b and sest is the standard error of the estimate. SSX is the sum of squared deviations of X from the mean of X. It is calculated as

SSX = Σ(X - Mx)²

where Mx is the mean of X. As shown previously, the standard error of the estimate can be calculated as

sest = √[Σ(Y - Y')²/(N-2)]

where Y' is the predicted value of Y.
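Putting these formulas together, the following sketch computes b, sest, SSX, sb, and the t test of the slope. The X and Y values here are made up for illustration (the text's own data set is not reproduced in this section), and Python with NumPy and SciPy is assumed.

```python
import numpy as np
from scipy import stats

# Hypothetical paired scores; substitute any X, Y data of interest.
X = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
Y = np.array([2.1, 3.9, 5.2, 8.1, 9.6])
N = len(X)

# Least-squares slope and intercept
b = np.sum((X - X.mean()) * (Y - Y.mean())) / np.sum((X - X.mean()) ** 2)
a = Y.mean() - b * X.mean()

# Standard error of the estimate, SSX, and the standard error of b
residuals = Y - (a + b * X)
s_est = np.sqrt(np.sum(residuals ** 2) / (N - 2))
SSX = np.sum((X - X.mean()) ** 2)
s_b = s_est / np.sqrt(SSX)

# t test of the slope against a hypothesized value of 0, with N - 2 df
t = (b - 0) / s_b
p = 2 * stats.t.sf(abs(t), df=N - 2)
print(f"b = {b:.3f}, s_b = {s_b:.3f}, t = {t:.2f}, p = {p:.4f}")
```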

Confidence Interval for the Slope

The method for computing a confidence interval for the population slope is very similar to the methods for computing other confidence intervals. For the 95% confidence interval, the formula is:

lower limit: b - (t.95)(sb)
upper limit: b + (t.95)(sb)

where t.95 is the value of t to use for the 95% confidence interval.

The values of t to be used in a confidence interval can be looked up in a table of the t distribution. A small version of such a table is shown in Table 2. The first column, df, stands for degrees of freedom.

Table 2. Abbreviated t table.

df     0.95     0.99
2      4.303    9.925
3      3.182    5.841
4      2.776    4.604
5      2.571    4.032
8      2.306    3.355
10     2.228    3.169
20     2.086    2.845
50     2.009    2.678
100    1.984    2.626

You can also use the "inverse t distribution" calculator to find the t values to use in a confidence interval.

Applying these formulas to the example data (b = 0.425, sb = 0.305, and t.95 = 3.182 for df = N-2 = 3),

lower limit: 0.425 - (3.182)(0.305) = -0.55
upper limit: 0.425 + (3.182)(0.305) = 1.40
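The same interval can be reproduced numerically. The sketch below assumes Python with SciPy and uses the values given above (b = 0.425, sb = 0.305, df = 3), obtaining the critical t from the inverse t distribution rather than from Table 2.

```python
from scipy import stats

# Values from the example: slope, its standard error, and df = N - 2 = 3.
b, s_b, df = 0.425, 0.305, 3

# Critical t for a 95% interval (two-tailed, so the 0.975 quantile).
t_95 = stats.t.ppf(0.975, df)    # about 3.182

lower = b - t_95 * s_b           # about -0.55
upper = b + t_95 * s_b           # about 1.40
print(f"95% CI for the slope: ({lower:.2f}, {upper:.2f})")
```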

Significance Test for the Correlation

The formula for a significance test of Pearson's correlation is shown below:

t = r√(N-2)/√(1-r²)

where N is the number of pairs of scores. For the example data,

t = 0.627√(5-2)/√(1-0.627²) = 1.39

Notice that this is the same t value obtained in the t test of b. As in that test, the degrees of freedom are N-2 = 3.
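As a quick check, the sketch below (Python with NumPy and SciPy assumed) computes this t and its two-tailed p value from the example's r = 0.627 and N = 5.

```python
import numpy as np
from scipy import stats

# Values from the example: r = 0.627 with N = 5 pairs of scores.
r, N = 0.627, 5
df = N - 2

# t test of the correlation against 0
t = r * np.sqrt(df) / np.sqrt(1 - r ** 2)   # about 1.39
p = 2 * stats.t.sf(abs(t), df)
print(f"t = {t:.2f}, df = {df}, p = {p:.3f}")
```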

Confidence Interval for the Correlation

There are several steps in computing a confidence interval on ρ (the population value of Pearson's r). Recall from the chapter on sampling distributions that:

    1. The sampling distribution of Pearson's r is skewed.
    2. Fisher's z' transformation of r is normal.
    3. z' = 0.5 ln[(1+r)/(1-r)].
    4. z' has a standard error of 1/√(N-3).

The calculation of the confidence interval involves the following steps:

    1. Convert r to z'. For our example data, the r of 0.627 is transformed to a z' of 0.736. This can be done using the formula above or the r to z' calculator.
    2. Find the standard error of z' (sz'). For our example, N = 5, so sz' = 1/√(5-3) = 0.707.
    3. Compute the confidence interval in terms of z' using the formula
      lower limit = z' - (z.95)(sz')
      upper limit = z' + (z.95)(sz')

      For the example,

      lower limit = 0.736 - (1.96)(0.707) = -0.650
      upper limit = 0.736 + (1.96)(0.707) = 2.122

    4. Convert the interval for z' back to Pearson's correlation. This can be done with the r to z' calculator.

      For the example,

      lower limit = -0.57
      upper limit = 0.97

      The interval is so wide because the sample size is so small.
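The steps above can also be carried out numerically. The sketch below (Python with NumPy assumed) reproduces the example's interval using Fisher's z' transformation and its inverse (the hyperbolic tangent), in place of the r to z' calculator.

```python
import numpy as np

# Values from the example: r = 0.627 with N = 5 pairs of scores.
r, N = 0.627, 5

# Step 1: convert r to z' (Fisher's transformation; same as np.arctanh(r)).
z = 0.5 * np.log((1 + r) / (1 - r))   # about 0.736

# Step 2: standard error of z'.
s_z = 1 / np.sqrt(N - 3)              # about 0.707

# Step 3: 95% interval in z' units (1.96 is the normal critical value).
lower_z = z - 1.96 * s_z              # about -0.650
upper_z = z + 1.96 * s_z              # about 2.122

# Step 4: convert the limits back to Pearson's r.
lower_r = np.tanh(lower_z)            # about -0.57
upper_r = np.tanh(upper_z)            # about 0.97
print(f"95% CI for the population correlation: ({lower_r:.2f}, {upper_r:.2f})")
```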