Type I and II Errors

Prerequisites: Introduction to Hypothesis Testing, Statistical Significance

Learning Objectives
- Define Type I and Type II errors
- Interpret significant and non-significant differences
- Explain why the null hypothesis should not be rejected when the effect
is not significant
In the Physicians'
Reactions case study, the probability
value associated with the significance test is 0.0057. Therefore,
the null hypothesis was rejected,
and it was concluded that physicians intend to spend less time
with obese patients. Despite the low probability value, it is
possible that the null hypothesis of no true difference between
obese and average-weight patients is true and that the large difference
between sample means occurred by chance. If this is the case,
then the conclusion that physicians intend to spend less time
with obese patients is in error. This type of error is called
a Type I error. More generally, a Type
I error occurs when a significance test results in the rejection
of a true null hypothesis.
By one common convention, if the probability value
is below 0.05 then the null hypothesis is rejected. Another
convention, although slightly less common, is to reject the
null hypothesis if the probability value is below 0.01. The
threshold for rejecting the null hypothesis is called the α level
or simply α. It is also called the significance
level. As discussed in the introduction to hypothesis
testing, it is better to interpret the probability value
as an indication of the weight of evidence against the null
hypothesis than as part of a decision rule for making a reject
or do-not-reject decision. Therefore, keep in mind that rejecting
the null hypothesis is not an all-or-nothing decision.
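As a concrete illustration of how such a decision rule operates, the short
sketch below compares the Physicians' Reactions probability value of 0.0057
with both conventional α levels. The helper function is a hypothetical name
used for illustration, not a standard library routine.

```python
# Minimal sketch of an alpha-level decision rule (illustrative only).
# The p-value of 0.0057 comes from the Physicians' Reactions case study;
# reject_null is a hypothetical helper, not a standard API.

def reject_null(p_value: float, alpha: float) -> bool:
    """Return True if the null hypothesis would be rejected at level alpha."""
    return p_value < alpha

p = 0.0057
for alpha in (0.05, 0.01):
    decision = "reject" if reject_null(p, alpha) else "do not reject"
    print(f"alpha = {alpha}: p = {p} -> {decision} the null hypothesis")

# At both conventional levels, p = 0.0057 leads to rejection, though the
# p-value is better read as weight of evidence than as a hard cutoff.
```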
The Type I error rate is affected by the α
level: the lower the α level, the lower the Type I error
rate. It might seem that α is the probability of a Type
I error. However, this is not correct. Instead, α is the
probability of a Type I error given that the null hypothesis is
true. If the null hypothesis is false, then it is impossible to
make a Type I error.
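This distinction can be made concrete with a small simulation, sketched below
under assumed settings (normal populations, samples of 30, α = 0.05; none of
these values come from the case study). Because both samples are drawn from
the same population, the null hypothesis is true by construction, so the
long-run rejection rate is the Type I error rate and should come out close
to α.

```python
# Simulation sketch: when the null hypothesis is true, the proportion of
# significance tests that (wrongly) reject approximates alpha.
# All settings here are illustrative assumptions.
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(0)
alpha = 0.05
n_tests = 10_000
false_rejections = 0

for _ in range(n_tests):
    # Both samples come from the same population, so H0 is true.
    group1 = rng.normal(loc=0.0, scale=1.0, size=30)
    group2 = rng.normal(loc=0.0, scale=1.0, size=30)
    if ttest_ind(group1, group2).pvalue < alpha:
        false_rejections += 1  # rejecting a true null: a Type I error

print(f"Observed Type I error rate: {false_rejections / n_tests:.3f} "
      f"(alpha = {alpha})")
```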
The second type of error that can be made in significance
testing is failing to reject a false null hypothesis. This kind
of error is called a Type II error.
Unlike a Type I error, a Type II error is not really an error.
When a statistical test is not significant, it means that the
data do not provide strong evidence that the null hypothesis is
false. Lack of significance does not support the conclusion that
the null hypothesis is true. Therefore, a researcher would not
make the mistake of incorrectly concluding that the null hypothesis
is true when a statistical test was not significant. Instead,
the researcher would consider the test inconclusive. Contrast
this with a Type I error in which the researcher erroneously concludes
that the null hypothesis is false when, in fact, it is true.
A Type II error can only occur if the null hypothesis
is false. If the null hypothesis is false, then the probability
of a Type II error is called β.
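Restating the two error probabilities in conditional-probability notation
(this is just the prose definitions above, put in symbols):

$$
\alpha = P(\text{reject } H_0 \mid H_0 \text{ true}), \qquad
\beta = P(\text{fail to reject } H_0 \mid H_0 \text{ false})
$$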
The probability of correctly rejecting a false null hypothesis
equals 1 - β and is called power.
Power is covered in detail in another section.
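As a preview, the simulation sketch below estimates β and power directly
under assumed settings (a true difference of 0.5 standard deviations,
samples of 30, α = 0.05; all illustrative choices, not values from the case
study). Here the null hypothesis is false by construction, so every
non-significant result is a Type II error.

```python
# Simulation sketch: estimating beta (the Type II error rate) and
# power = 1 - beta when the null hypothesis is false.
# Effect size and sample sizes are illustrative assumptions.
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(1)
alpha = 0.05
n_tests = 10_000
type2_errors = 0

for _ in range(n_tests):
    # The populations truly differ by 0.5 SD, so H0 is false.
    group1 = rng.normal(loc=0.0, scale=1.0, size=30)
    group2 = rng.normal(loc=0.5, scale=1.0, size=30)
    if ttest_ind(group1, group2).pvalue >= alpha:
        type2_errors += 1  # failing to reject a false null: a Type II error

beta = type2_errors / n_tests
print(f"beta ≈ {beta:.3f}, power = 1 - beta ≈ {1 - beta:.3f}")
```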