Monday, March 12, 2007

TYPE I AND TYPE II ERRORS

In an experimental setting where a statistical test of a hypothesis is conducted, one may either reject the null hypothesis ‘Ho’ or fail to reject it. Suppose the true population parameter we are testing is X. Our null hypothesis may then be Ho: X = Xo.

The probability of rejecting the null hypothesis when it is true can be defined by:

Alpha = P(reject Ho | X = Xo)

Rejecting the null hypothesis when it is actually true is referred to in statistics as a type I error. Alpha is therefore the probability of committing a type I error. In basic statistics, when you conduct a t-test (i.e., reject Ho if t > t-critical) at the 5% level of significance, you are accepting a 5% chance of committing a type I error.
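
To make this concrete, here is a minimal simulation sketch (in Python, assuming numpy and scipy are available; the mean, standard deviation, and sample size are illustrative choices, not values from the text). It draws repeated samples from a population where Ho is true and counts how often a t-test at the 5% level rejects; the empirical rejection rate should land near alpha = 0.05.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
alpha = 0.05          # significance level
mu_0 = 10.0           # hypothesized (and here, true) population mean
n, trials = 30, 10_000

rejections = 0
for _ in range(trials):
    # Ho is true by construction: samples come from a population with mean mu_0
    sample = rng.normal(loc=mu_0, scale=2.0, size=n)
    # two-sided one-sample t-test of Ho: mean = mu_0
    _, p_value = stats.ttest_1samp(sample, popmean=mu_0)
    rejections += p_value < alpha

print(f"Empirical type I error rate: {rejections / trials:.3f}")  # ~0.05
```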

Of course, you may instead fail to reject Ho or, loosely speaking, ‘accept’ Ho.

The probability of ‘accepting’ Ho when it is false (say, when the true parameter equals some alternative value Xa) can be defined by:

Beta = P(accept Ho | X = Xa)

Speaking heuristically, accepting a false Ho is referred to as a type II error. Beta is therefore the probability of committing a type II error.
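
A companion sketch, under the same illustrative assumptions as above, estimates beta: this time the samples are drawn from a hypothetical alternative mean mu_a (so Ho is false), and we count how often the test fails to reject.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
alpha, mu_0, mu_a = 0.05, 10.0, 10.5   # mu_a is an arbitrary alternative
n, trials = 30, 10_000

failures_to_reject = 0
for _ in range(trials):
    # Ho is false by construction: the true mean is mu_a, not mu_0
    sample = rng.normal(loc=mu_a, scale=2.0, size=n)
    _, p_value = stats.ttest_1samp(sample, popmean=mu_0)
    failures_to_reject += p_value >= alpha   # 'accepting' a false Ho

print(f"Empirical beta (type II error rate): {failures_to_reject / trials:.3f}")
```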

It turns out that as you make a test more stringent by lowering the significance level alpha (thereby decreasing the probability of a type I error), the probability of a type II error (beta) increases, holding the sample size fixed. This trade-off is the theoretical basis for ‘type II error bias.’
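
One way to see this trade-off numerically is to re-run the beta simulation above at several significance levels; with the alternative and sample size held fixed, a smaller alpha should produce a larger estimated beta. Again a rough sketch, using the same hypothetical population values.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
mu_0, mu_a, n, trials = 10.0, 10.5, 30, 5_000

# draw all trials at once; Ho is false (true mean is mu_a)
samples = rng.normal(loc=mu_a, scale=2.0, size=(trials, n))
p_values = stats.ttest_1samp(samples, popmean=mu_0, axis=1).pvalue

for alpha in (0.10, 0.05, 0.01):
    beta = np.mean(p_values >= alpha)   # share of trials failing to reject
    print(f"alpha = {alpha:.2f} -> estimated beta = {beta:.3f}")
```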
