The statistical practice of
hypothesis testing is widespread not only in statistics but also throughout the
natural and social sciences. When we conduct a hypothesis test, there are a couple of things that could go wrong. There are two kinds of errors which, by design, cannot be avoided, and we must be aware that these errors exist. The errors are given the quite pedestrian names of type I and type II errors.
The first kind of error involves the rejection of a null hypothesis that is actually true; this is called a type I error, or sometimes an error of the first kind. The other kind of error occurs when we do not reject a null hypothesis that is false; this is called a type II error, or an error of the second kind. The differences between the two are summarized below:
1) Hypothesis: The null hypothesis is related to a type I error, and the alternative hypothesis is related to a type II error. In statistical hypothesis testing, a type I error is the rejection of a true null hypothesis, while a type II error is the non-rejection of a false null hypothesis.
2) Occurrence: A type I error, also known as an error of the first kind, occurs when the null hypothesis (H0) is true but is rejected. A type II error, also known as an error of the second kind, occurs when the null hypothesis is false but erroneously fails to be rejected; in other words, a hypothesis that should have been rejected is accepted.
3) Effect: Type I errors are equivalent to false positives; type II errors are equivalent to false negatives.
4) Decision based on belief: A type I error occurs when we believe a falsehood; a type II error is committed when we fail to believe a truth.
5) Control: Type I errors can be controlled. The value of alpha, which is related to the level of significance we select, has a direct bearing on type I errors.
6) Rate of error: The rate of the type I error is called the size of the test and is denoted by the Greek letter α (alpha); it usually equals the significance level of the test. If the type I error rate is fixed at 5%, it means that there are about 5 chances in 100 that we will reject H0 when H0 is true. The probability of a type II error is denoted by the Greek letter β (beta); this number is related to the power, or sensitivity, of the hypothesis test, which is given by 1 − β (see the simulation sketch after this list).
7) Error avoidance: Type I and type II errors are part of the process of hypothesis testing. Although the errors cannot be completely eliminated, we can minimize one type of error. For a fixed sample size, however, if we try to minimize one, the other increases; the two are inversely related, as the sketch after this list illustrates.
8) Testing used: Prescriptive testing is used to increase the level of confidence, which in turn reduces type I errors. Descriptive testing is used to better describe the test condition and acceptance criteria, which in turn reduces type II errors.
9) Level of confidence: The chances of making a type I error are reduced by increasing the level of confidence.
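To make the quantities in points 6 and 7 concrete, here is a minimal simulation sketch in Python. The sample size of 30, the true mean of 0.5 under the alternative, and the use of scipy.stats.ttest_1samp for a one-sample t-test of H0: mu = 0 are illustrative assumptions, not part of the source material. Repeating the test on data generated with H0 true estimates alpha; repeating it on data generated with H0 false estimates beta and the power 1 − β; rerunning with smaller values of alpha shows the inverse relationship described in point 7.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    n, n_sim = 30, 10_000   # sample size and number of simulated tests (illustrative choices)

    def rejection_rate(true_mean, alpha):
        """Fraction of simulated one-sample t-tests of H0: mu = 0 that reject H0."""
        rejections = 0
        for _ in range(n_sim):
            sample = rng.normal(loc=true_mean, scale=1.0, size=n)
            _, p_value = stats.ttest_1samp(sample, popmean=0.0)
            rejections += p_value < alpha
        return rejections / n_sim

    # Point 6: alpha is the type I error rate, beta the type II error rate, power = 1 - beta.
    alpha = 0.05
    type_i = rejection_rate(true_mean=0.0, alpha=alpha)  # H0 true: a rejection is a type I error
    power = rejection_rate(true_mean=0.5, alpha=alpha)   # H0 false: a non-rejection is a type II error
    print(f"estimated type I error rate: {type_i:.3f} (nominal alpha = {alpha})")
    print(f"estimated type II error rate (beta): {1 - power:.3f}, power: {power:.3f}")

    # Point 7: for a fixed sample size, lowering alpha reduces type I errors
    # but raises beta (reduces power) -- the two move in opposite directions.
    for a in (0.10, 0.05, 0.01):
        print(f"alpha = {a:.2f}: type I ~ {rejection_rate(0.0, a):.3f}, "
              f"beta ~ {1 - rejection_rate(0.5, a):.3f}")

With these settings, the estimated type I error rate should land near the nominal 0.05, and the final loop should show beta growing as alpha shrinks.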
Many statisticians are now adopting a third type of error, a type III error, which occurs when the null hypothesis is rejected for the wrong reason. In an experiment, a researcher might assume a hypothesis and perform the research. After analyzing the results statistically, the null hypothesis is rejected. The problem is that there may be some relationship between the variables, but it could exist for a different reason than the one stated in the hypothesis; an unknown process may underlie the relationship. (Source: Reference material in Research Methodology in Social Science and search engines)