ANOVA (Analysis of Variance)

Many people carry the impression that ANOVA is meant for testing differences in variances. However, contrary to what its name suggests, ANOVA is used for testing differences in means, not in variances.

The basic concept behind ANOVA is that if the means of the treatments (or factors) differ, the variance between treatments will be larger than the variance within treatments. The ratio of two variances follows an F distribution. So ANOVA essentially involves testing the ratio of the variance between treatments (the influence of the treatment) to the variance within treatments (the error component) using an F test.
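As a minimal sketch of this idea, the snippet below computes the between-treatment and within-treatment mean squares for a one-way layout and forms their ratio. The data values are invented purely for illustration.

```python
# Hypothetical data: responses under three treatments (values invented)
groups = [
    [20.0, 21.5, 19.8, 20.7],  # treatment A
    [23.1, 24.0, 22.6, 23.5],  # treatment B
    [19.5, 18.9, 20.2, 19.1],  # treatment C
]

k = len(groups)                           # number of treatments
n = sum(len(g) for g in groups)           # total number of observations
grand_mean = sum(sum(g) for g in groups) / n

# Variance between: spread of the treatment means around the grand mean
ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2 for g in groups)
ms_between = ss_between / (k - 1)

# Variance within: pooled spread of observations around their own treatment mean
ss_within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g) for g in groups)
ms_within = ss_within / (n - k)

f_ratio = ms_between / ms_within
print(f"F = {f_ratio:.2f}")
```

A large F ratio (here the treatment means clearly differ, so F comes out well above 1) indicates that the between-treatment variation dominates the error variation.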

Variances can be expressed in terms of sums of squares (SS). Consider an ANOVA with three factors A, B and C. In this case the total sum of squares is partitioned into the SS due to the main effects of the factors, the interaction effects and the error:

SS Total = SS A + SS B + SS C + SS AB + SS AC + SS BC + SS ABC + SS Error
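The same partition holds in the simplest one-way case, where the total SS splits exactly into SS between treatments plus SS within treatments (the error). A quick numerical check with invented data:

```python
# Invented one-way data: SS Total should equal SS Between + SS Within exactly
groups = [[5.0, 6.0, 7.0], [8.0, 9.0, 10.0], [2.0, 3.0, 4.0]]
all_obs = [x for g in groups for x in g]
grand_mean = sum(all_obs) / len(all_obs)

# Total SS: every observation's squared deviation from the grand mean
ss_total = sum((x - grand_mean) ** 2 for x in all_obs)

# Between SS: squared deviations of group means from the grand mean
ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2 for g in groups)

# Within SS: squared deviations of observations from their group mean
ss_within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g) for g in groups)

print(ss_total, ss_between + ss_within)  # the two should match
```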

For ANOVA it is assumed that:
  • Variances are homogeneous across treatments
  • Data within each treatment is normally distributed
  • Observations are independent

Conceptually speaking, ANOVA is a test of hypothesis where the null hypothesis (Ho) is that the means of the treatments are equal, which means the alternative hypothesis (H1) is that not all means are equal.

Ho : Mu 1 = Mu 2 = ... = Mu k (k treatments)
H1 : Not all Mu are equal

If the ratio of the variance between treatments to the variance within treatments is small, we conclude that the differences between the treatment means are not significant, i.e. there is not enough evidence that the treatment means differ.

The F ratio is compared with the critical value of F from the F distribution table. If the F ratio exceeds the critical value, Ho is rejected and H1 is accepted. The critical value depends on the level of significance (the alpha value, generally kept at 0.05, i.e. 5%) and on the degrees of freedom between and within treatments.
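Putting the whole test together: the sketch below computes the F ratio for invented one-way data (3 treatments, 12 observations, so 2 and 9 degrees of freedom) and compares it with the 0.05 critical value read from a standard F table (approximately 4.26 for F(2, 9)).

```python
# Invented data for a one-way ANOVA decision
groups = [
    [20.0, 21.5, 19.8, 20.7],
    [23.1, 24.0, 22.6, 23.5],
    [19.5, 18.9, 20.2, 19.1],
]
k = len(groups)
n = sum(len(g) for g in groups)
grand_mean = sum(sum(g) for g in groups) / n

ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2 for g in groups)
ss_within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g) for g in groups)
f_ratio = (ss_between / (k - 1)) / (ss_within / (n - k))

F_CRIT = 4.26  # F(alpha=0.05, df1=k-1=2, df2=n-k=9) from a standard F table
decision = "reject Ho" if f_ratio > F_CRIT else "fail to reject Ho"
print(f"F = {f_ratio:.2f}, critical value = {F_CRIT}: {decision}")
```

With these invented values the F ratio far exceeds the critical value, so Ho (equal means) is rejected at the 5% level.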
