
Chi-Square Goodness of Fit: Continuous Distribution

The chi-square test is a very simple method for testing the fit of a particular distribution, such as the Poisson, to a sample set of data. It compares the observed frequency distribution in the sample, O, with the expected frequency distribution, E, under a selected distribution whose parameters have been pre-specified or estimated from the sample. The difference between observed and expected, (O−E), is squared to remove negative signs and then divided by the expected frequency in that class or range, giving a standardized measure of the difference between the two distributions. The sum of these standardized differences is then compared to a chi-square distribution with n−1 degrees of freedom, where n is the number of frequency classes used in the calculation. The formula used is thus of the form:

χ² = Σᵢ (Oᵢ − Eᵢ)² / Eᵢ = Σᵢ (Oᵢ − Npᵢ)² / (Npᵢ)
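As a minimal illustration of this formula, the statistic can be computed directly from a set of observed counts and class probabilities. The counts and probabilities below are invented purely for this sketch, which assumes Python with numpy:

import numpy as np

observed = np.array([18, 22, 30, 30])        # O_i: illustrative counts only
probs = np.array([0.2, 0.2, 0.3, 0.3])       # hypothesised class probabilities p_i
expected = probs * observed.sum()            # E_i = N * p_i
chi2_stat = ((observed - expected) ** 2 / expected).sum()
print(chi2_stat)                             # compare with chi-square, df = classes - 1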
The second form of the expression shows the expected values, Eᵢ, as being the theoretical probability that an event falls into class i, pᵢ, times the total number of events, N. This expression arises as an approximation to the multinomial distribution, as described previously. It is simplest to see if there are only two classes, in which case the distribution is Binomial with probabilities p and (1−p). The mean of the Binomial is Np and the variance is Np(1−p), so a standardized measure, where X is the observed count in the first class, is of the form:

z = (X − Np) / √(Np(1−p))
If p is not too small (typically >0.1) and N is also not too small, the Binomial can be approximated by the Normal distribution, and the expression for z above (which is the usual form for testing the significance of a proportion) is approximately a Normal variate. Since the square of a Normal variate is distributed as a chi-square, the first part of the link between the chi-square test and the Binomial can be seen. The second step is to consider the calculation of the chi-square test for the Binomial case, for which the observed values (Oᵢ) are X and N−X and the expected values (Eᵢ) are Np and N(1−p). Putting these into the chi-square formula above yields the z² expression, hence the two tests are identical in this case.
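Written out, the substitution proceeds as follows:

χ² = (X − Np)² / (Np) + ((N − X) − N(1−p))² / (N(1−p))
   = (X − Np)² / (Np) + (Np − X)² / (N(1−p))        since (N − X) − N(1−p) = Np − X
   = (X − Np)² · [1/(Np) + 1/(N(1−p))]
   = (X − Np)² / (Np(1−p))
   = z²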

Example: A simple example serves to illustrate the use of this method. The diagram below shows the number of trees of a particular species in a section of woodland divided into a sampling grid of 25 cells. Each cell is 10 meters × 10 meters and the cell entries are simple counts (the total count is 100). The question posed is: does this pattern of counts suggest a random distribution of this tree species in the wood, or is it more or less clustered than would be expected if the trees were distributed randomly?

3   2   6   2   2
2   4   3   7   3
2   6   6   9   4
5   6   3   5   5
3   7   3   2   0

If the pattern were truly random, each tree might be expected to be located anywhere within the study area. There are 100 trees and 25 cells, so the average density (using this particular grid arrangement) is 4 trees per cell. The probability that a given tree falls in any particular cell is small and the number of trees is quite large, so if we assume the locations are independent and random we can use the Poisson distribution as our model of the expected distribution. The Poisson has only one parameter, the mean, which is estimated as 4 from our sample, so we can use this to create a table of expected frequencies to compare with the observed pattern. For example, there is only one cell with 9 trees, but there are 4 that have 6 trees. The entire table and the calculations of the chi-square statistic are shown below:

Frequency    0     1     2     3     4     5     6     7     8     9     10    TOTALS
Observed     1     0     6     6     2     3     4     2     0     1     0     25
Expected     0.5   1.8   3.7   4.9   4.9   3.9   2.6   1.5   0.7   0.3   0.1   24.93
|O-E|        0.5   1.8   2.3   1.1   2.9   0.9   1.4   0.5   0.7   0.7   0.1   -
(O−E)²/E     0.64  1.83  1.49  0.25  1.70  0.21  0.75  0.18  0.74  1.35  0.13  9.29
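The expected row and the overall statistic can be reproduced with a short script, sketched here assuming Python with numpy and scipy (small differences from the table are due to rounding):

import numpy as np
from scipy.stats import poisson

counts = np.array([3, 2, 6, 2, 2,
                   2, 4, 3, 7, 3,
                   2, 6, 6, 9, 4,
                   5, 6, 3, 5, 5,
                   3, 7, 3, 2, 0])            # the 25 cell counts from the grid
mean = counts.mean()                          # estimated Poisson mean (here 4.0)

ks = np.arange(0, 11)                         # frequency classes 0..10
observed = np.array([(counts == k).sum() for k in ks])
expected = 25 * poisson.pmf(ks, mean)         # E_i = 25 * P(count = k)

chi2_stat = ((observed - expected) ** 2 / expected).sum()
print(np.round(expected, 2))                  # matches the Expected row above
print(round(chi2_stat, 2))                    # roughly 9.3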

Note that the observed row sums to 25 (the number of cells), so the expected row is obtained from the individual terms of the Poisson distribution multiplied by 25. We can then sum the last line and compare the total (9.29) with the chi-square distribution with 9 degrees of freedom (df): the number of frequency classes (11, covering counts 0 to 10) minus 1, minus a further 1 because we estimated the mean of the fitted Poisson from the data. However, notice that the contributions from the tail classes (0, 1, 8, 9 and 10) in the last line account for roughly 50% of the total, so the tails are having a disproportionate effect on the sum. For this reason it is recommended that when computing this statistic the theoretical frequency in each category be at least 5 (which may or may not be achievable in a sensible manner). For a situation such as this we would typically group the tail-end classes and reconstruct the table as follows:

Frequency    ≤2     3      4      5      ≥6
Observed     7      6      2      3      7
Expected     5.95   4.88   4.88   3.91   5.30
|O-E|        1.05   1.12   2.88   0.91   1.70
(O−E)²/E     0.18   0.25   1.70   0.21   0.54

The sum of the last row is now 2.9, much smaller than before because the high impact of differences between small values in the distribution tails has been removed. The degrees of freedom are now 3 since there are only 5 frequency classes, and the chi-square distribution with df=3 and upper tail probability of 5% has a critical value of 7.8, so with a computed value of 2.9 we cannot reject the null hypothesis that the distribution is random based on the information provided.
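The grouped calculation and the comparison with the 5% critical value can be sketched in the same way (again assuming Python with numpy and scipy):

import numpy as np
from scipy.stats import poisson, chi2

ks = np.arange(0, 11)
observed = np.array([1, 0, 6, 6, 2, 3, 4, 2, 0, 1, 0])   # from the first table
expected = 25 * poisson.pmf(ks, 4.0)

# Group the tails: counts of 2 or fewer, and counts of 6 or more
obs_g = np.array([observed[:3].sum(), observed[3], observed[4],
                  observed[5], observed[6:].sum()])
exp_g = np.array([expected[:3].sum(), expected[3], expected[4],
                  expected[5], expected[6:].sum()])

stat = ((obs_g - exp_g) ** 2 / exp_g).sum()    # roughly 2.9
df = len(obs_g) - 2                            # 5 classes, minus 1, minus 1 estimated parameter
critical = chi2.ppf(0.95, df)                  # roughly 7.8
print(round(stat, 2), df, round(critical, 2))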

This kind of test can be applied to other distributions, such as the Normal, but because the Normal is a continuous distribution a decision has to be made on the class intervals (x-intervals) to be used. This is typically based on inspection of the observed data and how these were measured. Furthermore, assuming that the parameters of the fitted Normal distribution are obtained from the mean and variance of the sample dataset, two extra degrees of freedom (rather than one in the case of the Poisson) must be allowed for, so df = #classes − 3. If the goodness-of-fit test fails, it may be useful to apply a data transform to the sample and then re-test the transformed set, with the fitted distribution now having revised parameters based on the transformed dataset.
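As a sketch of the same procedure for a fitted Normal, again assuming Python with numpy and scipy (the sample below is invented purely for illustration):

import numpy as np
from scipy.stats import norm, chi2

rng = np.random.default_rng(0)
sample = rng.normal(loc=10.0, scale=2.0, size=200)     # illustrative data only

mu, sigma = sample.mean(), sample.std(ddof=1)          # parameters estimated from the sample

# Choose class intervals: 7 interior cut points give 8 classes with open-ended tails
cuts = np.quantile(sample, np.linspace(0, 1, 9)[1:-1])
observed = np.bincount(np.searchsorted(cuts, sample), minlength=8)
probs = np.diff(np.concatenate(([0.0], norm.cdf(cuts, mu, sigma), [1.0])))
expected = probs * sample.size

stat = ((observed - expected) ** 2 / expected).sum()
df = observed.size - 3                                 # classes - 1 - 2 estimated parameters
print(round(stat, 2), df, round(chi2.sf(stat, df), 3))

Quantile-based class intervals are just one convenient choice here; in practice the intervals would reflect how the data were measured, as noted above.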


Source: http://www.statsref.com/HTML/chi-square2.html