Whenever data are analyzed, the goal is to draw the best possible conclusion from the data provided. Statistical analysis helps you draw valid conclusions.

Statistical analysis becomes most useful when the differences you are looking for are small.

In any experiment there is some population of interest. It can be large or small. When it is large, statistics works by taking a sample from the total population and analyzing that sample to make inferences about the whole population. The logic of statistics assumes that the sample is randomly selected from the population, so that results from the sample extrapolate to the population. This works well in quality control, but in scientific data the sample is often not truly random, and that can cause problems. It is also not enough that the data are sampled from a population: statistical tests further assume that each experimental unit is sampled independently of the rest.

Consider an experiment in which the means of two samples are measured. Assume that the means are different.

There are two possibilities. Either the populations truly have different means, or the populations have the same mean and the difference seen is a coincidence of random sampling. This is where the p-value comes in.

The probability p will always range from 0 to 1.

This value answers the question above: if the populations really have the same mean, what is the probability that random sampling would lead to a difference between sample means as large as (or larger than) the one you observed?

Once a threshold p-value has been set for statistical significance, every result will be either significant or not significant.

The symbol p is used to indicate this calculated probability.

- Before starting the experiment, set a threshold p-value.
- Set up both the null and the alternative hypothesis.
- Select a test statistic to calculate from the data.
- Calculate the selected test statistic from the data.
- Compute the p-value.
- Compare the p-value to the threshold value.
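The steps above can be sketched with a two-sample permutation test, which compares two sample means without distributional assumptions. This is an illustrative implementation, not a method prescribed by the text; the data, function name, and threshold below are hypothetical.

```python
import random
import statistics

def permutation_test(sample_a, sample_b, n_permutations=10_000, seed=0):
    """Two-sample permutation test for a difference in means.

    The p-value is the fraction of random label shufflings whose absolute
    difference in means is at least as large as the observed difference.
    """
    rng = random.Random(seed)
    observed = abs(statistics.mean(sample_a) - statistics.mean(sample_b))
    pooled = list(sample_a) + list(sample_b)
    n_a = len(sample_a)
    count = 0
    for _ in range(n_permutations):
        rng.shuffle(pooled)  # reassign group labels at random (H0: no difference)
        diff = abs(statistics.mean(pooled[:n_a]) - statistics.mean(pooled[n_a:]))
        if diff >= observed:
            count += 1
    return count / n_permutations

alpha = 0.05                    # threshold chosen before the experiment
a = [5.1, 4.9, 5.3, 5.0, 5.2]   # hypothetical sample from population A
b = [5.8, 6.0, 5.9, 6.1, 5.7]   # hypothetical sample from population B
p = permutation_test(a, b)
print(p, "significant" if p < alpha else "not significant")
```

Because the two hypothetical samples do not overlap at all, almost no shuffling reproduces a difference this large, so the p-value comes out well below the 0.05 threshold.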

If the p-value is less than the threshold, you "reject the null hypothesis" and the difference is "statistically significant".

If the p-value is greater than the threshold, you "do not reject the null hypothesis" and the difference is "not statistically significant".

In other words, a result is taken to be statistically significant only when the p-value is less than the preset threshold value.

The word significant here has a meaning distinct from its everyday meaning. A statistically significant difference is not necessarily important, and a result that is not statistically significant may still be very important.

If a result is statistically significant, there can be two possible explanations:

- The populations are identical, so there really is no difference. Obtaining a statistically significant result when the populations are identical is making a Type I error.
- The populations really are different, so the conclusion is correct.

If a result is not statistically significant, there can be two possible explanations:

- The populations are identical, so there really is no difference. Your conclusion of no significant difference is correct.
- The populations really are different. Obtaining a not statistically significant result when the populations are different is making a Type II error.
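A Type I error can be made concrete with a simulation. The sketch below (hypothetical coin-flip data with an exact two-sided binomial test) repeats many experiments in which the null hypothesis is true, and counts how often the result is nonetheless "significant"; the long-run rate of such false positives stays near or below the 0.05 threshold.

```python
import math
import random

def binom_two_sided_p(k, n, p0=0.5):
    """Exact two-sided binomial p-value: the total probability, under
    H0 (success probability p0), of all outcomes no more likely than k."""
    pmf = [math.comb(n, i) * p0**i * (1 - p0)**(n - i) for i in range(n + 1)]
    return sum(pr for pr in pmf if pr <= pmf[k] * (1 + 1e-12))

rng = random.Random(1)
alpha = 0.05
n_experiments = 2000
false_positives = 0
for _ in range(n_experiments):
    heads = sum(rng.random() < 0.5 for _ in range(100))  # fair coin: H0 is true
    if binom_two_sided_p(heads, 100) < alpha:
        false_positives += 1  # "significant" despite H0 being true: Type I error
rate = false_positives / n_experiments
print(rate)
```

With a discrete test statistic the achieved Type I error rate is somewhat below alpha, but the principle holds: the threshold you preset is the rate at which you are willing to make Type I errors.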

| p-value | Interpretation |
| --- | --- |
| p < 0.01 | very strong evidence against $H_{0}$ |
| 0.01 $\leq$ p < 0.05 | moderate evidence against $H_{0}$ |
| 0.05 $\leq$ p < 0.10 | suggestive evidence against $H_{0}$ |
| 0.10 $\leq$ p | little or no real evidence against $H_{0}$ |
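The interpretation table translates directly into code. The function below is a hypothetical helper, not part of any library; it simply maps a p-value to the evidence categories listed above.

```python
def evidence_against_h0(p):
    """Map a p-value to the evidence categories in the table above."""
    if not 0.0 <= p <= 1.0:
        raise ValueError("a p-value must lie between 0 and 1")
    if p < 0.01:
        return "very strong evidence against H0"
    if p < 0.05:
        return "moderate evidence against H0"
    if p < 0.10:
        return "suggestive evidence against H0"
    return "little or no real evidence against H0"

print(evidence_against_h0(0.003))  # → very strong evidence against H0
```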

The p-value for the chi-square test is

$P(\chi^{2} \geq X^{2})$: the probability of observing a value at least as extreme as the test statistic $X^{2}$ under a chi-square distribution with $(r-1)(c-1)$ degrees of freedom, where $r$ and $c$ are the numbers of rows and columns in the contingency table.
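A minimal sketch of this calculation for a hypothetical 2x2 contingency table, so $(r-1)(c-1) = 1$. With one degree of freedom the chi-square statistic is the square of a standard normal variable, so the upper-tail probability can be computed with the standard library's `math.erfc`; the table values and function name are illustrative assumptions.

```python
import math

def chi_square_2x2(table):
    """Pearson chi-square test for a 2x2 contingency table.

    With 1 degree of freedom, chi-square is the square of a standard
    normal, so P(chi2 >= x) = erfc(sqrt(x / 2)).
    """
    (a, b), (c, d) = table
    n = a + b + c + d
    row = [a + b, c + d]
    col = [a + c, b + d]
    x2 = 0.0
    for i, obs_row in enumerate(table):
        for j, obs in enumerate(obs_row):
            expected = row[i] * col[j] / n  # expected count under independence
            x2 += (obs - expected) ** 2 / expected
    p = math.erfc(math.sqrt(x2 / 2))  # P(chi2 >= x2), df = 1
    return x2, p

x2, p = chi_square_2x2([(30, 10), (15, 25)])
print(round(x2, 3), round(p, 4))
```

For tables with more rows or columns the degrees of freedom exceed one, and a general chi-square survival function (for example from a statistics library) would be needed in place of the `erfc` shortcut.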