Tests of Significance

In statistics, tests of significance are methods of reaching a conclusion to reject or support a claim based on sample data. Statistics is the branch of mathematics that deals with the collection and analysis of numerical data, and it underpins research based on statistical surveys. A very common and important term that comes up during any statistical analysis is “significance”.

Statistical significance is important in research not only in mathematics but in many other fields, such as medicine, psychology and biology. There are many methods through which significance can be tested; these are known as significance tests. Let us learn about significance testing in detail.

Definition of Significance Testing

In statistics, it is important to know whether the result of an experiment is significant. To measure significance, there are predefined tests that can be applied. These tests are called tests of significance or, simply, significance tests.

This statistical testing is subject to some degree of error. For some experiments, the researcher is required to define the probability of sampling error in advance. In any test that does not examine the entire population, sampling error exists. Testing for significance is therefore central to statistical research.

The significance level is the threshold at which a result can be accepted as statistically significant; the probability actually computed from the data and compared against this threshold is called the p-value. Larger samples are less prone to chance fluctuation, so sample size plays a vital role in measuring statistical significance. One should use only representative, random samples for significance testing.
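
To see the role of sample size concretely, the simulation below is a toy sketch: the normally distributed data, the half-standard-deviation effect, the 0.05 level and the use of SciPy's two-sample t-test are all illustrative assumptions. It shows how much more reliably a fixed true effect is detected as the sample grows.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
alpha, effect, trials = 0.05, 0.5, 2_000

# For each sample size, count how often the test detects the true effect.
for n in (10, 30, 100):
    detected = sum(
        stats.ttest_ind(rng.normal(0, 1, n), rng.normal(effect, 1, n)).pvalue <= alpha
        for _ in range(trials)
    )
    print(f"n = {n:>3}: effect detected in {detected / trials:.0%} of trials")
```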

In short, a significance test tells us how probable it is that a relationship we found is due to random chance, and at what level. Equivalently, it quantifies the error we would make by assuming that the observed relationship really exists.

Tests of Significance in Statistics

Technically speaking, statistical significance refers to the probability of a result of a statistical test or study occurring by chance. The main purpose of statistical research is to find the truth. In this process, the researcher has to ensure the quality of the sample, the accuracy of measurement and the soundness of the measures used, which requires a number of careful steps. The researcher has to determine whether the findings of an experiment reflect a genuine effect or arose merely by chance.

Significance is expressed as a number representing the probability that the result of a study has occurred purely by chance. Statistical significance may be weak or strong, and it does not necessarily indicate practical significance. When a researcher is careless with language in reporting an experiment, the significance may be misinterpreted.

Psychologists and statisticians conventionally look for a probability of 5% or less: if there were no real effect, a result at least this extreme would occur by chance no more than 5% of the time. When the result of an experiment is found to be statistically significant at this level, it means there is at most a 5% probability that such a result would arise by chance alone; it does not mean we are 95% sure the research hypothesis is true.

Process of Significance Testing

Testing for statistical significance proceeds through the following steps (a worked sketch in Python follows the list):

  1. Stating a Hypothesis for Research
  2. Stating a Null Hypothesis
  3. Selecting a Probability of Error Level
  4. Selecting and Computing a Statistical Significance Test
  5. Interpreting the results
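
As a minimal sketch of these five steps, the example below assumes a two-group comparison and SciPy's independent two-sample t-test; the scores, the 0.05 error level and the choice of test are invented purely for illustration.

```python
import numpy as np
from scipy import stats

# 1. Research hypothesis: the two groups have different mean scores.
# 2. Null hypothesis: the two groups have the same mean score.
group_a = np.array([72, 85, 78, 90, 66, 81, 75, 88])  # invented scores
group_b = np.array([68, 74, 70, 79, 65, 72, 71, 77])  # invented scores

# 3. Probability of error level (significance level), chosen in advance.
alpha = 0.05

# 4. Compute the significance test (a two-sided t-test here).
t_stat, p_value = stats.ttest_ind(group_a, group_b)

# 5. Interpret: reject the null hypothesis if p <= alpha.
print(f"t = {t_stat:.3f}, p = {p_value:.4f}")
if p_value <= alpha:
    print("Statistically significant: reject the null hypothesis.")
else:
    print("Not significant: fail to reject the null hypothesis.")
```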

Types of Errors

There are two types of errors:

  • Type I
  • Type II

Type I Error

A Type I error occurs when the researcher concludes that the relationship assumed in the research hypothesis exists when, in reality, it does not. The researcher should have rejected the research hypothesis and accepted the null hypothesis, but the opposite happens: a true null hypothesis is rejected. The probability of committing a Type I error is denoted by alpha (α).

Type II Error

A Type II error is just the opposite of a Type I error. It occurs when the researcher concludes that a relationship does not exist when, in reality, it does. The researcher should have accepted the research hypothesis and rejected the null hypothesis, but does not, and the opposite happens: a false null hypothesis is retained. The probability of committing a Type II error is denoted by beta (β).
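
Both error rates can be estimated empirically. The simulation below is a toy sketch: the normal data, sample size, effect size and trial count are arbitrary assumptions. It runs many t-tests on groups with identical true means, where every rejection is a Type I error, and then on groups whose means truly differ, where every failure to reject is a Type II error.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
alpha, n, trials = 0.05, 30, 10_000

# Type I: both groups share the same true mean, yet we sometimes reject.
false_rejections = sum(
    stats.ttest_ind(rng.normal(0, 1, n), rng.normal(0, 1, n)).pvalue <= alpha
    for _ in range(trials)
)
print(f"Estimated Type I error rate (alpha): {false_rejections / trials:.3f}")

# Type II: the true means differ, yet we sometimes fail to reject.
missed = sum(
    stats.ttest_ind(rng.normal(0, 1, n), rng.normal(0.5, 1, n)).pvalue > alpha
    for _ in range(trials)
)
print(f"Estimated Type II error rate (beta): {missed / trials:.3f}")
```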

Types of Statistical Tests

One-tailed and two-tailed tests are two types of statistical tests used alternatively to compute the statistical significance of a parameter in a given set of data. They are also termed one-sided and two-sided tests.

  • A one-tailed test is used when deviations of the estimated parameter from the assumed benchmark value are considered theoretically possible in only one direction.
  • A two-tailed test should be used when deviations in both directions from the benchmark value are considered theoretically possible.

The word “tail” is used in the names of these tests because the extreme regions of the distribution, in which observations lead to rejection of the null hypothesis, are small and “tail off” to zero, as in the bell curve of the normal distribution. The choice of a one-tailed or two-tailed significance test depends upon the research hypothesis.

Examples

  1. A one-tailed test can be used to test a null hypothesis such as: boys will not score significantly higher marks than girls in Standard 10. Here the null hypothesis implicitly fixes the direction of the difference.
  2. A two-tailed test can be used to test a null hypothesis such as: there is no significant difference in the scores of boys and girls in Standard 10. A SciPy sketch contrasting the two tests follows.
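
The sketch below contrasts the two tests on the example above, using the alternative argument of SciPy's ttest_ind; the score arrays are invented purely for illustration.

```python
import numpy as np
from scipy import stats

boys  = np.array([72, 81, 78, 85, 90, 76, 83, 79])  # invented scores
girls = np.array([64, 70, 72, 74, 78, 69, 75, 71])  # invented scores

# Two-tailed: is there any difference, in either direction?
two_sided = stats.ttest_ind(boys, girls, alternative="two-sided")

# One-tailed: do boys score higher? (direction fixed before seeing the data)
one_sided = stats.ttest_ind(boys, girls, alternative="greater")

print(f"two-tailed p = {two_sided.pvalue:.4f}")
# With the effect in the tested direction, the one-tailed p is half as large.
print(f"one-tailed p = {one_sided.pvalue:.4f}")
```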

What is p-Value Testing?

In the context of the statistical significance of data, the p-value is an important term in hypothesis testing. The p-value is a function of the observed sample results that is used to test a statistical hypothesis. Before the test is performed, a threshold value is chosen, known as the significance level, which is traditionally 1% or 5% and is denoted by α.

When the p-value is smaller than or equal to the significance level (α), the data are considered inconsistent with the assumption that the null hypothesis is true. The null hypothesis is therefore rejected, and the alternative hypothesis is accepted in its place.
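
As a minimal sketch of this decision rule, assuming a one-sample t-test on invented measurements, the example below computes the t statistic by hand, converts it into a two-sided p-value using SciPy's t distribution, and compares it with α.

```python
import numpy as np
from scipy import stats

sample = np.array([5.1, 4.9, 5.3, 5.6, 4.8, 5.2, 5.4, 5.0])  # invented data
mu0 = 5.0      # mean claimed by the null hypothesis
alpha = 0.05   # significance level, chosen before the test

# t statistic: how many standard errors the sample mean lies from mu0.
t_stat = (sample.mean() - mu0) / (sample.std(ddof=1) / np.sqrt(len(sample)))
df = len(sample) - 1

# Two-sided p-value: probability of a |t| at least this extreme under H0.
p_value = 2 * stats.t.sf(abs(t_stat), df)

print(f"t = {t_stat:.3f}, p = {p_value:.4f}")
print("Reject H0" if p_value <= alpha else "Fail to reject H0")
```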

Note that the smaller the p-value, the stronger the significance, since a small p-value indicates that the null hypothesis does not adequately explain the observation. If the p-value is calculated correctly, the test keeps the Type I error rate no greater than the significance level (α). The use of p-values in statistical hypothesis testing is common in a wide variety of areas, such as psychology, sociology, science, economics, social science, biology and criminal justice.
