Non-parametric tests are statistical tests that do not require assumptions about the underlying population. They do not rely on the data belonging to any particular parametric family of probability distributions. Non-parametric methods are also called distribution-free tests, since they make no assumptions about the population distribution.

## What is Non-parametric Test?

Non-parametric tests are the mathematical methods used in statistical hypothesis testing that do not make assumptions about the frequency distribution of the variables being evaluated. Non-parametric methods are used when the data are skewed, and they comprise techniques that do not depend on the data following any particular distribution.

The word non-parametric does not mean that these models have no parameters. Rather, the number and nature of the parameters are flexible and not fixed in advance. This is why these models are called distribution-free models.

### Non-Parametric T-Test

Whenever some assumptions about the given population are uncertain, we use non-parametric tests, which serve as counterparts to the parametric tests. When data are not normally distributed, or when they are on an ordinal level of measurement, we have to use non-parametric tests for the analysis. The basic rule is to use a parametric t-test for normally distributed data and a non-parametric test for skewed data.
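As a sketch of this rule in practice, the Mann-Whitney test (listed among the non-parametric tests below) is a common non-parametric counterpart of the two-sample t-test for skewed data. The sample values here are invented purely for illustration:

```python
# Hypothetical example: two independent samples, one of them skewed,
# compared with the Mann-Whitney test instead of a parametric t-test.
from scipy.stats import mannwhitneyu

group_a = [12, 15, 14, 10, 8, 30, 45]   # made-up skewed sample
group_b = [9, 11, 7, 6, 10, 12, 8]      # made-up comparison sample

stat, p_value = mannwhitneyu(group_a, group_b, alternative="two-sided")
print(f"U = {stat}, p = {p_value:.4f}")
```

A small p-value would suggest the two groups differ, without any assumption of normality.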

### Non-Parametric Paired T-Test

The paired sample t-test is used to compare two mean scores that come from the same group. A paired test is used when the variable has two levels and those levels are repeated measures on the same subjects. When the paired t-test's assumptions are not met, a non-parametric paired test is used instead.
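A standard non-parametric paired test is the Wilcoxon signed-rank test. The before/after scores below are invented for illustration, assuming the same subjects were measured twice:

```python
# Hypothetical paired (repeated-measures) data: the same 8 subjects
# measured before and after some treatment; values are made up.
from scipy.stats import wilcoxon

before = [88, 92, 75, 80, 95, 70, 85, 90]
after = [84, 90, 78, 76, 94, 65, 80, 91]

# Wilcoxon signed-rank test: non-parametric counterpart of the paired t-test
stat, p_value = wilcoxon(before, after)
print(f"W = {stat}, p = {p_value:.4f}")
```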

## Types of Non-Parametric Tests

The important non-parametric tests are:

- Kruskal-Wallis Test
- Friedman Test
- 1-Sample Sign Test
- Mood's Median Test
- Spearman Rank Correlation
- Mann-Kendall Trend Test
- Mann-Whitney Test

Here, we are going to discuss two of these non-parametric tests, the Sign Test and the Kruskal-Wallis Test, in detail.

### Kruskal-Wallis H-Test

The Kruskal-Wallis H test is used to test whether two or more populations are identical. In this test, the null hypothesis is H_{0}: μ_{1} = μ_{2} = μ_{3} (when there are three populations), and the alternative hypothesis is H_{1}: the μ_{i} are not all equal. In the Kruskal-Wallis test, we first rank all the observations pooled across the samples and then determine the rank sum for each sample. To calculate the test statistic, we use the formula:

**H = [12 / (n(n + 1))] ∑_{i=1}^{m} (R_{i}^{2} / n_{i}) – 3(n + 1)**

where,

m is the number of comparison groups,

n is the total sample size

n_{i} represents the number of observations in the i^{th} sample,

R_{i} denotes the rank sum of the i^{th} sample.

Here, we have to use the χ^{2} distribution with m – 1 degrees of freedom (df) and an α level of significance to determine the critical value. If the calculated value of H is less than the critical χ^{2} value, then the null hypothesis is accepted; otherwise, it is rejected.
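The rank-sum calculation and χ² comparison above can be sketched as follows. The three samples are invented for illustration; `scipy.stats.kruskal` computes the H statistic, and `chi2.ppf` gives the critical value at α = 0.05 with m – 1 degrees of freedom:

```python
# Hypothetical example: m = 3 samples of made-up observations.
from scipy.stats import kruskal, chi2

sample1 = [27, 2, 4, 18, 7, 9]
sample2 = [20, 8, 14, 36, 21, 22]
sample3 = [34, 31, 3, 23, 30, 6]

H, p_value = kruskal(sample1, sample2, sample3)

m = 3                                  # number of comparison groups
critical = chi2.ppf(1 - 0.05, df=m - 1)  # chi-square critical value, m - 1 df

# Accept H0 if H is below the critical value, otherwise reject it
print(f"H = {H:.3f}, critical = {critical:.3f}, reject H0: {H > critical}")
```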

### Sign Test Statistics

The sign test is conducted under the following conditions.

- When we need to compare paired data
- The paired data obtained from similar conditions
- No assumptions made about the original population

The Sign Test is based only on the signs (+ or –) of the deviations x – y and not on their magnitudes. The test assumes that zero differences (ties) between the paired observations cannot occur. If zero differences or ties do occur, they are eliminated from the analysis, and the number of paired observations counted is reduced accordingly. This method can also be used to examine individual data.

### Sign Test Assumption

Let (x_{1}, y_{1}), (x_{2}, y_{2}), …, (x_{n}, y_{n}) be paired observations and d_{i} = x_{i} – y_{i} be the differences between the observations, where i = 1, 2, …, n.

- The value of d_{i} can be positive or negative, and all those values are independent.
- Each d_{i} comes from the same continuous population.
- The values x_{i} and y_{i} represent an order, so that the comparisons "greater than", "less than", and "equal to" are meaningful.

Now we take the null hypothesis H_{0}: p = ½ = 0.5 and the alternative hypothesis H_{1}: p ≠ ½.

Let m be the number of positive signs among the d_{i}. Therefore,

**p = m/n and q = 1 – (m/n)**

**Case 1 (n < 30)**

If either np or nq is less than 5, then we use the binomial distribution directly. The probability of obtaining m or more positive signs under H_{0} is

**P = ∑_{k=m}^{n} C(n, k) (½)^{n}**

where

"m" is the number of positive deviations.

If P > α, then we accept the null hypothesis; otherwise, we reject it.
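The small-sample case, including the elimination of ties described earlier, can be sketched with an exact binomial test. The paired differences below are invented for illustration; `scipy.stats.binomtest` computes the binomial tail probability for H_{0}: p = ½:

```python
# Hypothetical paired differences d_i = x_i - y_i; values are made up.
from scipy.stats import binomtest

raw_diffs = [2.1, -0.5, 1.3, 0.0, 3.2, 0.7, -1.1, 2.4, 0.9, 0.0]

diffs = [d for d in raw_diffs if d != 0]  # ties (zero differences) eliminated
m = sum(1 for d in diffs if d > 0)        # number of positive signs
n = len(diffs)                            # reduced number of paired observations

# Exact binomial sign test of H0: p = 1/2
result = binomtest(m, n, p=0.5, alternative="two-sided")
print(f"m = {m}, n = {n}, P = {result.pvalue:.4f}")
```

Here the two zero differences are dropped, so n is reduced from 10 to 8 before the test is applied.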

**Case 2 (n < 30)**:

If both np and nq are greater than 5, then we use the normal distribution as an approximation.

The limits of the acceptance region are given by (p – z_{p}σ_{p}, p + z_{p}σ_{p}), where z_{p} is the value obtained from the standard normal table at an α level of significance. If the value of α is not given, we take α = 0.05.

If p lies within (p – z_{p}σ_{p}, p + z_{p}σ_{p}), we accept the hypothesis; otherwise, we reject it.

**Case 3 (n ≥ 30)**

If n ≥ 30, then we can use the normal approximation with mean = np and standard deviation = √(npq).

Here, a denotes the number of negative observations.
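A minimal sketch of this large-sample normal approximation, with invented counts (n = 40 non-zero differences, of which m = 28 are positive):

```python
# Hypothetical large-sample sign test via the normal approximation.
from math import sqrt
from statistics import NormalDist

n, m = 40, 28        # made-up: 40 non-zero differences, 28 with positive sign
p, q = 0.5, 0.5      # under H0: p = q = 1/2

mean = n * p                 # np
sd = sqrt(n * p * q)         # standard deviation is sqrt(npq), not npq

z = (m - mean) / sd          # standardised count of positive signs
p_value = 2 * (1 - NormalDist().cdf(abs(z)))
print(f"z = {z:.3f}, p = {p_value:.4f}")
```

The same z statistic could equally be built from the count of negative observations, since the two counts sum to n.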

## Advantages and Disadvantages of Non-Parametric Test

The advantages of the non-parametric test are:

- Easily understandable
- Short calculations
- Assumption of distribution is not required
- Applicable to all types of data

The disadvantages of the non-parametric test are:

- Less efficient as compared to parametric tests
- The results may be less precise, because being distribution-free, these tests do not use all the information in the data

## Applications of Non-Parametric Test

The conditions when non-parametric tests are used are listed below:

- When the assumptions of parametric tests are not satisfied.
- When the hypothesis being tested does not concern a population parameter.
- For quick data analysis.
- When the available data are unscaled (e.g., ordinal or ranked data).
