What is a parametric test?
A parametric test is any statistical test that assumes the data follow a particular distribution, most commonly the normal distribution. This is in contrast to a non-parametric test, which makes few or no assumptions about the underlying distribution. When their assumptions hold, parametric tests are more powerful than non-parametric tests, but they are also more restrictive.
When to use a parametric test
There are a variety of parametric tests that can be used, and the type of test you use will depend on the specific situation. In general, parametric tests are most appropriate when the following conditions are met:
- The data are normally distributed
- The sample size is large (n > 30)
- The variables being compared are measured at the interval or ratio level
If one or more of these conditions is not met, a nonparametric test may be more appropriate.
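The conditions above can be screened mechanically. Here is a rough standard-library sketch; the skewness cutoff of 1 is a loose rule of thumb for "roughly symmetric", not a formal normality test (a real analysis would use something like Shapiro–Wilk), and the interval/ratio condition still has to be judged by the analyst:

```python
import statistics

def meets_parametric_conditions(data, min_n=30):
    """Rough screen for the conditions above (illustrative only).

    Checks sample size and approximate symmetry via skewness.
    It cannot check measurement level (interval/ratio) for you."""
    n = len(data)
    if n < min_n:
        return False
    mean = statistics.fmean(data)
    sd = statistics.stdev(data)
    # sample skewness: near 0 for symmetric, normal-like data
    skew = sum((x - mean) ** 3 for x in data) / (n * sd ** 3)
    return abs(skew) < 1  # loose rule of thumb

# a roughly symmetric sample of 40 interval-level values
sample = [52, 48, 50, 51, 49, 47, 53, 50] * 5
print(meets_parametric_conditions(sample))        # passes both checks
print(meets_parametric_conditions([1.0, 2.0]))    # fails: too few observations
```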
When not to use a parametric test
- When your data is not normally distributed
If your data is not normally distributed, then you cannot use a parametric test. Instead, you will need to use a nonparametric test that does not make assumptions about the underlying distribution of the data.
- When you have fewer than 30 observations
In general, parametric tests gain power as you collect more data. With fewer than 30 observations, the Central Limit Theorem offers little protection, so the sampling distribution of the mean may not be approximately normal unless the data themselves are. In this situation, you can still use a parametric test if your data meet the other assumptions listed above, especially normality. If they do not, you will need to use a nonparametric test.
- When you are dealing with ranked data
Parametric tests are designed for interval or ratio level data. If your data are at the ordinal level or below, you will need to use a nonparametric test.
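For ranked data, a rank-based alternative such as the Mann–Whitney U test is the usual choice. The U statistic itself is simple enough to sketch directly (this illustrative version counts pairwise wins and gives ties half credit; it computes only the statistic, not a p-value):

```python
def mann_whitney_u(x, y):
    """U statistic for the rank-based Mann-Whitney test,
    a nonparametric alternative to the two-sample t-test
    for ordinal (ranked) data. Sketch only: no p-value."""
    u = 0.0
    for a in x:
        for b in y:
            if a > b:
                u += 1      # x-value outranks y-value
            elif a == b:
                u += 0.5    # ties get half credit
    return u

# hypothetical 1-5 satisfaction ratings from two groups
group_a = [4, 5, 3, 4, 5]
group_b = [2, 3, 1, 2, 3]
print(mann_whitney_u(group_a, group_b))  # near the maximum of 25,
                                         # suggesting group_a ranks higher
```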
How to set up a parametric test
There are a few things you need to know in order to set up a parametric test correctly. By understanding what parametric tests are and how they work, you’ll be able to set up your test and avoid any potential errors. Let’s get started.
Step 1: Choose your test metric
The first step in setting up a parametric test is to decide on the metric you’ll use to measure success. This metric should be something that is important to your business goals, and which you think will be influenced by the change you’re testing. For example, if you’re testing a new checkout flow on an ecommerce site, your metric might be conversion rate (the percentage of visitors who complete a purchase).
Once you’ve decided on a metric, you’ll need to decide how much of an improvement you want to see in order for the test to be considered successful. This is called your ‘success criterion’. For example, if you’re aiming for a 5% increase in conversion rate, your success criterion would be 5%.
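Checking a result against the success criterion comes down to computing the relative lift of the treatment over the control. A tiny sketch with hypothetical conversion rates:

```python
def lift(control_rate, treatment_rate):
    """Relative improvement of the treatment over the control."""
    return (treatment_rate - control_rate) / control_rate

# hypothetical rates: 4.0% control vs 4.3% treatment,
# against a 5% relative-lift success criterion
observed = lift(0.040, 0.043)
print(f"{observed:.1%}")  # 7.5% relative lift, above the 5% criterion
```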
Step 2: Set up your control and treatment groups
The next step is to set up your control and treatment groups. The control group is the group of users who will see the existing experience (i.e. the ‘control’). The treatment group is the group of users who will see the new experience (i.e. the ‘treatment’).
It’s important to make sure that both groups are as similar as possible, so that any difference in results can be attributed to the change being tested. For example, if you’re testing a new checkout flow on an ecommerce site, make sure that both groups are visitors who are looking to buy a product (i.e. they’re in the same ‘funnel’).
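A common way to keep the groups comparable is to assign visitors at random, but deterministically, so a returning visitor always sees the same variant. A sketch using hashing (the experiment name used as the salt is hypothetical):

```python
import hashlib

def assign_group(user_id, experiment="checkout-test"):
    """Deterministically bucket a user into control or treatment.

    Hashing the (experiment, user) pair gives an assignment that is
    effectively random across users but stable for each user."""
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    return "treatment" if int(digest, 16) % 2 else "control"

print(assign_group("visitor-42"))  # same answer every time for this visitor
```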
Step 3: Collect data
Now it’s time to start collecting data! Depending on your platform/tools, there are different ways to do this. Once you have collected enough data from both groups (usually at least 100 conversions per group), you can start analyzing it.
Step 4: Set up your test
Now that you have everything you need, it’s time to set up your parametric test. Here’s a step-by-step guide:
- If you haven’t already, decide on the variable that you want to test and the values that you want to use. Keep in mind that your variable should be something that can be measured, such as height, weight, or time.
- Choose a level of significance for your test. This is usually 0.05 or 0.01.
- Calculate the sample size that you need for your test using a sample size calculator or by consulting a statistical table.
- Collect your data by conducting your experiment or survey according to your plan. Make sure to collect enough data to reach your desired sample size.
- Once you have collected all of your data, it’s time to analyze it and see if there are any significant differences between the groups that you tested.
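The sample-size step in the list above can also be computed directly rather than read from a table. A sketch of the standard formula for comparing two means; the z constants below are the normal quantiles for a two-sided significance level of 0.05 and 80% power:

```python
import math

def sample_size_per_group(effect, sd, z_alpha=1.96, z_beta=0.84):
    """Per-group n to detect a difference of `effect` between two
    means, given standard deviation `sd`. Defaults correspond to
    alpha = 0.05 (two-sided) and 80% power."""
    n = 2 * (z_alpha + z_beta) ** 2 * (sd / effect) ** 2
    return math.ceil(n)

# detect a 5-unit difference when the standard deviation is 10
print(sample_size_per_group(effect=5, sd=10))  # 63 per group
```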
Step 5: Analyze your results
After you have collected your data, it is time to analyze it. In many cases, you can use a parametric test to analyze your results. Parametric tests are statistical tests that make assumptions about the data, such as assuming that the data are normally distributed. These assumptions allow parametric tests to be more powerful than nonparametric tests, which make fewer assumptions about the data.
To decide whether or not to use a parametric test, you need to determine whether or not the assumptions of the parametric test are met. If the assumptions are not met, you should not use the parametric test and should instead use a nonparametric test.
There are four main assumptions of parametric tests:
- The data are interval or ratio data.
- The data are normally distributed.
- The variances of the groups being compared are roughly equal (homogeneity of variance).
- The sample size is large enough.
If all four of these assumptions are met, you can use a parametric test to analyze your results.
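When the assumptions hold, a two-sample t-test is a typical choice for comparing two group means. A minimal, standard-library sketch of the pooled-variance t statistic (illustrative only; it computes the statistic, which you would then compare against a t table):

```python
import math
import statistics

def two_sample_t(x, y):
    """Pooled-variance two-sample t statistic (Student's t-test).

    Sketch only: compare |t| against a t distribution with
    len(x) + len(y) - 2 degrees of freedom for significance."""
    nx, ny = len(x), len(y)
    vx, vy = statistics.variance(x), statistics.variance(y)
    pooled = ((nx - 1) * vx + (ny - 1) * vy) / (nx + ny - 2)
    se = math.sqrt(pooled * (1 / nx + 1 / ny))
    return (statistics.fmean(x) - statistics.fmean(y)) / se

# hypothetical measurements from two groups
print(two_sample_t([5, 6, 7, 8, 9], [1, 2, 3, 4, 5]))  # 4.0
```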
Examples of parametric tests
Parametric tests make assumptions about the data being analyzed. When those assumptions hold, these tests are more powerful and can be more accurate than nonparametric tests. However, parametric tests can only be used in certain situations. Let’s take a look at some examples of parametric tests.
A/B testing is a type of statistical hypothesis testing in which two versions (A and B) of a product are compared to determine which one is more effective. Version A is the control while version B is the variant being tested. The goal of A/B testing is to identify any significant difference between the two versions so that a decision can be made about which version is better.
A/B testing can be used to test anything, but it is most commonly used to test changes to a website or app in order to improve conversion rates (e.g. sales, sign-ups, etc.). For example, a company might want to test two different home page designs to see which one results in more people signing up for their newsletter. Or, an eCommerce site might want to test two different checkout processes to see which one results in more sales.
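An A/B test on conversion rate is commonly analyzed with a two-proportion z-test. A standard-library sketch with hypothetical traffic numbers (|z| greater than about 1.96 is significant at the 0.05 level):

```python
import math

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """z statistic comparing two conversion rates, using the
    pooled standard error. Sketch only; conversions and visitor
    counts here are hypothetical."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

# 200/5000 conversions on the control vs 260/5000 on the variant
z = two_proportion_z(200, 5000, 260, 5000)
print(round(z, 2))  # above 1.96, so significant at the 0.05 level
```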
Multivariate testing is a family of parametric statistical tests used when there are more than two variables. It allows the researcher to test for main effects and for interactions between variables.
There are a variety of multivariate tests, but the most common ones used in research are:
- ANOVA (analysis of variance): a parametric test used to compare the means of two or more groups on a single dependent variable.
- MANOVA (multivariate analysis of variance): a parametric test used to compare the means of two or more groups on multiple dependent variables.
- MANCOVA (multivariate analysis of covariance): a parametric test used to compare the means of two or more groups on multiple dependent variables while controlling for one or more covariates.
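The one-way ANOVA F statistic is just between-group variance divided by within-group variance, which is simple enough to sketch from scratch (illustrative only; compare F against an F table with k-1 and N-k degrees of freedom):

```python
import statistics

def one_way_anova_f(*groups):
    """F statistic for one-way ANOVA: mean square between groups
    over mean square within groups. Sketch only, no p-value."""
    k = len(groups)
    n_total = sum(len(g) for g in groups)
    grand_mean = statistics.fmean(v for g in groups for v in g)
    ss_between = sum(
        len(g) * (statistics.fmean(g) - grand_mean) ** 2 for g in groups
    )
    ss_within = sum((v - statistics.fmean(g)) ** 2 for g in groups for v in g)
    ms_between = ss_between / (k - 1)       # k - 1 degrees of freedom
    ms_within = ss_within / (n_total - k)   # N - k degrees of freedom
    return ms_between / ms_within

# hypothetical measurements from three groups
print(one_way_anova_f([1, 2, 3], [2, 3, 4], [5, 6, 7]))  # 13.0
```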
Key takeaways
- In general, parametric tests make more assumptions about your data than non-parametric tests.
- Because of this, parametric tests are less robust than non-parametric tests: their results can be misleading when the assumptions are violated.
- However, parametric tests are more powerful than non-parametric tests, so if the assumptions are met, you should use a parametric test.
- The most important assumption for parametric tests is that the data is normally distributed.
- There are several ways to check for normality, including visual methods (histograms and Q-Q plots) and numerical methods (skewness and kurtosis).
- If your data is not normal, you can transform it to make it normal or you can use a non-parametric test.
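Those last two points can be sketched in a few lines: compute skewness directly, and apply a log transform to right-skewed data (hypothetical values; skewness near 0 suggests symmetry, while large positive values indicate a right skew):

```python
import math
import statistics

def skewness(data):
    """Sample skewness: roughly 0 for symmetric, normal-like data."""
    n = len(data)
    m = statistics.fmean(data)
    sd = statistics.pstdev(data)
    return sum((x - m) ** 3 for x in data) / (n * sd ** 3)

# strongly right-skewed data (e.g. response times) often looks
# much more symmetric after a log transform
raw = [1, 1, 2, 2, 3, 4, 8, 16, 32, 64]
logged = [math.log(x) for x in raw]
print(round(skewness(raw), 2), round(skewness(logged), 2))
```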