Find accurate z, t, chi-square, and F cutoffs for one-tailed and two-tailed hypothesis testing. Use the rejection region to make faster, clearer statistical decisions.
Enter test parameters to calculate critical values
This calculator helps you find the cutoff point for a test statistic so you can decide whether your sample result lands inside the rejection region. The tool follows the standard critical value method used in hypothesis testing for the normal, t, chi-square, and F distributions.
Choose Z, t, chi-square, or F based on the test statistic your method produces under the null hypothesis.
Match the test type to your alternative hypothesis, whether it is left-tailed, right-tailed, or two-tailed.
Set the significance level and add degrees of freedom when the t distribution, chi-square distribution, or F distribution requires them.
Use the critical value and displayed rejection region to compare against your test statistic and make the final decision.
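The steps above boil down to a simple decision rule. As a rough sketch for the z case only, here is how it might look in Python using just the standard library (the `rejects` helper is a hypothetical name for illustration, not part of the calculator):

```python
from statistics import NormalDist

def rejects(tail, alpha, z):
    """Apply the critical value decision rule for a standard normal
    test statistic. tail is 'left', 'right', or 'two'."""
    Q = NormalDist().inv_cdf  # quantile (inverse CDF) of the standard normal
    if tail == "left":
        return z <= Q(alpha)          # rejection region: far left tail
    if tail == "right":
        return z >= Q(1 - alpha)      # rejection region: far right tail
    return abs(z) >= Q(1 - alpha / 2)  # two-tailed: both tails, alpha split in half

# Example: a z statistic of 4.8 in a two-tailed test at alpha = 0.05
print(rejects("two", 0.05, 4.8))  # True: falls in the rejection region
```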
In practice, you usually start with a null hypothesis and an alternative hypothesis. The null hypothesis is the default claim, and the alternative hypothesis is what you want evidence for. Your alpha level sets the maximum Type I error rate you are willing to accept. Once you know the distribution and alpha, a critical value tells you exactly how extreme your test statistic must be before you reject the null hypothesis.
This page is useful whether you are building a confidence interval, checking a classroom problem, reviewing a lab report, or validating the output from statistical software. If your instructor gives you a significance level of 0.05 and asks for a two-tailed test, for example, the calculator automatically splits alpha into 0.025 in each tail and returns the correct cutoff values.
A critical value is not the same as your test statistic. It is the boundary that marks how far into the tail your result must fall before you reject the null hypothesis.
The calculator returns the distribution you selected, the tail type, your significance level, and one or two critical values. For symmetric distributions such as the standard normal distribution and the t distribution, a two-tailed test gives you matching cutoffs around zero. For skewed distributions such as chi-square and F, a two-tailed test produces two different positive cutoffs.
The rejection region shown under the result is the set of test statistic values that would lead you to reject the null hypothesis. If your computed test statistic falls inside that region, the sample gives enough evidence to reject the null at your chosen alpha level.
Suppose a manufacturer claims a machine fills bottles with a mean of 50 ounces. You collect a sample of 64 bottles, find a sample mean of 52.4 ounces, and know the population standard deviation is 4 ounces. You want a two-tailed z test at alpha = 0.05.
First, the calculator gives z critical values of +/- 1.96. Next, compute the test statistic:
z = (52.4 - 50) / (4 / sqrt(64)) = 2.4 / 0.5 = 4.8
Because 4.8 is greater than 1.96, the test statistic falls in the rejection region. You reject the null hypothesis and conclude that the average fill amount is significantly different from 50 ounces. This is exactly how the critical value method turns a raw test statistic into a yes-or-no decision.
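The same arithmetic can be checked in a few lines, assuming Python's standard library `statistics.NormalDist` for the cutoff:

```python
from math import sqrt
from statistics import NormalDist  # standard library, no extra packages needed

# Worked example from the text: H0 mean = 50 oz, n = 64,
# sample mean = 52.4 oz, known sigma = 4 oz, two-tailed, alpha = 0.05.
alpha = 0.05
z_crit = NormalDist().inv_cdf(1 - alpha / 2)   # upper cutoff, about 1.96

z_stat = (52.4 - 50) / (4 / sqrt(64))          # = 2.4 / 0.5 = 4.8

reject = abs(z_stat) > z_crit
print(f"critical = ±{z_crit:.2f}, statistic = {z_stat}, reject H0: {reject}")
```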
Alpha controls how strict your decision rule is. Smaller alpha values push the cutoff deeper into the tail and make rejection harder.
The same cutoffs also appear in confidence interval work. A 95% two-tailed interval uses the same central coverage as alpha = 0.05.
Degrees of freedom shape the t distribution, chi-square distribution, and F distribution. Lower df usually means more extreme critical values.
The rejection region is where your test statistic must fall for you to reject the null hypothesis. It sits in one tail or both tails, depending on the alternative hypothesis.
If your test statistic lands exactly on the cutoff, most textbook treatments count that value as part of the rejection region. Always follow the convention your class, software package, or field standard uses.
If you want to know how to calculate a critical value manually, the key idea is always the same: find the quantile of the right distribution that leaves the correct tail area under the curve.
Left-tailed test: critical value = Q(alpha)
Right-tailed test: critical value = Q(1 - alpha)
Two-tailed test: lower critical value = Q(alpha / 2), upper critical value = Q(1 - alpha / 2)
Here, Q is the quantile function, which is the inverse of the cumulative distribution function. In simple terms, it gives you the x-value where the area to the left equals the target probability.
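For the standard normal distribution, Python's standard library exposes the quantile function directly as `NormalDist.inv_cdf`, so the three rules above can be checked like this:

```python
from statistics import NormalDist  # stdlib; t, chi-square, and F need their own quantile functions

Q = NormalDist().inv_cdf  # quantile function: inverse of the standard normal CDF
alpha = 0.05

left_crit  = Q(alpha)            # left-tailed cutoff, about -1.645
right_crit = Q(1 - alpha)        # right-tailed cutoff, about +1.645
two_tailed = (Q(alpha / 2), Q(1 - alpha / 2))  # about (-1.96, +1.96)

print(left_crit, right_crit, two_tailed)
```

Because the standard normal distribution is symmetric around zero, the two-tailed cutoffs are mirror images of each other.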
For a z test, you use the standard normal distribution. For a t test, you use the t distribution with the correct degrees of freedom. For chi-square procedures, you use the chi-square distribution, usually for variance or categorical frequency tests. For ANOVA and variance-ratio work, you use the F distribution with both numerator and denominator degrees of freedom.
This is why choosing the right distribution matters so much. If you use a z critical value when the problem really calls for a t critical value, your cutoff sits too close to the center, your rejection region is too large, and you can reject a true null hypothesis more often than your alpha level allows.
Say you need the t critical value for a two-tailed test with alpha = 0.05 and sample size n = 16. First, convert sample size to degrees of freedom: df = n - 1 = 15. Because the test is two-tailed, split alpha into 0.025 in each tail. Then find Q(0.975) from the t distribution with 15 degrees of freedom.
The result is about 2.131. So the two critical values are -2.131 and 2.131. If your t statistic is larger than 2.131 in absolute value, you reject the null hypothesis.
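The standard library has no t quantile function, but the lookup above can be sketched from first principles: integrate the t density numerically to get the CDF, then invert it by bisection. This is a stdlib-only illustration; in practice you would use a statistics library's quantile routine instead.

```python
from math import gamma, pi, sqrt

def t_pdf(x, df):
    """Density of the t distribution with df degrees of freedom."""
    c = gamma((df + 1) / 2) / (sqrt(df * pi) * gamma(df / 2))
    return c * (1 + x * x / df) ** (-(df + 1) / 2)

def t_cdf(x, df, n=2000):
    """P(T <= x) via Simpson's rule on [0, |x|], using symmetry around 0."""
    b = abs(x)
    h = b / n
    s = t_pdf(0.0, df) + t_pdf(b, df)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * t_pdf(i * h, df)
    area = s * h / 3
    return 0.5 + area if x >= 0 else 0.5 - area

def t_quantile(p, df):
    """Invert the CDF by bisection to get the critical value."""
    lo, hi = -50.0, 50.0
    for _ in range(60):
        mid = (lo + hi) / 2
        if t_cdf(mid, df) < p:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

# Two-tailed test, alpha = 0.05, n = 16 -> df = 15
print(round(t_quantile(0.975, 15), 3))  # about 2.131
```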
The examples below show where each distribution appears in real statistics work and how the cutoff changes with alpha, tail direction, and degrees of freedom.
If you are testing a known-variance process at alpha = 0.05, the z critical values are +/- 1.9600. This setup is common in manufacturing or quality control when the population standard deviation is already known from a stable process.
With n = 16, you usually have df = 15. At alpha = 0.05 for a right-tailed test, the t critical value is about 1.7531. This is useful when you want to show that a new method increases a mean score rather than simply changes it.
Suppose you sort customer choices into six categories, which gives df = 5. At alpha = 0.01, the right-tail chi-square critical value is about 15.0863. If your test statistic is larger than that, the observed counts differ from the expected pattern enough to reject the null hypothesis.
For a sample of 21 observations, df = 20. With alpha = 0.05, the lower chi-square cutoff is about 9.5908 and the upper cutoff is about 34.1696. A sample variance that produces a test statistic below the lower value or above the upper value would lead to rejection.
If you compare four groups with a total sample size of 28, the ANOVA degrees of freedom are df1 = 3 and df2 = 24. At alpha = 0.05, the right-tail F critical value is about 3.0088. An F statistic above that value suggests that at least one group mean differs from the others.
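If SciPy is available, each of the cutoffs quoted above can be reproduced with `scipy.stats`, whose distribution objects expose the quantile function as `ppf`. This assumes SciPy is installed; the rounded outputs match the figures in the examples.

```python
# Assumes SciPy is installed; scipy.stats exposes ppf (the quantile function).
from scipy import stats

print(round(stats.norm.ppf(0.975), 4))             # z, two-tailed alpha 0.05
print(round(stats.t.ppf(0.95, df=15), 4))          # t, right-tailed, df 15
print(round(stats.chi2.ppf(0.99, df=5), 4))        # chi-square, right tail, df 5
print(round(stats.chi2.ppf(0.025, df=20), 4))      # chi-square lower cutoff, df 20
print(round(stats.chi2.ppf(0.975, df=20), 4))      # chi-square upper cutoff, df 20
print(round(stats.f.ppf(0.95, dfn=3, dfd=24), 4))  # F, right tail, df1 = 3, df2 = 24
```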
Many textbook questions give a confidence level instead of alpha. Convert it with alpha = 1 - confidence level. So a 99% confidence interval means alpha = 0.01, and a two-tailed test splits that into 0.005 in each tail before you look up the quantile.
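That conversion is one line of arithmetic. For example, with the standard library's normal quantile:

```python
from statistics import NormalDist

confidence = 0.99
alpha = 1 - confidence                    # 0.01
tail = alpha / 2                          # 0.005 in each tail for a two-tailed test
z_crit = NormalDist().inv_cdf(1 - tail)   # about 2.576

print(f"alpha = {alpha:.3f}, z critical = ±{z_crit:.4f}")
```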
Distribution selection is one of the most common sources of error. If you choose the wrong model for your test statistic, even a perfectly calculated cutoff can lead to the wrong conclusion.
A z critical value is the right choice when your procedure assumes a standard normal test statistic. That often happens when the population standard deviation is known, when the sample is large enough for a normal approximation, or when you are working with proportions under standard textbook assumptions.
Use the t distribution when the population standard deviation is unknown and you estimate it from the sample. The smaller your sample, the more the t distribution's heavier tails matter. As degrees of freedom grow, the t distribution moves closer to the standard normal distribution.
Chi-square critical values appear in goodness-of-fit tests, tests of independence in contingency tables, and variance tests for normally distributed data. Unlike z and t, the chi-square distribution is not symmetric around zero, so all of its critical values are nonnegative.
The F distribution compares two scaled variances. You will see it in ANOVA, overall regression tests, and some variance-comparison settings. It needs two degrees of freedom values because one comes from the numerator source of variation and one comes from the denominator source.
Explore more LiteCalc tools that help with classroom work, data analysis, and everyday number problems.
Use this descriptive statistics tool to calculate mean, median, mode, and range before you move into formal hypothesis testing.
Convert counts into ratios and proportions when you are preparing inputs for categorical analysis and sample summaries.
Simplify fraction arithmetic when you are working through hand calculations for formulas, pooled estimates, and classroom examples.
Keep a geometry helper close by when your coursework mixes probability, measurement, and applied math problems.
Jump to vector math when your course moves from statistics into algebra, physics, or data modeling.
Use this calculator for fast angle and function work when your homework combines multiple math topics.
These answers cover the search questions people ask most often before using a critical value calculator.
A critical value is the cutoff point that separates the rejection region from the non-rejection region for a test statistic. If your test statistic falls beyond that cutoff at your chosen alpha level, you reject the null hypothesis.
For a 95% confidence level in a two-tailed z test, alpha is 0.05, so each tail gets 0.025. The upper z critical value is the 0.975 quantile of the standard normal distribution, which is about 1.96, and the lower cutoff is -1.96.
First convert sample size to degrees of freedom, which is usually n - 1 for a one-sample t test. Then choose your alpha level and tail type, and look up the matching t distribution quantile. For example, a sample size of 16 gives df = 15.
Use a one-tailed test when your alternative hypothesis points in only one direction, such as greater than or less than. Use a two-tailed test when you care about any meaningful difference in either direction.
In a two-tailed chi-square variance test, you split alpha across both tails of the chi-square distribution. That creates a lower cutoff and an upper cutoff, and values outside that middle range lead to rejection.
An F distribution depends on two degrees of freedom values. The numerator degrees of freedom usually come from the variation you are testing, and the denominator degrees of freedom usually come from the residual or within-group variation.
Both methods lead to the same decision when used correctly. The critical value method compares the test statistic to a cutoff, while the p-value method compares the observed tail probability to alpha.
A smaller alpha makes the rejection region narrower in probability but farther out in the tails, so the critical value becomes more extreme. A larger alpha moves the cutoff closer to the center and makes rejection easier.
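The effect is easy to see for the two-tailed z case, again using only the standard library:

```python
from statistics import NormalDist

Q = NormalDist().inv_cdf
# Two-tailed upper cutoffs: smaller alpha pushes the critical value farther out.
cutoffs = {alpha: Q(1 - alpha / 2) for alpha in (0.10, 0.05, 0.01)}

for alpha, crit in cutoffs.items():
    print(f"alpha = {alpha:.2f} -> z critical = ±{crit:.3f}")
# Roughly 1.645, 1.960, and 2.576 respectively.
```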