Sample Size Calculator: Margin of Error & Confidence Intervals
Quick Answer
- For a 95% confidence level and ±5% margin of error, you need 384 respondents — regardless of total population size (for large populations).
- The formula: n = (Z² × p × (1−p)) / E², where Z = 1.96 for 95% confidence, p = 0.5, E = 0.05.
- Smaller margins of error require disproportionately larger samples — going from ±5% to ±3% nearly triples your required sample size.
- A/B tests require separate power calculations — detecting a 5% absolute lift typically needs ~1,600 participants per group at 80% power.
The Sample Size Formula
The standard formula for calculating sample size in a proportion survey is:
n = (Z² × p × (1−p)) / E²
Where:
- n = required sample size
- Z = Z-score corresponding to your confidence level (1.96 for 95%, 2.576 for 99%, 1.645 for 90%)
- p = estimated proportion in the population (use 0.5 if unknown — this gives the most conservative, largest required sample)
- E = desired margin of error expressed as a decimal (0.05 for ±5%)
When you don't know the true proportion, setting p = 0.5 is standard practice. It maximizes p × (1−p), which peaks at 0.25 when p = 0.5, giving you the largest — and most conservative — sample size estimate.
Worked Example
You want to survey customers about product satisfaction. You don't know what percentage will respond positively, so you use p = 0.5. You want 95% confidence and a ±5% margin of error.
n = (1.96² × 0.5 × 0.5) / 0.05²
n = (3.8416 × 0.25) / 0.0025
n = 0.9604 / 0.0025
n = 384.16 ≈ 384 respondents
That's the number you need to get a statistically valid result — assuming a large or unknown population. You don't need to survey the entire customer base. This is why national polls with 1,500 respondents can represent 330 million Americans.
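The arithmetic above can be sketched in a few lines of Python. This is a minimal illustration; the function name and the Z-score lookup table are ours, not part of any statistics library:

```python
# Z-scores for common confidence levels (two-sided)
Z_SCORES = {0.90: 1.645, 0.95: 1.96, 0.99: 2.576}

def sample_size(confidence=0.95, margin=0.05, p=0.5):
    """Raw sample size n = Z^2 * p * (1 - p) / E^2 (round in practice)."""
    z = Z_SCORES[confidence]
    return (z ** 2) * p * (1 - p) / margin ** 2

n = sample_size(confidence=0.95, margin=0.05)
print(round(n, 2))  # 384.16 -> the 384 respondents in the worked example
```

Using p = 0.5 as the default mirrors the conservative convention described above; pass a different p if you have a prior estimate.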
Sample Size Reference Table
Here are the required sample sizes for common survey configurations, all using p = 0.5 (maximum variance):
| Confidence Level | Margin of Error | Required Sample Size |
|---|---|---|
| 95% | ±3% | 1,068 |
| 95% | ±5% | 384 |
| 95% | ±10% | 97 |
| 99% | ±5% | 664 |
Notice the non-linear relationship. Halving your margin of error from ±10% to ±5% nearly quadruples the required sample (97 → 384). This is because E is squared in the denominator — precision is expensive.
Finite Population Correction
The formula above assumes your population is very large (or infinite). When your population is small — say, you're surveying a team of 200 employees — you can apply a correction that reduces the required sample size:
n_adj = n / (1 + (n−1) / N)
Where N is the total population size and n is the sample size from the standard formula.
Example: the standard formula says you need 384 respondents. Your company only has 500 employees (N = 500):
n_adj = 384 / (1 + (384−1) / 500)
n_adj = 384 / (1 + 0.766)
n_adj = 384 / 1.766
n_adj ≈ 217 respondents
The correction matters significantly when your sample is more than 5% of your total population. If your population is ≥ 10,000, skip it — the difference is negligible.
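The correction is a one-liner in code. A small sketch of the example above (function name is ours):

```python
def finite_population_correction(n, N):
    """Adjust the infinite-population sample size n for a population of size N."""
    return n / (1 + (n - 1) / N)

# 384 from the standard formula, applied to a company of 500 employees:
n_adj = finite_population_correction(384, 500)
print(round(n_adj))  # ≈ 217 respondents
```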
Margin of Error Formula (Reverse)
Sometimes you already have a sample and want to know your margin of error. The reverse formula is:
E = Z × √(p(1−p) / n)
If you surveyed 500 people and 60% said yes (p = 0.6), at 95% confidence (Z = 1.96):
E = 1.96 × √(0.6 × 0.4 / 500)
E = 1.96 × √(0.00048)
E = 1.96 × 0.0219
E = ±4.3%
So your result is 60% ± 4.3% — the true value likely falls between 55.7% and 64.3%.
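The reverse calculation is just as short. A sketch of the example above (function name is ours):

```python
import math

def margin_of_error(p, n, z=1.96):
    """E = Z * sqrt(p * (1 - p) / n), with Z defaulting to 95% confidence."""
    return z * math.sqrt(p * (1 - p) / n)

e = margin_of_error(p=0.6, n=500)
print(f"±{e:.1%}")  # ±4.3%
```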
Confidence Intervals Explained
A 95% confidence interval does not mean “there is a 95% chance the true value is in this range.” The true value either is or isn't in the interval — it's fixed. What it means is:
If you ran this survey 100 times using the same method, 95 of those 100 intervals would contain the true population value. Five would miss it entirely. The interval is a statement about the procedure, not about any single result.
This distinction matters when interpreting polling results. A poll showing 52% support with a ±3% MOE doesn't tell you the true value is between 49%–55%. It tells you that the method used produces intervals that capture the true value 95% of the time.
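The long-run interpretation can be demonstrated with a small Monte Carlo simulation. This is a sketch: the true proportion, trial count, and seed are arbitrary choices, and it uses the simple Wald interval:

```python
import math
import random

random.seed(42)

TRUE_P, N, Z, TRIALS = 0.52, 384, 1.96, 2000
hits = 0
for _ in range(TRIALS):
    # Simulate one survey of N respondents drawn from a population with TRUE_P support
    p_hat = sum(random.random() < TRUE_P for _ in range(N)) / N
    e = Z * math.sqrt(p_hat * (1 - p_hat) / N)
    # Did this particular interval capture the true value?
    if p_hat - e <= TRUE_P <= p_hat + e:
        hits += 1

print(f"coverage: {hits / TRIALS:.1%}")  # close to the nominal 95%
```

Each simulated survey produces a different interval; roughly 95% of them contain the true value, which is exactly the procedural claim described above.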
A/B Testing Sample Size
Surveys and A/B tests have different sample size requirements. For A/B tests, you need to account for:
- Statistical significance (α) — typically set at 0.05 (5% false positive rate)
- Statistical power (1 − β) — typically 80% or 90%; the probability of detecting a real effect
- Minimum detectable effect (MDE) — the smallest lift you care about detecting
- Baseline conversion rate — your current performance
A practical example: you want to detect a 5% absolute lift (e.g., conversion goes from 50% to 55%) at 80% statistical power with α = 0.05. Using standard power analysis tables, this requires approximately 1,600 participants per group — 3,200 total across control and treatment.
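The per-group figure can be reproduced with the standard two-proportion formula under the normal approximation. A sketch, with z-values hardcoded for a two-sided α = 0.05 and 80% power (function name is ours):

```python
import math

def ab_test_sample_size(p1, p2, z_alpha=1.96, z_beta=0.8416):
    """Per-group n to detect a shift from baseline p1 to p2 (normal approximation)."""
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return (z_alpha + z_beta) ** 2 * variance / (p2 - p1) ** 2

n = ab_test_sample_size(0.50, 0.55)  # detect a 5-point absolute lift
print(math.ceil(n))  # ~1,562 per group, matching the ~1,600 rule of thumb
```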
The most common mistake in A/B testing is stopping early when results look good. Running a test until it “turns significant” inflates the false positive rate well above your stated α. Decide your sample size before you start, then wait.
Real-World Benchmark: How Pew Research Sets Sample Sizes
Pew Research Center, one of the most respected polling organizations in the U.S., typically conducts national surveys with around 1,500 respondents to achieve a ±3% margin of error at 95% confidence. Their methodology documentation explains that this sample size — drawn from a representative panel — produces reliable estimates for the adult U.S. population of roughly 260 million people.
The polling industry standard for national political polls is typically 900–1,200 respondents, which delivers roughly ±3% MOE at 95% confidence. This is why major election polls from organizations like Gallup, AP-NORC, and Marist typically report a “±3 percentage points” margin.
Common Mistakes
Using Too Small a Sample
An underpowered study is arguably worse than no study — it gives you false confidence in results that may be noise. A sample of 50 gives a ±14% margin of error at 95% confidence. You can't draw meaningful conclusions from that. Before collecting data, calculate your required n and commit to hitting it.
Confusing Confidence Level with Probability
Once your survey is complete and the interval is calculated, it's fixed. Saying “there's a 95% chance the true value is between 48% and 56%” misrepresents what confidence intervals mean. The 95% applies to the long-run reliability of the method, not to any single interval.
Survivorship Bias in Response Rates
A sample size calculation assumes random, representative sampling. If your survey has a 20% response rate, the 80% who didn't respond may differ systematically from those who did. The math gives you the right n, but biased sampling can still produce wrong answers. Sample quality matters as much as sample size.
Calculate your exact sample size
Use our free Sample Size Calculator →
Frequently Asked Questions
How do you calculate sample size for a survey?
Use the formula n = (Z² × p × (1−p)) / E², where Z is the Z-score for your confidence level (1.96 for 95%), p is the estimated proportion (use 0.5 for maximum variance), and E is your desired margin of error (0.05 for ±5%). For a standard 95% confidence level and ±5% margin of error, this yields n = (1.96² × 0.5 × 0.5) / 0.05² = 384 respondents.
What is margin of error?
Margin of error (MOE) is the range within which the true population value is expected to fall, given your sample results. If a poll shows 52% support with a ±3% margin of error, the true value likely falls between 49% and 55%. The formula is E = Z × √(p(1−p)/n). A smaller MOE requires a larger sample.
What does a 95% confidence interval mean?
A 95% confidence interval means: if you ran the same survey 100 times using the same sampling method, 95 of those intervals would contain the true population value. It does not mean there is a 95% probability the true value is in this particular interval — that's a common misinterpretation. The interval is a statement about the procedure, not about any single result.
How large of a sample size do I need?
It depends on your required margin of error and confidence level. For 95% confidence: ±3% MOE needs 1,068; ±5% MOE needs 384; ±10% MOE needs 97. For 99% confidence with ±5% MOE, you need 664. National polling organizations like Pew Research typically use around 1,500 respondents to achieve ±3% MOE nationally.
What is the difference between confidence level and confidence interval?
Confidence level is the probability (e.g., 95%) that your method will produce an interval containing the true value — you set this before sampling. Confidence interval is the actual numeric range (e.g., 48%–54%) calculated from your specific sample data. A higher confidence level produces wider intervals. Increasing from 95% to 99% confidence requires a larger sample to maintain the same margin of error.