ANCOVA (analysis of covariance) tests if 2+ population means are equal while controlling for 1+ background variables.
Example: do medicines A, B and C result in equal mean blood pressures when controlling for age?
ANCOVA basically combines ANOVA and regression. This tutorial walks you through the analysis with an example in SPSS.
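The tutorial runs the analysis in SPSS; purely as a sketch of the same idea in Python, statsmodels fits ANCOVA as a regression of the outcome on the factor plus the covariate (all data below are made up):

```python
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Made-up data: blood pressure (bp) per medicine, with age as covariate.
df = pd.DataFrame({
    "medicine": ["A", "A", "B", "B", "C", "C"] * 5,
    "age":      [42, 55, 38, 61, 47, 50] * 5,
    "bp":       [120, 135, 118, 140, 125, 130] * 5,
})

# ANCOVA = regression of bp on medicine (factor) plus age (covariate);
# the F-test for medicine is now adjusted for age.
model = smf.ols("bp ~ C(medicine) + age", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))
```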
ANOVA (analysis of variance) tests if 3+ population means are all equal.
Example: do the pupils of schools A, B and C have equal mean IQ scores?
This super simple introduction quickly walks you through the basics such as assumptions, null hypothesis and post hoc tests.
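As a quick illustration of the school example, a minimal Python sketch (scipy) with made-up IQ scores:

```python
from scipy import stats

# Made-up IQ scores for pupils from schools A, B and C.
school_a = [98, 102, 95, 110, 100]
school_b = [105, 99, 101, 97, 103]
school_c = [92, 96, 100, 94, 98]

# One-way ANOVA: H0 = all three population means are equal.
f, p = stats.f_oneway(school_a, school_b, school_c)
print(f"F = {f:.2f}, p = {p:.3f}")  # reject H0 if p < 0.05
```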
Repeated measures ANOVA tests if 3+ variables have equal means in some population.
Example: are the mean scores on IQ tests A, B and C equal for all Dutch children?
This simple introduction quickly walks you through the basics.
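As a rough Python equivalent of this analysis, statsmodels' AnovaRM runs a repeated measures ANOVA on long-format data; everything below is made up:

```python
import pandas as pd
from statsmodels.stats.anova import AnovaRM

# Made-up long-format data: each child took IQ tests A, B and C once.
df = pd.DataFrame({
    "child": list(range(10)) * 3,
    "test":  ["A"] * 10 + ["B"] * 10 + ["C"] * 10,
    "iq":    [100 + (i % 7) for i in range(30)],
})

# H0: the mean scores on tests A, B and C are equal.
res = AnovaRM(df, depvar="iq", subject="child", within=["test"]).fit()
print(res)
```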
A binomial test examines if a population percentage is equal to x.
Example: is 45% of all Amsterdam citizens currently single? Or is it a different percentage?
This simple tutorial quickly walks you through the basics.
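For the Amsterdam example, a minimal Python sketch (scipy's binomtest, requires scipy 1.7+; the counts are made up):

```python
from scipy.stats import binomtest

# Made-up sample: 112 singles among 260 Amsterdam citizens.
# H0: the population percentage of singles is 45%.
result = binomtest(k=112, n=260, p=0.45)
print(result.pvalue)  # reject H0 if p < 0.05
```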
A boxplot is a chart showing how the values of a variable are distributed: it displays the median, the quartiles and any outliers at a glance.
This tutorial quickly walks you through boxplots with some examples.
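As a quick illustration, a minimal Python sketch (matplotlib) with made-up reaction times:

```python
import matplotlib.pyplot as plt

# Made-up reaction times (ms); the boxplot shows their median,
# quartiles and any outliers in one glance.
times = [310, 295, 320, 305, 330, 290, 315, 500]  # 500 will show as an outlier
plt.boxplot(times)
plt.ylabel("Reaction time (ms)")
plt.show()
```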
A chi-square goodness-of-fit test examines if a categorical variable has some hypothesized frequency distribution in some population.
A chi-square independence test evaluates if two categorical variables are related in some population.
This simple introduction explains how the test basically works and how to run and interpret it.
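Both chi-square tests are easy to sketch in Python with scipy; all counts below are made up:

```python
import numpy as np
from scipy.stats import chisquare, chi2_contingency

# Goodness of fit: do observed frequencies match a hypothesized distribution?
observed = [18, 22, 20, 40]   # made-up counts per category
expected = [25, 25, 25, 25]   # H0: all four categories equally likely
print(chisquare(observed, f_exp=expected))

# Independence: are two categorical variables related?
# Made-up crosstab of sex (rows) by preferred brand (columns).
crosstab = np.array([[30, 10, 20],
                     [20, 25, 15]])
chi2, p, dof, exp = chi2_contingency(crosstab)
print(f"chi2 = {chi2:.2f}, p = {p:.3f}")
```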
Cohen’s D is the effect size measure of choice for t-tests.
This simple tutorial quickly walks you through the basics.
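For two independent samples, Cohen's D is the mean difference divided by the pooled standard deviation. A minimal Python sketch with made-up scores (the function and data are just for illustration):

```python
import numpy as np

def cohens_d(x, y):
    """Cohen's D for two independent samples: mean difference / pooled SD."""
    nx, ny = len(x), len(y)
    pooled_var = ((nx - 1) * np.var(x, ddof=1)
                  + (ny - 1) * np.var(y, ddof=1)) / (nx + ny - 2)
    return (np.mean(x) - np.mean(y)) / np.sqrt(pooled_var)

# Rule of thumb: d around 0.2 / 0.5 / 0.8 is a small / medium / large effect.
print(cohens_d([5, 6, 7, 8, 9], [4, 5, 6, 6, 7]))
```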
Cramér’s V is a number between 0 and 1 that indicates how strongly two nominal variables are correlated.
Because it's suitable for categorical variables, Cramér’s V is often used as an effect size measure for a chi-square independence test.
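Cramér's V follows directly from the chi-square statistic as sqrt(chi2 / (n * (min(rows, columns) - 1))). A minimal Python sketch with a made-up crosstab:

```python
import numpy as np
from scipy.stats import chi2_contingency

def cramers_v(crosstab):
    """Cramér's V = sqrt(chi2 / (n * (min(rows, cols) - 1)))."""
    chi2 = chi2_contingency(crosstab)[0]
    n = crosstab.sum()
    k = min(crosstab.shape) - 1
    return np.sqrt(chi2 / (n * k))

# Made-up crosstab of two nominal variables.
crosstab = np.array([[30, 10, 20],
                     [20, 25, 15]])
print(cramers_v(crosstab))  # 0 = no association, 1 = perfect association
```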
Dichotomous variables are variables that hold precisely two distinct values.
Example: sex can only be male or female.
Some analyses that are only suitable for dichotomous variables are the binomial test, the z-test for independent proportions and logistic regression (for its outcome variable).
Effect size is an interpretable number that quantifies the difference between data and some hypothesis.
Effect size measures are useful for comparing effects across and within studies. This tutorial helps you to choose, obtain and interpret an effect size for each major statistical procedure.
Factor analysis examines which variables in your data measure which underlying factors.
This tutorial illustrates the ideas behind factor analysis with a simple step-by-step example in SPSS.
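Just to convey the idea outside SPSS, here's a minimal Python sketch (scikit-learn) on simulated data where 6 items are constructed to measure 2 underlying factors:

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

# Simulated data: 100 respondents answer 6 items that (by construction)
# measure 2 underlying factors.
rng = np.random.default_rng(3)
scores = rng.normal(size=(100, 2))                        # the hidden factors
loadings = np.array([[0.9, 0.0], [0.8, 0.1], [0.7, 0.0],
                     [0.0, 0.9], [0.1, 0.8], [0.0, 0.7]])
items = scores @ loadings.T + rng.normal(scale=0.3, size=(100, 6))

fa = FactorAnalysis(n_components=2).fit(items)
print(fa.components_.round(2))  # estimated loadings: which items measure which factor
```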
A frequency distribution is an overview of all values in some variable and how often these occur.
That is, a frequency distribution shows how frequencies are distributed over values. This tutorial quickly makes things clear with some simple examples.
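A minimal Python sketch (pandas) with made-up data:

```python
import pandas as pd

# Made-up marital status data.
status = pd.Series(["single", "married", "single", "divorced", "married", "single"])

# Absolute and relative frequencies per value.
print(status.value_counts())
print(status.value_counts(normalize=True))  # proportions instead of counts
```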
Kendall’s Concordance Coefficient W is a number between 0 and 1 that indicates interrater agreement.
This tutorial explains the basic idea behind Kendall’s W and shows how to get it from SPSS.
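For raters who rank the same items without ties, Kendall's W is 12·S / (m²(n³ − n)), where S is the sum of squared deviations of the items' rank sums from their mean. A minimal Python sketch with made-up rankings:

```python
import numpy as np

def kendalls_w(ranks):
    """Kendall's W from an (m raters x n items) matrix of ranks, no ties."""
    m, n = ranks.shape
    rank_sums = ranks.sum(axis=0)  # total rank per item
    s = ((rank_sums - rank_sums.mean()) ** 2).sum()
    return 12 * s / (m ** 2 * (n ** 3 - n))

# Made-up example: 3 raters each rank the same 4 items from 1 (best) to 4.
ranks = np.array([[1, 2, 3, 4],
                  [1, 3, 2, 4],
                  [2, 1, 3, 4]])
print(kendalls_w(ranks))  # 0 = no agreement, 1 = complete agreement
```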
Kendall’s Tau is a number between -1 and +1 that indicates to what extent 2 variables are monotonically related.
This tutorial quickly walks you through some basics such as assumptions, significance and confidence intervals for Kendall’s Tau.
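A minimal Python sketch (scipy) with made-up paired measurements:

```python
from scipy.stats import kendalltau

# Made-up paired measurements on two variables.
x = [1, 2, 3, 4, 5, 6]
y = [2, 1, 4, 3, 6, 5]

tau, p = kendalltau(x, y)
print(f"tau = {tau:.2f}, p = {p:.3f}")
```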
The Kolmogorov-Smirnov test examines if a variable is normally distributed in some population.
This “normality assumption” is required for t-tests, ANOVA and many other tests. This tutorial shows how to run and interpret a Kolmogorov-Smirnov test in SPSS with some simple examples.
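Outside SPSS, a minimal Python sketch (scipy) looks like this; note that estimating the mean and SD from the sample makes the test approximate, which is what the Lilliefors correction addresses:

```python
import numpy as np
from scipy.stats import kstest

# Made-up sample; H0: it comes from a normal distribution.
x = np.random.default_rng(1).normal(100, 15, size=50)

# Test against a normal distribution with the sample's own mean and SD.
stat, p = kstest(x, "norm", args=(x.mean(), x.std(ddof=1)))
print(f"D = {stat:.3f}, p = {p:.3f}")  # p < 0.05 suggests non-normality
```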
Levene’s test examines if 2+ populations have equal variances on some variable. This condition, known as the homogeneity of variance assumption, is required by t-tests and ANOVA.
So how to run and interpret this test in SPSS? This simple tutorial quickly walks you through.
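As a quick illustration outside SPSS, a minimal Python sketch (scipy) with made-up scores:

```python
from scipy.stats import levene

# Made-up scores for three groups; H0: equal population variances.
a = [12, 15, 14, 10, 13]
b = [22, 25, 19, 24, 30]
c = [14, 14, 15, 13, 16]

stat, p = levene(a, b, c)
print(f"W = {stat:.2f}, p = {p:.3f}")  # p < 0.05: variances likely unequal
```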
Logistic regression is a technique for predicting a dichotomous outcome variable from 1+ predictors.
This simple introduction quickly walks you through all logistic regression basics with a downloadable example analysis.
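Purely as an illustration of the technique, a minimal scikit-learn sketch with made-up data:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Made-up data: predict pass/fail (0/1) from hours studied.
hours = np.array([[1], [2], [3], [4], [5], [6], [7], [8]])
passed = np.array([0, 0, 0, 1, 0, 1, 1, 1])

model = LogisticRegression().fit(hours, passed)
print(model.predict_proba([[4.5]]))  # predicted probabilities for a new student
```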
Measurement levels are types of variables that tell you how they should be analyzed. There are 4 types: nominal, ordinal, interval and ratio.
This tutorial quickly walks you through with a simple flowchart and some examples.
The median is basically the value that separates the lowest 50% of values from the highest 50%.
Example: a median income of $2,500 means that 50% of all people earn less and 50% earn more than that amount.
The normal distribution is a bell-shaped probability density function.
This tutorial quickly covers all you need to know, such as its basic properties and how probabilities are obtained from it.
A null hypothesis is an exact statement about a population that we try to reject with sample data.
Example: 20% of some population carry virus X. If a sample from this population shows a very different percentage, then we reject this null hypothesis.
A Pearson correlation is a number between -1 and +1 that indicates how strongly two variables are linearly related.
This simple tutorial quickly explains the basics with outstanding illustrations and examples.
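A minimal Python sketch (scipy) with made-up heights and weights:

```python
from scipy.stats import pearsonr

# Made-up heights (cm) and weights (kg) for 5 people.
height = [165, 170, 175, 180, 185]
weight = [60, 66, 70, 77, 82]

r, p = pearsonr(height, weight)
print(f"r = {r:.2f}, p = {p:.3f}")  # r near +1: strong positive linear relation
```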
A probability density function is a function from which probabilities for ranges of outcomes can be obtained.
Example: the probability is 95% that your IQ is between 70 and 130 points. This statement is based on the normal distribution, probably the best known probability density function.
So how does that work? And how do density functions differ from probability distribution functions? This tutorial quickly clears things up.
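The 95% figure can be checked directly from the normal density; a quick scipy sketch, assuming IQ has mean 100 and standard deviation 15:

```python
from scipy.stats import norm

# The probability of a range of outcomes is the area under the
# density over that range; for a CDF, that's a difference of two values.
p = norm.cdf(130, loc=100, scale=15) - norm.cdf(70, loc=100, scale=15)
print(f"P(70 <= IQ <= 130) = {p:.3f}")  # about 0.95
```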
The Shapiro-Wilk test examines if a variable is normally distributed in a population. This assumption is required by some statistical tests such as t-tests and ANOVA.
The SW-test is an alternative to the Kolmogorov-Smirnov test. This tutorial shows how to run and interpret it in SPSS.
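Outside SPSS, a minimal Python sketch (scipy) with made-up data:

```python
import numpy as np
from scipy.stats import shapiro

# Made-up sample; H0: the population is normally distributed.
x = np.random.default_rng(2).normal(0, 1, size=40)

stat, p = shapiro(x)
print(f"W = {stat:.3f}, p = {p:.3f}")  # p < 0.05 suggests non-normality
```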
Statistical significance is roughly the probability of finding your data under some null hypothesis.
If this probability (or “p”) is low, usually p < 0.05, then your data contradict your null hypothesis. In this case, you reject the null hypothesis.
A Spearman rank correlation is a number between -1 and +1 that indicates to what extent 2 variables are monotonically related.
This tutorial quickly walks you through the basics such as assumptions, significance levels, software and more.
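A minimal Python sketch (scipy) on made-up data with a monotonic but non-linear relation:

```python
from scipy.stats import spearmanr

# Made-up data: y grows monotonically (but not linearly) with x.
x = [1, 2, 3, 4, 5]
y = [1, 4, 9, 16, 25]

rho, p = spearmanr(x, y)
print(f"rho = {rho:.2f}, p = {p:.3f}")  # rho = 1.0 here: perfectly monotonic
```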
An independent samples t-test examines if 2 populations have equal means on some variable.
Example: do Dutch women have the same mean salary as Dutch men?
This tutorial quickly walks you through the basics such as the assumptions, null hypothesis and effect size for this test.
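For the salary example, a minimal Python sketch (scipy) with made-up numbers:

```python
from scipy.stats import ttest_ind

# Made-up monthly salaries (EUR) for two independent samples.
women = [2800, 3100, 2950, 3200, 2700]
men   = [3000, 3300, 3100, 3400, 2900]

t, p = ttest_ind(women, men)  # assumes equal variances; see Levene's test
print(f"t = {t:.2f}, p = {p:.3f}")
```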
A one-sample t-test examines if a population mean is likely to be x: some hypothesized value.
Example: do the pupils from my school have a mean IQ score of 100?
This tutorial quickly walks you through the basics for this test, including assumptions, formulas and effect size.
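For the IQ example, a minimal Python sketch (scipy) with made-up scores:

```python
from scipy.stats import ttest_1samp

# Made-up IQ scores for pupils from one school; H0: population mean = 100.
iq = [103, 98, 110, 105, 95, 108, 101, 99]

t, p = ttest_1samp(iq, popmean=100)
print(f"t = {t:.2f}, p = {p:.3f}")
```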
A paired samples t-test examines if 2 variables have equal means in some population.
Example: were the mean salaries over 2018 and 2019 equal for all Dutch citizens?
This tutorial quickly walks you through the correct steps for running this test in SPSS.
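Outside SPSS, a minimal Python sketch (scipy) for the salary example, with made-up numbers:

```python
from scipy.stats import ttest_rel

# Made-up salaries for the same people over 2018 and 2019.
salary_2018 = [2500, 2700, 2300, 3100, 2800]
salary_2019 = [2600, 2750, 2400, 3150, 2900]

t, p = ttest_rel(salary_2018, salary_2019)
print(f"t = {t:.2f}, p = {p:.3f}")  # H0: the two means are equal
```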
Z-scores are scores that have mean = 0 and standard deviation = 1.
All scores can be standardized into z-scores by subtracting the mean from each score and then dividing the difference by the standard deviation.
Such standardized scores may be easier to interpret than the original scores. Z-scores may or may not be normally distributed.
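The standardization itself is a one-liner; a minimal Python sketch with made-up scores:

```python
import numpy as np

# Made-up scores; standardize by subtracting the mean and
# dividing by the standard deviation.
scores = np.array([10, 12, 14, 16, 18])
z = (scores - scores.mean()) / scores.std(ddof=1)

print(z)
print(z.mean(), z.std(ddof=1))  # mean 0, SD 1 by construction
```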
This z-test compares separate sample proportions to a hypothesized population proportion. This tool is freely downloadable and super easy to use.
A z-test for 2 independent proportions examines if some event occurs equally often in 2 subpopulations.
Example: do equal percentages of male and female students answer some exam question correctly?
This tutorial covers examples, assumptions and formulas and presents a simple Excel tool for running z-tests the easy way.
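The tutorial's tool runs in Excel; the underlying formula is simple enough to sketch in Python as well (all counts below are made up):

```python
import numpy as np
from scipy.stats import norm

# Made-up data: 60/100 male vs 75/120 female students answered correctly.
x1, n1 = 60, 100
x2, n2 = 75, 120

p1, p2 = x1 / n1, x2 / n2
p_pooled = (x1 + x2) / (n1 + n2)  # pooled proportion under H0: p1 = p2
se = np.sqrt(p_pooled * (1 - p_pooled) * (1 / n1 + 1 / n2))
z = (p1 - p2) / se
p_value = 2 * norm.sf(abs(z))     # two-sided p-value
print(f"z = {z:.2f}, p = {p_value:.3f}")
```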