“All models are wrong, but some are useful.” (George Box)

Most links below take you to notes and R scripts that I wrote to illustrate how to perform various tasks (chiefly statistical) in R. Other links lead to some of the many excellent websites that cover certain topics and tasks better than I could.

You are free to use my original material on this website for non-commercial purposes. If you plan to make money from it, get in touch with me first so that we can arrange terms and conditions. I am also available as an R-gun for hire for statistical consulting and for teaching courses.



Univariate statistics refers to the set of tests and analyses used for response variables in which every observation is expressed by a single value.


  • Power analysis allows you “to determine the sample size required to detect an effect of a given size with a given degree of confidence” (Clay Ford). Clay Ford’s vignette for Stephane Champely’s package pwr gives some clear examples and explanations (here). For power analysis in mixed-effect models, see this and this article. Here is my example of a power analysis for a linear model and a mixed-effect model.
  • On estimating marginal means in R.
  • In the previous scripts we’ve seen how to compare models using anova(), which performs an F-test on the residual variability of the compared models, or using AIC. Here are some considerations on AIC by Brian McGill.
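To give a flavour of the power-analysis idea in the first bullet above, here is a minimal sketch using base R’s power.t.test() (the pwr package covers the same calculations for many more designs; the effect size and power values below are just illustrative):

```r
# How many observations per group does a two-sample t-test need to detect
# a medium effect (Cohen's d = 0.5) with 80% power at alpha = 0.05?
res <- power.t.test(delta = 0.5, sd = 1, sig.level = 0.05, power = 0.8)
res$n  # roughly 64 observations per group
```

Round n up to the next integer when planning the study, since you cannot collect a fractional observation.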


GLMs make it possible to perform univariate statistics on non-Normally distributed data; they are an alternative to fitting general linear models to transformed data, as in the following examples:

  • Comparing two groups (count data) – the code in this example can easily be adapted to more complicated situations, such as comparing more than two groups or testing the effect of multiple categorical variables on a response variable in “count” format.
  • Another example with count data – here we look at a regression-like situation with non-Normal data.
  • Binary data such as survival data or presence/absence: see this worked example by Gavin Simpson, where he also shows how to compute confidence intervals correctly.
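As a self-contained sketch of the two-group count comparison in the first bullet (simulated data, not taken from the linked notes):

```r
set.seed(42)
# 30 counts per group, with true means 4 and 7
counts <- c(rpois(30, lambda = 4), rpois(30, lambda = 7))
group  <- factor(rep(c("A", "B"), each = 30))

m <- glm(counts ~ group, family = poisson)  # Poisson GLM with log link
summary(m)$coefficients
exp(coef(m))  # back-transform: baseline mean count and the B-vs-A rate ratio
```

Adapting this to more groups or to multiple categorical predictors is just a matter of extending the model formula (e.g. counts ~ group + site).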

Further useful resources:

  • On the difference between using GLMs or using LMs on transformed data, see here and here.
  • On how to compute Confidence Intervals, see here and here.
  • On how to calculate the SE of a quantity on the original scale knowing its SE on the transformed scale, see here (delta method).
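A minimal illustration of the delta method mentioned in the last bullet, for a quantity estimated on the log scale (the general first-order rule is SE(g(x)) ≈ |g'(x)| · SE(x); the numbers below are invented for illustration):

```r
# Suppose a model fitted on the log scale gives an estimate and its SE:
theta_log <- 1.2   # estimate on the log scale
se_log    <- 0.1   # standard error on the log scale

# On the original scale theta = exp(theta_log), and since d/dx exp(x) = exp(x),
# the delta-method SE on the original scale is exp(theta_log) * se_log:
theta    <- exp(theta_log)
se_theta <- exp(theta_log) * se_log
```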


Mixed-effect models (MEMs) are useful for dealing with unbalanced study designs and/or non-independent data. In the context of MEMs, explanatory variables are divided into fixed effects and random effects. As a practical, if brutal, definition: a model’s fixed effects are the explanatory variables whose effects we want to quantify explicitly; its random effects are variables that are not the main focus of the study but that we want to account for because we expect them to explain some of the residual variability (more on fixed and random effects: here, here). We can think of random effects as sources of background noise that it is important to estimate in order to obtain a more accurate estimate of the fixed effects. Mixed-effect models let us specify explanatory variables as fixed or random, and model them accordingly.
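A classic illustration is the sleepstudy data set shipped with lme4 (reaction times of 18 subjects over successive days of sleep deprivation): the effect of Days is the fixed effect we want to quantify, while Subject is a random effect that soaks up between-subject variability:

```r
library(lme4)

# Random intercept and random slope for Days, varying by Subject
fit <- lmer(Reaction ~ Days + (Days | Subject), data = sleepstudy)

fixef(fit)  # population-level intercept and slope
ranef(fit)  # per-subject deviations from the population-level estimates
```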


A special case of data non-independence is represented by the phylogenetic relationship among species. Let’s say that we want to study the correlation between dietary habits and mean body weight in mammals. The body weight of different species may be affected by their dietary habits, but their phylogenetic relationship may also matter – closely related species may have similar mean body weight just because of their common evolutionary history. To study the correlation between dietary habits and mean body weight in mammals, one has to account for phylogenetic signal. One possibility is to use MEMs with nested random effects, such as:

MEM.model <- lme4::lmer(body.weight ~ dietary.pref + (1 + dietary.pref | family/genus/species),
                        data = mammals)
# "mammals" is an imaginary dataset. Each row would need to contain body.weight and
# dietary.pref, as well as the family/genus/species to which the observation belongs.

A more specific alternative is to integrate phylogenetic information into the analysis. This is something I have only dabbled in, so consider the following notes as a starting point (updated to 2021). Phylogenetic generalised least squares (PGLS) lets us inform the model about the autocorrelation between taxonomic units using existing phylogenetic trees. Here is an example where they look at the relationship between wing length and tarsus length among Geospiza finch species. This approach only allows one data point per leaf of the phylogenetic tree. Package phyr can fit generalised mixed-effect models that account for phylogenetic relatedness and can handle data sets in which every taxonomic unit is represented by multiple data points (see here and here).
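A rough PGLS sketch on simulated data, assuming packages ape and nlme are available (a random tree stands in for a real phylogeny, and corBrownian() builds the among-species covariance implied by Brownian motion on that tree):

```r
library(ape)   # phylogenetic trees and correlation structures
library(nlme)  # gls() accepts those correlation structures

set.seed(123)
tree <- rtree(20)  # a random 20-species tree, standing in for a real phylogeny
dat  <- data.frame(species = tree$tip.label,
                   tarsus  = rnorm(20))
dat$wing <- 2 * dat$tarsus + rnorm(20, sd = 0.5)  # simulated relationship

# PGLS: residuals are allowed to covary according to Brownian motion on the tree
pgls <- gls(wing ~ tarsus, data = dat,
            correlation = corBrownian(1, phy = tree, form = ~species))
coef(pgls)
```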


I used to be dismissive of GAMs, seeing them as glorified smoothing functions. In a way they are, but they can accomplish a lot: they can model and compare strongly non-linear and asymmetric patterns, they can decompose the relative contribution of continuous and categorical predictors to the observed trends, and so on. There are even ways to include random effects and to account for autocorrelation in GAMs. Extremely useful resources for getting started with GAMs are here, here, and Gavin Simpson’s answer to this question I asked on CrossValidated. For further detail, see this paper.
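A minimal sketch of fitting a GAM to simulated non-linear data with mgcv (a recommended package that ships with R):

```r
library(mgcv)

set.seed(1)
x <- seq(0, 1, length.out = 200)
y <- sin(2 * pi * x) + rnorm(200, sd = 0.3)  # strongly non-linear signal + noise

fit <- gam(y ~ s(x))     # s() fits a penalised spline smooth of x
summary(fit)$r.sq        # adjusted R-squared of the fitted model
plot(fit, shade = TRUE)  # the estimated smooth with a confidence band
```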



Multivariate statistics is used to analyse response variables in which every observation consists of a set of values.