---
title: "Checking model assumption - logistic regression models"
output: 
  rmarkdown::html_vignette:
    toc: true
    fig_width: 10.08
    fig_height: 6
tags: [r, performance, r2]
vignette: >
  \usepackage[utf8]{inputenc}
  %\VignetteIndexEntry{Checking model assumption - logistic regression models}
  %\VignetteEngine{knitr::rmarkdown}
editor_options: 
  chunk_output_type: console
---

```{r, include=FALSE}
library(knitr)
library(performance)
options(knitr.kable.NA = "")
knitr::opts_chunk$set(
  comment = ">",
  message = FALSE,
  warning = FALSE,
  out.width = "100%",
  dpi = 450
)
options(digits = 2)

# "tidyr" is required for `tidyr::uncount()` in the example below
pkgs <- c("see", "ggplot2", "tidyr", "datawizard", "parameters")
successfully_loaded <- vapply(pkgs, requireNamespace, FUN.VALUE = logical(1L), quietly = TRUE)
can_evaluate <- all(successfully_loaded)

if (can_evaluate) {
  knitr::opts_chunk$set(eval = TRUE)
  vapply(pkgs, require, FUN.VALUE = logical(1L), quietly = TRUE, character.only = TRUE)
} else {
  knitr::opts_chunk$set(eval = FALSE)
}
```

# Make sure your model inference is accurate!

Model diagnostics are crucial, because parameter estimates, p-values and confidence intervals depend on correct model assumptions as well as on the data. If model assumptions are violated, estimates can be statistically significant "even if the effect under study is null" (_Gelman and Greenland 2019_).

There are several challenges associated with model diagnostics. Different types of models require different checks. For instance, normally distributed residuals are an assumption of linear regression, but not an appropriate assumption for logistic regression. Furthermore, it is recommended to carry out visual inspections, i.e. to generate and inspect so-called diagnostic plots of model assumptions - formal statistical tests are often too strict and warn of violations of model assumptions, although everything may be fine within a certain tolerance range. But how should such diagnostic plots be interpreted? And if violations have been detected, how can they be fixed?

This vignette introduces the `check_model()` function of the **performance** package, shows how to use it for logistic regression models, and explains how the resulting diagnostic plots should be interpreted. Furthermore, recommendations are given on how to address possible violations of model assumptions.

Most plots seen here can also be generated by their dedicated functions, e.g.:

- Posterior predictive checks: `check_predictions()`
- Homogeneity of variance: `check_heteroskedasticity()`
- Normality of residuals: `check_normality()`
- Multicollinearity: `check_collinearity()`
- Influential observations: `check_outliers()`
- Binned residuals: `binned_residuals()`
- Check for overdispersion: `check_overdispersion()`
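
Each of these functions can also be called directly on a fitted model. As a minimal, self-contained sketch (using a toy model fitted to `mtcars`, not the model discussed below):

```{r}
library(performance)

# toy logistic model, only to illustrate calling a dedicated check function
m_toy <- glm(vs ~ mpg + hp, data = mtcars, family = binomial())

# run a single check and print its result
check_collinearity(m_toy)
```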

# Logistic regression models: Are all assumptions met?

We start with a simple example of a logistic regression model.

```{r}
data(Titanic)
d <- as.data.frame(Titanic)
d <- tidyr::uncount(d, Freq)
m1 <- glm(Survived ~ Class + Sex + Age, data = d, family = binomial())
```
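
The `tidyr::uncount()` step expands the aggregated `Titanic` table (one row per combination of class, sex, age and survival, with a `Freq` count) into one row per passenger. A quick sanity check on the expansion:

```{r}
library(tidyr)

data(Titanic)
d_check <- uncount(as.data.frame(Titanic), Freq)

# the expanded data should contain one row per passenger,
# i.e. as many rows as the total of the `Freq` column
nrow(d_check) == sum(as.data.frame(Titanic)$Freq)
```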

Before we go into the details of the diagnostic plots, let's first look at the summary table.

```{r eval=successfully_loaded["parameters"]}
library(parameters)
model_parameters(m1)
```

There is nothing suspicious so far. Now let's start with the model diagnostics. We use the `check_model()` function, which provides an overview of the most important and appropriate diagnostic plots for the model under investigation.

```{r eval=all(successfully_loaded[c("see", "ggplot2")]), fig.height=11}
library(performance)
check_model(m1)
```

Now let's take a closer look at each plot. To do so, we ask `check_model()` to return a single plot for each check, instead of arranging them in a grid, by setting `panel = FALSE`. `plot()` then returns a list of *ggplot* plots.

```{r eval=all(successfully_loaded[c("see", "ggplot2")])}
# return a list of single plots
diagnostic_plots <- plot(check_model(m1, panel = FALSE))
```

## Posterior predictive checks

The first plot is based on `check_predictions()`. Posterior predictive checks can be used to "look for systematic discrepancies between real and simulated data" (_Gelman et al. 2014, p. 169_). They help to assess whether the type of model (distributional family) fits the data well (_Gelman and Hill, 2007, p. 158_).

```{r eval=all(successfully_loaded[c("see", "ggplot2")])}
# posterior predictive checks
diagnostic_plots[[1]]
```
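
The same check can also be run on its own for the model `m1` from above; `check_predictions()` returns an object whose `plot()` method (provided by the **see** package) produces the same figure:

```{r eval=all(successfully_loaded[c("see", "ggplot2")])}
# run the posterior predictive check directly
pp_check_result <- check_predictions(m1)
plot(pp_check_result)
```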

In the case of logistic regression or count models, the plot shows by default _dots_ for the observed and simulated data, not _lines_ (as for linear models). The blue dots are data simulated from the model, i.e. what we would expect if the model were true and the distributional assumptions were met. The green dots represent the actual observed data of the response variable.

This plot looks good, because the green dots are inside the range of the blue error bars, and thus we would not assume any violations of model assumptions here.

### How to fix this?

If there are serious concerns that the model does not fit the data well, the best remedy is to use a different type (family) of regression model.

## Binned residuals

For logistic regression models, a classical residuals-vs-fitted plot is not very informative, because the response can only take the values 0 and 1. Instead, residuals are divided into bins based on the fitted values, and the average residual within each bin is plotted against the average fitted value of that bin (_Gelman and Hill 2007, p. 97_). If the model were true, about 95% of these averaged residuals would be expected to fall inside the error bounds.

```{r eval=all(successfully_loaded[c("see", "ggplot2")])}
# binned residuals
diagnostic_plots[[2]]
```

### How to fix this?

If a notable share of binned residuals falls outside the error bounds, this may indicate some unmodeled non-linearity; transforming predictors, or adding polynomial or spline terms, may help.

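The numeric results behind this plot can be obtained from the dedicated `binned_residuals()` function, applied to the model `m1` from above; printing the returned object summarizes how many residuals fall inside the error bounds:

```{r}
# run the binned residuals check directly
binned_residuals(m1)
```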
## Influential observations - outliers

Outliers can be defined as particularly influential observations, and this plot helps detect those outliers. Cook's distance (_Cook 1977_, _Cook & Weisberg 1982_) is used to define outliers, i.e. any point in this plot that falls outside of Cook's distance (the dashed lines) is considered an influential observation.

```{r eval=all(successfully_loaded[c("see", "ggplot2")])}
# influential observations - outliers
diagnostic_plots[[4]]
```

In our example, everything looks fine.

### How to fix this?

Dealing with outliers is not straightforward, as it is not recommended to automatically discard any observation that has been marked as "an outlier". Rather, your _domain knowledge_ must be involved in the decision whether to keep or omit influential observations. A helpful heuristic is to distinguish between error outliers, interesting outliers, and random outliers (_Leys et al. 2019_). _Error outliers_ are likely due to human error and should be corrected before data analysis. _Interesting outliers_ are not due to technical error and may be of theoretical interest; it might thus be relevant to investigate them further, even though they should be removed from the current analysis. _Random outliers_ are assumed to be due to chance alone and to belong to the correct distribution and should therefore be retained.

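Candidate observations can also be listed directly with `check_outliers()`, whose `method` argument controls how influential observations are flagged. A minimal sketch on a toy `mtcars` model (not the Titanic model from above):

```{r}
library(performance)

# toy model, only to illustrate the dedicated outlier check
m_toy <- lm(mpg ~ wt + hp, data = mtcars)

# flag influential observations based on Cook's distance
check_outliers(m_toy, method = "cook")
```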

## Multicollinearity

This plot checks for potential collinearity among predictors. In a nutshell, multicollinearity means that once you know the effect of one predictor, the value of knowing the other predictor is rather low. Multicollinearity might arise when a third, unobserved variable has a causal effect on each of the two predictors that are associated with the outcome. In such cases, the actual relationship that matters would be the association between the unobserved variable and the outcome.

Multicollinearity should not be confused with a raw strong correlation between predictors. What matters is the association between one or more predictor variables, *conditional on the other variables in the model*.

If multicollinearity is a problem, the model seems to suggest that the predictors in question are not reliably associated with the outcome (low estimates, high standard errors), although these predictors actually are strongly associated with the outcome, i.e. might indeed have a strong effect (_McElreath 2020, chapter 6.1_).

```{r eval=all(successfully_loaded[c("see", "ggplot2")])}
# multicollinearity
diagnostic_plots[[5]]
```

The variance inflation factor (VIF) indicates the magnitude of multicollinearity of model terms. The thresholds for low, moderate and high collinearity are VIF values less than 5, between 5 and 10, and larger than 10, respectively (_James et al. 2013_). Note that these thresholds, although commonly used, are also criticized for being too high. _Zuur et al. (2010)_ suggest using lower values, e.g. a VIF of 3 or larger may already no longer be considered "low".

Our model clearly suffers from multicollinearity, as all predictors have high VIF values.

### How to fix this?

Usually, predictors with (very) high VIF values should be removed from the model to fix multicollinearity. Some caution is needed for interaction terms: if interaction terms are included in a model, high VIF values are expected. This portion of multicollinearity among the component terms of an interaction is also called "inessential ill-conditioning", which leads to inflated VIF values that are typically seen for models with interaction terms (_Francoeur 2013_). In such cases, re-fit your model without the interaction terms and check this model for collinearity among predictors.

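The following minimal sketch (on a toy `mtcars` model, not the Titanic model) illustrates that strategy: check a model containing an interaction, then re-fit without the interaction and check again:

```{r}
library(performance)

# model with an interaction term - inflated VIF values are expected here
m_interaction <- glm(vs ~ mpg * hp, data = mtcars, family = binomial())
check_collinearity(m_interaction)

# re-fit without the interaction to judge collinearity among the main effects
m_main <- glm(vs ~ mpg + hp, data = mtcars, family = binomial())
check_collinearity(m_main)
```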

## Normality of residuals

In linear regression, residuals should be normally distributed. This can be checked using so-called Q-Q plots (quantile-quantile plots) to compare the shapes of distributions. This plot shows the quantiles of the studentized residuals against the theoretical quantiles of the normal distribution.

Usually, dots should fall along the green reference line. If there is some deviation (mostly at the tails), this indicates that the model doesn't predict the outcome well for the range that shows larger deviations from the reference line. In such cases, inferential statistics like the p-value or the coverage of confidence intervals can be inaccurate.

```{r eval=all(successfully_loaded[c("see", "ggplot2")])}
# normally distributed residuals
diagnostic_plots[[6]]
```

In our example, most data points are ok, except for some observations at the tails. Whether any action is needed to fix this can also depend on the results of the remaining diagnostic plots: if all other plots indicate no violation of assumptions, some deviation from normality, particularly at the tails, can be less critical.

### How to fix this?

Here are some remedies to fix non-normality of residuals, according to _Pek et al. 2018_.

1. For large sample sizes, the assumption of normality can be relaxed due to the central limit theorem - no action needed.

2. Calculating heteroscedasticity-consistent standard errors can help.

3. Bootstrapping is another alternative to resolve issues with non-normally distributed residuals. This can be easily done using the **parameters** package, e.g. `parameters::model_parameters(m1, bootstrap = TRUE)` or [`parameters::bootstrap_parameters()`](https://easystats.github.io/parameters/reference/bootstrap_parameters.html).

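As a sketch of remedy 2, heteroscedasticity-consistent standard errors can be requested via the `vcov` argument of `model_parameters()` (this assumes the **sandwich** package is installed; the toy linear model is for illustration only):

```{r}
library(parameters)

# toy linear model; "HC3" requests heteroscedasticity-consistent standard errors
m_toy <- lm(mpg ~ wt + hp, data = mtcars)
model_parameters(m_toy, vcov = "HC3")
```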
# References

Brooks ME, Kristensen K, Benthem KJ van, Magnusson A, Berg CW, Nielsen A, et al. glmmTMB Balances Speed and Flexibility Among Packages for Zero-inflated Generalized Linear Mixed Modeling. The R Journal. 2017;9: 378-400.

Cook RD. Detection of influential observation in linear regression. Technometrics. 1977;19(1): 15-18.

Cook RD and Weisberg S. Residuals and Influence in Regression. London: Chapman and Hall, 1982.

Francoeur RB. Could Sequential Residual Centering Resolve Low Sensitivity in Moderated Regression? Simulations and Cancer Symptom Clusters. Open Journal of Statistics. 2013;3(6): 24-44.

Gelman A, Carlin JB, Stern HS, Dunson DB, Vehtari A, and Rubin DB. Bayesian data analysis. (Third edition). CRC Press, 2014.

Gelman A, Greenland S. Are confidence intervals better termed "uncertainty intervals"? BMJ. 2019;l5381. doi:10.1136/bmj.l5381

Gelman A, and Hill J. Data analysis using regression and multilevel/hierarchical models. Cambridge; New York: Cambridge University Press, 2007.

James G, Witten D, Hastie T, and Tibshirani R (eds.). An introduction to statistical learning: with applications in R. New York: Springer, 2013.

Leys C, Delacre M, Mora YL, Lakens D, Ley C. How to Classify, Detect, and Manage Univariate and Multivariate Outliers, With Emphasis on Pre-Registration. International Review of Social Psychology, 2019.

McElreath R. Statistical rethinking: A Bayesian course with examples in R and Stan. 2nd edition. Chapman and Hall/CRC, 2020.

Pek J, Wong O, Wong ACM. How to Address Non-normality: A Taxonomy of Approaches, Reviewed, and Illustrated. Front Psychol (2018) 9:2104. doi: 10.3389/fpsyg.2018.02104

Zuur AF, Ieno EN, Elphick CS. A protocol for data exploration to avoid common statistical problems: Data exploration. Methods in Ecology and Evolution (2010) 1:3-14.
