---
title: "Approximate leave-future-out cross-validation for Bayesian time series models"
author: "Paul Bürkner, Jonah Gabry, Aki Vehtari"
date: "`r Sys.Date()`"
output:
html_vignette:
toc: yes
encoding: "UTF-8"
params:
EVAL: !r identical(Sys.getenv("NOT_CRAN"), "true")
---
<!--
%\VignetteEngine{knitr::rmarkdown}
%\VignetteIndexEntry{Approximate leave-future-out cross-validation for Bayesian time series models}
-->
```{r settings, child="children/SETTINGS-knitr.txt"}
```
```{r more-knitr-ops, include=FALSE}
knitr::opts_chunk$set(
cache = TRUE,
message = FALSE,
warning = FALSE
)
```
```{r, child="children/SEE-ONLINE.txt", eval = if (isTRUE(exists("params"))) !params$EVAL else TRUE}
```
## Introduction
One of the most common goals of a time series analysis is to use the observed
series to inform predictions for future observations. We will refer to this task
of predicting a sequence of $M$ future observations as $M$-step-ahead prediction
($M$-SAP). Fortunately, once we have fit a model and can sample from the
posterior predictive distribution, it is straightforward to generate predictions
as far into the future as we want. It is also straightforward to evaluate the
$M$-SAP performance of a time series model by comparing the predictions to the
observed sequence of $M$ future data points once they become available.
Unfortunately, we are often in the position of having to use a model to inform
decisions _before_ we can collect the future observations required for assessing
the predictive performance. If we have many competing models we may also need to
first decide which of the models (or which combination of the models) we should
rely on for predictions. In these situations the best we can do is to use
methods for approximating the expected predictive performance of our models
using only the observations of the time series we already have.
If there were no time dependence in the data or if the focus is to assess
the non-time-dependent part of the model, we could use methods like
leave-one-out cross-validation (LOO-CV). For a data set with $N$ observations,
we refit the model $N$ times, each time leaving out one of the $N$ observations
and assessing how well the model predicts the left-out observation. LOO-CV is
very expensive computationally in most realistic settings, but the Pareto
smoothed importance sampling (PSIS, Vehtari et al, 2017, 2019) algorithm provided by
the *loo* package allows for approximating exact LOO-CV with PSIS-LOO-CV.
PSIS-LOO-CV requires only a single fit of the full model and comes with
diagnostics for assessing the validity of the approximation.
With a time series we can do something similar to LOO-CV but, except in a few
cases, it does not make sense to leave out observations one at a time because
then we are allowing information from the future to influence predictions of the
past (i.e., times $t + 1, t+2, \ldots$ should not be used to predict for time
$t$). To apply the idea of cross-validation to the $M$-SAP case, instead of
leave-*one*-out cross-validation we need some form of leave-*future*-out
cross-validation (LFO-CV). As we will demonstrate in this case study, LFO-CV
does not refer to one particular prediction task but rather to various possible
cross-validation approaches that all involve some form of prediction for new
time series data. Like exact LOO-CV, exact LFO-CV requires refitting the model
many times to different subsets of the data, which is computationally very
costly for most
nontrivial examples, in particular for Bayesian analyses where refitting the
model means estimating a new posterior distribution rather than a point
estimate.
Although PSIS-LOO-CV provides an efficient approximation to exact LOO-CV, until
now there has not been an analogous approximation to exact LFO-CV that
drastically reduces the computational burden while also providing informative
diagnostics about the quality of the approximation. In this case study we
present PSIS-LFO-CV, an algorithm that typically requires refitting the
time-series model only a small number of times and makes LFO-CV tractable for
many more realistic applications than previously possible.
More details can be found in our paper about approximate LFO-CV
(Bürkner, Gabry, & Vehtari, 2020), which is available as a preprint on arXiv (https://arxiv.org/abs/1902.06281).
## $M$-step-ahead predictions
Assume we have a time series of observations
$y = (y_1, y_2, \ldots, y_N)$
and let $L$ be the _minimum_ number of observations from the series that
we will require before making predictions for future data. Depending on the
application and how informative the data is, it may not be possible to make
reasonable predictions for $y_{i+1}$ based on $(y_1, \dots, y_{i})$ until $i$ is
large enough so that we can learn enough about the time series to predict future
observations. Setting $L=10$, for example, means that we will only assess
predictive performance starting with observation $y_{11}$, so that we
always have at least 10 previous observations to condition on.
In order to assess $M$-SAP performance we would like to compute the
predictive densities
$$
p(y_{i+1:M} \,|\, y_{1:i}) =
p(y_{i+1}, \ldots, y_{i + M} \,|\, y_{1}, \ldots, y_{i})
$$
for each $i \in \{L, \ldots, N - M\}$. The quantities
$p(y_{i+1:M} \,|\, y_{1:i})$ can be computed with the help of the posterior
distribution $p(\theta \,|\, y_{1:i})$ of the parameters $\theta$ conditional on
only the first $i$ observations of the time-series:
$$
p(y_{i+1:M} \,| \, y_{1:i}) =
\int p(y_{i+1:M} \,| \, y_{1:i}, \theta) \, p(\theta\,|\,y_{1:i}) \,d\theta.
$$
Having obtained $S$ draws $(\theta_{1:i}^{(1)}, \ldots, \theta_{1:i}^{(S)})$
from the posterior distribution $p(\theta\,|\,y_{1:i})$, we can estimate
$p(y_{i+1:M} | y_{1:i})$ as
$$
p(y_{i+1:M} \,|\, y_{1:i}) \approx \frac{1}{S}\sum_{s=1}^S p(y_{i+1:M} \,|\, y_{1:i}, \theta_{1:i}^{(s)}).
$$
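To make the estimator concrete, below is a minimal sketch of how the log of
this Monte Carlo estimate can be computed stably on the log scale. It assumes
a hypothetical $S \times M$ matrix `loglik` whose entry $(s, j)$ contains
$\log p(y_{i+j} \,|\, y_{1:i}, \theta_{1:i}^{(s)})$:
```{r msap-estimate-sketch, eval=FALSE}
# hypothetical S x M matrix of pointwise log predictive densities:
# loglik[s, j] = log p(y_{i+j} | y_{1:i}, theta^(s))
log_joint <- rowSums(loglik)  # log p(y_{i+1:M} | y_{1:i}, theta^(s)) per draw
# log of the Monte Carlo average of exp(log_joint), avoiding overflow
max_lj <- max(log_joint)
log_pred_density <- max_lj + log(mean(exp(log_joint - max_lj)))
```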
## Approximate $M$-SAP using importance sampling {#approximate_MSAP}
Unfortunately, the math above makes use of the posterior distributions from many
different fits of the model to different subsets of the data. That is, to obtain
the predictive density $p(y_{i+1:M} \,|\, y_{1:i})$ requires fitting a model to
only the first $i$ data points, and we will need to do this for every value of
$i$ under consideration (all $i \in \{L, \ldots, N - M\}$).
To reduce the number of models that need to be fit for the purpose of obtaining
each of the densities $p(y_{i+1:M} \,|\, y_{1:i})$, we propose the following
algorithm. First, we fit the model using the first $L$ observations
of the time series and then perform a single exact $M$-step-ahead prediction
step for $p(y_{L+1:M} \,|\, y_{1:L})$.
Recall that $L$ is the minimum number of observations we have deemed
acceptable for making predictions (setting $L=0$ means the first data point will
be predicted only based on the prior). We define $i^\star = L$ as the current
point of refit. Next, starting with $i = i^\star + 1$, we
approximate each $p(y_{i+1:M} \,|\, y_{1:i})$ via
$$
p(y_{i+1:M} \,|\, y_{1:i}) \approx
\frac{ \sum_{s=1}^S w_i^{(s)}\, p(y_{i+1:M} \,|\, y_{1:i}, \theta^{(s)})}
{ \sum_{s=1}^S w_i^{(s)}},
$$
where $\theta^{(s)} = \theta^{(s)}_{1:i^\star}$ are draws from the
posterior distribution based on the first $i^\star$ observations
and $w_i^{(s)}$ are the PSIS weights obtained in two steps.
First, we compute the raw importance ratios
$$
r_i^{(s)} =
\frac{f_{1:i}(\theta^{(s)})}{f_{1:i^\star}(\theta^{(s)})}
\propto \prod_{j \in (i^\star + 1):i} p(y_j \,|\, y_{1:(j-1)}, \theta^{(s)}),
$$
and then stabilize them using PSIS. The
function $f_{1:i}$ denotes the posterior distribution based on the first $i$
observations, that is, $f_{1:i} = p(\theta \,|\, y_{1:i})$, with $f_{1:i^\star}$
defined analogously. The index set $(i^\star + 1):i$ indicates all observations
which are part of the data for the model $f_{1:i}$ whose predictive performance
we are trying to approximate but not for the actually fitted model
$f_{1:i^\star}$. The proportional statement arises from the fact that
we ignore the normalizing constants $p(y_{1:i})$ and $p(y_{1:i^\star})$
of the compared posteriors, which leads to a self-normalized variant of
PSIS (see Vehtari et al, 2017).
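In code, these two steps look roughly as follows. This is only a sketch under
stated assumptions: `loglik` is a hypothetical $S \times N$ matrix of pointwise
log-likelihoods evaluated using the posterior draws from the fit at $i^\star$,
and `i_star` marks that last point of refit (the full implementation appears
later in this case study):
```{r psis-weights-sketch, eval=FALSE}
library("loo")
# step 1: raw log importance ratios, obtained by summing the pointwise
# log-likelihoods of the observations (i_star + 1):i that are part of the
# target model's data but not of the actually fitted model's data
logratio <- rowSums(loglik[, (i_star + 1):i, drop = FALSE])
# step 2: stabilize the ratios with Pareto smoothed importance sampling
psis_obj <- psis(logratio)
k <- pareto_k_values(psis_obj)  # diagnostic: refit the model if k > tau
lw <- weights(psis_obj, normalize = TRUE)[, 1]  # smoothed, normalized log weights
```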
Continuing with the next observation, we gradually increase $i$ by $1$ (we move
forward in time) and repeat the process. At some observation $i$, the
variability of the importance ratios $r_i^{(s)}$ will become too large and
importance sampling will fail. We will refer to this particular value of $i$ as
$i^\star_1$. To identify $i^\star_1$, we check at which value of $i$ the
estimated shape parameter $k$ of the generalized Pareto distribution first
crosses a certain threshold $\tau$ (Vehtari et al, 2019). Only
then do we refit the model using the observations up to $i^\star_1$ and restart
the process from there by setting $\theta^{(s)} = \theta^{(s)}_{1:i^\star_1}$
and $i^\star = i^\star_1$ until the next refit.
In some cases we may only need to refit once and in other cases we will find a
value $i^\star_2$ that requires a second refitting, maybe an $i^\star_3$ that
requires a third refitting, and so on. We refit as many times as is required
(only when $k > \tau$) until we arrive at observation $i = N - M$.
For LOO, we recommend using a threshold of $\tau = 0.7$ (Vehtari et al, 2017, 2019),
and it turns out this is a reasonable threshold for LFO as well (Bürkner et al., 2020).
## Autoregressive models
Autoregressive (AR) models are some of the most commonly used time-series models.
An AR(p) model ---an autoregressive model of order $p$--- can be defined as
$$
y_i = \eta_i + \sum_{k = 1}^p \varphi_k y_{i - k} + \varepsilon_i,
$$
where $\eta_i$ is the linear predictor for the $i$th observation, $\varphi_k$ are
the autoregressive parameters and $\varepsilon_i$ are pairwise independent
errors, which are usually assumed to be normally distributed with equal variance
$\sigma^2$. The model implies a recursive formula that allows for computing the
right-hand side of the above equation for observation $i$ based on the values of
the equations for previous observations.
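As a minimal illustration of this recursion, an AR(2) process can be simulated
in a few lines of R (the parameter values below are hypothetical, not estimates
from any model in this case study):
```{r ar-sim-sketch, eval=FALSE}
set.seed(1)
n <- 200
varphi <- c(0.5, 0.3)  # hypothetical autoregressive parameters
sigma <- 1             # error standard deviation
eta <- 0               # constant linear predictor
y <- numeric(n)
for (i in 3:n) {
  # recursion: eta + varphi_1 * y_{i-1} + varphi_2 * y_{i-2} + noise
  y[i] <- eta + varphi[1] * y[i - 1] + varphi[2] * y[i - 2] + rnorm(1, 0, sigma)
}
```
The same process could also be generated with `stats::arima.sim`.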
## Case Study: Annual measurements of the level of Lake Huron
To illustrate the application of PSIS-LFO-CV for estimating expected $M$-SAP
performance, we will fit a model for 98 annual measurements of the water level
(in feet) of [Lake Huron](https://en.wikipedia.org/wiki/Lake_Huron) from the
years 1875--1972. This data set is found in the **datasets** R package, which is
installed automatically with **R**.
In addition to the **loo** package, for this analysis we will use the **brms**
interface to Stan to generate a Stan program and fit the model, and also the
**bayesplot** and **ggplot2** packages for plotting.
```{r pkgs, cache=FALSE}
library("loo")
library("brms")
library("bayesplot")
library("ggplot2")
color_scheme_set("brightblue")
theme_set(theme_default())
CHAINS <- 4
SEED <- 5838296
set.seed(SEED)
```
Before fitting a model, we will first put the data into a data frame and then
look at the time series.
```{r hurondata}
N <- length(LakeHuron)
df <- data.frame(
y = as.numeric(LakeHuron),
year = as.numeric(time(LakeHuron)),
time = 1:N
)
ggplot(df, aes(x = year, y = y)) +
geom_point(size = 1) +
labs(
y = "Water Level (ft)",
x = "Year",
title = "Water Level in Lake Huron (1875-1972)"
)
```
The above plot shows rather strong autocorrelation in the time series as well
as a trend toward lower water levels at later points in time.
We can specify an AR(4) model for these data using the **brms** package as
follows:
```{r fit, results = "hide"}
fit <- brm(
y ~ ar(time, p = 4),
data = df,
prior = prior(normal(0, 0.5), class = "ar"),
control = list(adapt_delta = 0.99),
seed = SEED,
chains = CHAINS
)
```
The model-implied predictions can be plotted along with the observed values,
which reveals a rather good fit to the data.
```{r plotpreds, cache = FALSE}
preds <- posterior_predict(fit)
preds <- cbind(
Estimate = colMeans(preds),
Q5 = apply(preds, 2, quantile, probs = 0.05),
Q95 = apply(preds, 2, quantile, probs = 0.95)
)
ggplot(cbind(df, preds), aes(x = year, y = Estimate)) +
geom_smooth(aes(ymin = Q5, ymax = Q95), stat = "identity", linewidth = 0.5) +
geom_point(aes(y = y)) +
labs(
y = "Water Level (ft)",
x = "Year",
title = "Water Level in Lake Huron (1875-1972)",
subtitle = "Mean (blue) and 90% predictive intervals (gray) vs. observed data (black)"
)
```
To allow for reasonable predictions of future values, we will require at least
$L = 20$ historical observations (20 years) to make predictions.
```{r setL}
L <- 20
```
We first perform approximate leave-one-out cross-validation (LOO-CV) for the
purpose of later comparison with exact and approximate LFO-CV for the 1-SAP
case.
```{r loo1sap, cache = FALSE}
loo_cv <- loo(log_lik(fit)[, (L + 1):N])
print(loo_cv)
```
## 1-step-ahead predictions leaving out all future values
The most basic version of $M$-SAP is 1-SAP, in which we predict only one step
ahead. In this case, $y_{i+1:M}$ simplifies to $y_{i+1}$ and the LFO-CV algorithm
becomes considerably simpler than for larger values of $M$.
### Exact 1-step-ahead predictions
Before we compute approximate LFO-CV using PSIS we will first compute
exact LFO-CV for the 1-SAP case so we can use it as a benchmark later.
The initial step for the exact computation is to calculate the log-predictive
densities by refitting the model many times:
```{r exact_loglik, results="hide"}
loglik_exact <- matrix(nrow = ndraws(fit), ncol = N)
for (i in L:(N - 1)) {
past <- 1:i
oos <- i + 1
df_past <- df[past, , drop = FALSE]
df_oos <- df[c(past, oos), , drop = FALSE]
fit_i <- update(fit, newdata = df_past, recompile = FALSE)
loglik_exact[, i + 1] <- log_lik(fit_i, newdata = df_oos, oos = oos)[, oos]
}
```
Then we compute the exact expected log predictive density (ELPD):
```{r helpers}
# some helper functions we'll use throughout
# more stable than log(sum(exp(x)))
log_sum_exp <- function(x) {
max_x <- max(x)
max_x + log(sum(exp(x - max_x)))
}
# more stable than log(mean(exp(x)))
log_mean_exp <- function(x) {
log_sum_exp(x) - log(length(x))
}
# compute log of raw importance ratios
# sums over observations *not* over posterior samples
sum_log_ratios <- function(loglik, ids = NULL) {
if (!is.null(ids)) loglik <- loglik[, ids, drop = FALSE]
rowSums(loglik)
}
# for printing comparisons later
rbind_print <- function(...) {
round(rbind(...), digits = 2)
}
```
```{r exact1sap, cache = FALSE}
exact_elpds_1sap <- apply(loglik_exact, 2, log_mean_exp)
exact_elpd_1sap <- c(ELPD = sum(exact_elpds_1sap[-(1:L)]))
rbind_print(
"LOO" = loo_cv$estimates["elpd_loo", "Estimate"],
"LFO" = exact_elpd_1sap
)
```
We see that the ELPD from LFO-CV for 1-step-ahead predictions is lower than the
ELPD estimate from LOO-CV, which should be expected since LOO-CV is making use
of more of the time series. That is, since the LFO-CV approach only uses
observations from before the left-out data point but LOO-CV uses _all_ data
points other than the left-out observation, we should expect to see the larger
ELPD from LOO-CV.
### Approximate 1-step-ahead predictions
We compute approximate 1-SAP with refit at observations where
the Pareto $k$ estimate exceeds the threshold of $0.7$.
```{r setkthresh}
k_thres <- 0.7
```
The code becomes a little more involved than for the exact LFO-CV.
Note that we can compute exact 1-SAP at the refitting points, which comes with
no additional computational costs since we had to refit the model anyway.
```{r refit_loglik, results="hide"}
approx_elpds_1sap <- rep(NA, N)
# initialize the process for i = L
past <- 1:L
oos <- L + 1
df_past <- df[past, , drop = FALSE]
df_oos <- df[c(past, oos), , drop = FALSE]
fit_past <- update(fit, newdata = df_past, recompile = FALSE)
loglik <- log_lik(fit_past, newdata = df_oos, oos = oos)
approx_elpds_1sap[L + 1] <- log_mean_exp(loglik[, oos])
# iterate over i > L
i_refit <- L
refits <- L
ks <- NULL
for (i in (L + 1):(N - 1)) {
past <- 1:i
oos <- i + 1
df_past <- df[past, , drop = FALSE]
df_oos <- df[c(past, oos), , drop = FALSE]
loglik <- log_lik(fit_past, newdata = df_oos, oos = oos)
logratio <- sum_log_ratios(loglik, (i_refit + 1):i)
psis_obj <- suppressWarnings(psis(logratio))
k <- pareto_k_values(psis_obj)
ks <- c(ks, k)
if (k > k_thres) {
# refit the model based on the first i observations
i_refit <- i
refits <- c(refits, i)
fit_past <- update(fit_past, newdata = df_past, recompile = FALSE)
loglik <- log_lik(fit_past, newdata = df_oos, oos = oos)
approx_elpds_1sap[i + 1] <- log_mean_exp(loglik[, oos])
} else {
lw <- weights(psis_obj, normalize = TRUE)[, 1]
approx_elpds_1sap[i + 1] <- log_sum_exp(lw + loglik[, oos])
}
}
```
We see that the final Pareto-$k$-estimates are mostly well below the threshold
and that we only needed to refit the model a few times:
```{r plot_ks}
plot_ks <- function(ks, ids, thres = k_thres) {
dat_ks <- data.frame(ks = ks, ids = ids)
ggplot(dat_ks, aes(x = ids, y = ks)) +
geom_point(aes(color = ks > thres), shape = 3, show.legend = FALSE) +
geom_hline(yintercept = thres, linetype = 2, color = "red2") +
scale_color_manual(values = c("cornflowerblue", "darkblue")) +
labs(x = "Data point", y = "Pareto k") +
ylim(-0.5, 1.5)
}
```
```{r refitsummary1sap, cache=FALSE}
cat("Using threshold ", k_thres,
", model was refit ", length(refits),
" times, at observations", refits)
plot_ks(ks, (L + 1):(N - 1))
```
The approximate 1-SAP ELPD is remarkably similar to the exact 1-SAP ELPD
computed above, which indicates that the approximate 1-SAP algorithm worked
well for the present data and model.
```{r lfosummary1sap, cache = FALSE}
approx_elpd_1sap <- sum(approx_elpds_1sap, na.rm = TRUE)
rbind_print(
"approx LFO" = approx_elpd_1sap,
"exact LFO" = exact_elpd_1sap
)
```
Plotting exact against approximate predictions, we see that no approximation
value deviates far from its exact counterpart, providing further evidence for
the good quality of our approximation.
```{r plot1sap, cache = FALSE}
dat_elpd <- data.frame(
approx_elpd = approx_elpds_1sap,
exact_elpd = exact_elpds_1sap
)
ggplot(dat_elpd, aes(x = approx_elpd, y = exact_elpd)) +
geom_abline(color = "gray30") +
geom_point(size = 2) +
labs(x = "Approximate ELPDs", y = "Exact ELPDs")
```
We can also look at the maximum difference and average difference between
the approximate and exact ELPD calculations, which also indicate a very close
approximation:
```{r diffs1sap, cache=FALSE}
max_diff <- with(dat_elpd, max(abs(approx_elpd - exact_elpd), na.rm = TRUE))
mean_diff <- with(dat_elpd, mean(abs(approx_elpd - exact_elpd), na.rm = TRUE))
rbind_print(
"Max diff" = round(max_diff, 2),
"Mean diff" = round(mean_diff, 3)
)
```
## $M$-step-ahead predictions leaving out all future values
To illustrate the application of $M$-SAP for $M > 1$, we next compute exact and
approximate LFO-CV for the 4-SAP case.
### Exact $M$-step-ahead predictions
The necessary steps are the same as for 1-SAP with the exception that the
log-density values of interest are now the sums of the log predictive densities
of four consecutive observations. Further, the stability of the PSIS
approximation actually stays the same for all $M$ as it only depends on the
number of observations we leave out, not on the number of observations we
predict.
```{r exact_loglikm, results="hide"}
M <- 4
loglikm <- matrix(nrow = ndraws(fit), ncol = N)
for (i in L:(N - M)) {
past <- 1:i
oos <- (i + 1):(i + M)
df_past <- df[past, , drop = FALSE]
df_oos <- df[c(past, oos), , drop = FALSE]
fit_past <- update(fit, newdata = df_past, recompile = FALSE)
loglik <- log_lik(fit_past, newdata = df_oos, oos = oos)
loglikm[, i + 1] <- rowSums(loglik[, oos])
}
```
```{r exact4sap, cache = FALSE}
exact_elpds_4sap <- apply(loglikm, 2, log_mean_exp)
(exact_elpd_4sap <- c(ELPD = sum(exact_elpds_4sap, na.rm = TRUE)))
```
### Approximate $M$-step-ahead predictions
Computing the approximate PSIS-LFO-CV for the 4-SAP case is a little bit more
involved than the approximate version for the 1-SAP case, although the
underlying principles remain the same.
```{r refit_loglikm, results="hide"}
approx_elpds_4sap <- rep(NA, N)
# initialize the process for i = L
past <- 1:L
oos <- (L + 1):(L + M)
df_past <- df[past, , drop = FALSE]
df_oos <- df[c(past, oos), , drop = FALSE]
fit_past <- update(fit, newdata = df_past, recompile = FALSE)
loglik <- log_lik(fit_past, newdata = df_oos, oos = oos)
loglikm <- rowSums(loglik[, oos])
approx_elpds_4sap[L + 1] <- log_mean_exp(loglikm)
# iterate over i > L
i_refit <- L
refits <- L
ks <- NULL
for (i in (L + 1):(N - M)) {
past <- 1:i
oos <- (i + 1):(i + M)
df_past <- df[past, , drop = FALSE]
df_oos <- df[c(past, oos), , drop = FALSE]
loglik <- log_lik(fit_past, newdata = df_oos, oos = oos)
logratio <- sum_log_ratios(loglik, (i_refit + 1):i)
psis_obj <- suppressWarnings(psis(logratio))
k <- pareto_k_values(psis_obj)
ks <- c(ks, k)
if (k > k_thres) {
# refit the model based on the first i observations
i_refit <- i
refits <- c(refits, i)
fit_past <- update(fit_past, newdata = df_past, recompile = FALSE)
loglik <- log_lik(fit_past, newdata = df_oos, oos = oos)
loglikm <- rowSums(loglik[, oos])
approx_elpds_4sap[i + 1] <- log_mean_exp(loglikm)
} else {
lw <- weights(psis_obj, normalize = TRUE)[, 1]
loglikm <- rowSums(loglik[, oos])
approx_elpds_4sap[i + 1] <- log_sum_exp(lw + loglikm)
}
}
```
Again, we see that the final Pareto-$k$-estimates are mostly well below the
threshold and that we only needed to refit the model a few times:
```{r refitsummary4sap, cache = FALSE}
cat("Using threshold ", k_thres,
", model was refit ", length(refits),
" times, at observations", refits)
plot_ks(ks, (L + 1):(N - M))
```
The approximate ELPD computed for the 4-SAP case is not as close to its exact
counterpart as in the 1-SAP case. In general, the larger $M$, the larger the
variation of the approximate ELPD around the exact ELPD. It turns out that the
ELPD estimates of AR models with $M>1$ show particularly high variation because
their predictions depend on other predicted values. In Bürkner et al. (2020) we
provide further explanation and simulations for these cases.
```{r lfosummary4sap, cache = FALSE}
approx_elpd_4sap <- sum(approx_elpds_4sap, na.rm = TRUE)
rbind_print(
"Approx LFO" = approx_elpd_4sap,
"Exact LFO" = exact_elpd_4sap
)
```
Plotting exact against approximate pointwise predictions confirms that, for a
few specific data points, the approximate predictions underestimate the exact
predictions.
```{r plot4sap, cache = FALSE}
dat_elpd_4sap <- data.frame(
approx_elpd = approx_elpds_4sap,
exact_elpd = exact_elpds_4sap
)
ggplot(dat_elpd_4sap, aes(x = approx_elpd, y = exact_elpd)) +
geom_abline(color = "gray30") +
geom_point(size = 2) +
labs(x = "Approximate ELPDs", y = "Exact ELPDs")
```
## Conclusion
In this case study we have shown how to carry out exact and approximate
leave-future-out cross-validation for $M$-step-ahead prediction tasks. For the
data and model used in our example, the PSIS-LFO-CV algorithm provides reasonably
stable and accurate results despite not requiring us to refit the model nearly as
many times. For more details on approximate LFO-CV, we refer to Bürkner et al. (2020).
<br />
## References
Bürkner P. C., Gabry J., & Vehtari A. (2020). Approximate leave-future-out cross-validation for time series models. *Journal of Statistical Computation and Simulation*, 90(14), 2499--2523. doi:10.1080/00949655.2020.1783262.
[Online](https://www.tandfonline.com/doi/full/10.1080/00949655.2020.1783262). [arXiv preprint](https://arxiv.org/abs/1902.06281).
Vehtari A., Gelman A., & Gabry J. (2017). Practical Bayesian model evaluation using leave-one-out cross-validation and WAIC. *Statistics and Computing*, 27(5), 1413--1432. doi:10.1007/s11222-016-9696-4. [Online](https://link.springer.com/article/10.1007/s11222-016-9696-4). [arXiv preprint arXiv:1507.04544](https://arxiv.org/abs/1507.04544).
Vehtari, A., Simpson, D., Gelman, A., Yao, Y., and Gabry, J. (2019). Pareto smoothed importance sampling. [arXiv preprint arXiv:1507.02646](https://arxiv.org/abs/1507.02646).
<br />
## Appendix
### Appendix: Session information
```{r sessioninfo}
sessionInfo()
```
### Appendix: Licenses
* Code © 2018, Paul Bürkner, Jonah Gabry, Aki Vehtari (licensed under BSD-3).
* Text © 2018, Paul Bürkner, Jonah Gabry, Aki Vehtari (licensed under CC-BY-NC 4.0).