```{r dlm-setup, include=FALSE, purl=FALSE}
knitr::opts_knit$set(unnamed.chunk.label = "dlm-")
knitr::opts_chunk$set(echo = TRUE, comment=NA, cache=TRUE, tidy.opts=list(width.cutoff=60), tidy=TRUE, fig.align='center', out.width='80%')
```
# Dynamic linear models {#chap-dlm-dynamic-linear-models}
\chaptermark{Dynamic linear models}
Dynamic linear models (DLMs) are a type of linear regression model, wherein the parameters are treated as time-varying rather than static. DLMs are used commonly in econometrics, but have received less attention in the ecological literature [cf. @Lamonetal1998; @ScheuerellWilliams2005]. Our treatment of DLMs is rather cursory---we direct the reader to the excellent textbooks by @Poleetal1994 and @Petrisetal2009 for more in-depth treatments of DLMs. The former focuses on Bayesian estimation whereas the latter addresses both likelihood-based and Bayesian estimation methods.
A script with all the R code in the chapter can be downloaded [here](./Rcode/DLM.R). The Rmd for this chapter can be downloaded [here](./Rmds/DLM.Rmd).
### Data {-}
Most of the data used in the chapter are from the **MARSS** package. Install the package, if needed, and load:
```{r dlm-loadpackages, warning=FALSE, message=FALSE}
library(MARSS)
```
The problem set uses an additional data set on spawners and recruits (`KvichakSockeye`) in the **atsalibrary** package.
## Overview {#sec-dlm-overview}
We begin our description of DLMs with a *static* regression model, wherein the $i^{th}$ observation (response variable) is a linear function of an intercept, predictor variable(s), and a random error term. For example, if we had one predictor variable ($f$), we could write the model as
\begin{equation}
(\#eq:dlm-lm)
y_i = \alpha + \beta f_i + v_i,
\end{equation}
where $\alpha$ is the intercept, $\beta$ is the regression slope, $f_i$ is the predictor variable matched to the $i^{th}$ observation ($y_i$), and $v_i \sim \text{N}(0,r)$. It is important to note here that there is no implicit ordering of the index $i$. That is, we could shuffle any/all of the $(y_i, f_i)$ pairs in our dataset with no effect on our ability to estimate the model parameters.
We can write Equation \@ref(eq:dlm-lm) using matrix notation, as
\begin{align}
(\#eq:dlm-lmVec)
y_i &= \begin{bmatrix}1&f_i\end{bmatrix}
\begin{bmatrix}\alpha\\ \beta\end{bmatrix} + v_i \nonumber\\
&= \mathbf{F}_i^{\top}\boldsymbol{\theta} + v_i,
\end{align}
where $\mathbf{F}_i^{\top} = \begin{bmatrix}1&f_i\end{bmatrix}$ and $\boldsymbol{\theta} = \begin{bmatrix}\alpha\\ \beta\end{bmatrix}$.
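As a quick sanity check, we can simulate data from this static model and recover $\alpha$ and $\beta$ with `lm()`. The parameter values below are illustrative, not from any data set in the chapter.

```{r dlm-static-lm-demo}
## simulate the static regression y_i = alpha + beta * f_i + v_i
set.seed(123)
alpha <- 1; beta <- 0.5; r <- 0.2
f <- rnorm(30)
y <- alpha + beta * f + rnorm(30, sd = sqrt(r))
## the order of the (y_i, f_i) pairs does not matter for estimation
coef(lm(y ~ f))
```

Shuffling the pairs with, e.g., `ord <- sample(30); lm(y[ord] ~ f[ord])` returns the same estimates, illustrating that the index $i$ carries no temporal information.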
In a DLM, however, the regression parameters are _dynamic_ in that they "evolve" over time. For a single observation at time $t$, we can write
\begin{equation}
(\#eq:dlm-dlm_1)
y_t = \mathbf{F}_{t}^{\top}\boldsymbol{\theta}_t + v_t,
\end{equation}
where $\mathbf{F}_t$ is a column vector of predictor variables (covariates) at time $t$, $\boldsymbol{\theta}_t$ is a column vector of regression parameters at time $t$, and $v_{t}\sim\N(0,r)$. This formulation presents two features that distinguish it from Equation \@ref(eq:dlm-lmVec). First, the observed data are explicitly time ordered (i.e., $\mathbf{y}=\lbrace{y_1,y_2,y_3,\dots,y_T}\rbrace$), which means we expect them to contain implicit information. Second, the relationship between the observed datum and the predictor variables is unique at every time $t$ (i.e., $\boldsymbol{\theta}=\lbrace{\boldsymbol{\theta}_1,\boldsymbol{\theta}_2,\boldsymbol{\theta}_3,\dots,\boldsymbol{\theta}_T}\rbrace$).
However, closer examination of Equation \@ref(eq:dlm-dlm_1) reveals an apparent complication for parameter estimation. With only one datum at each time step $t$, we could, at best, estimate only one regression parameter, and even then, the 1:1 correspondence between data and parameters would preclude any estimation of parameter uncertainty. To address this shortcoming, we return to the time ordering of model parameters. Rather than assume the regression parameters are independent from one time step to another, we instead model them as an autoregressive process where
\begin{equation}
(\#eq:dlm-dlm2)
\boldsymbol{\theta}_t = \mathbf{G}_t\boldsymbol{\theta}_{t-1} + \mathbf{w}_t,
\end{equation}
where $\mathbf{G}_t$ is the parameter "evolution" matrix and $\mathbf{w}_t$ is a vector of process errors, such that $\mathbf{w}_t \sim \MVN(\mathbf{0},\mathbf{Q})$. The elements of $\mathbf{G}_t$ may be known and fixed *a priori*, or unknown and estimated from the data. Although we could allow $\mathbf{G}_t$ to be time-varying, we will typically assume that it is time invariant or assume $\mathbf{G}_t$ is an $m \times m$ identity matrix $\mathbf{I}_m$.
The idea is that the evolution matrix $\mathbf{G}_t$ deterministically maps the parameter space from one time step to the next, so the parameters at time $t$ are temporally related to those before and after. However, the process is corrupted by stochastic error, which amounts to a degradation of information over time. If the diagonal elements of $\mathbf{Q}$ are relatively large, then the parameters can vary widely from $t$ to $t+1$. If $\mathbf{Q} = \mathbf{0}$, then $\boldsymbol{\theta}_1=\boldsymbol{\theta}_2=\cdots=\boldsymbol{\theta}_T$ and we are back to the static model in Equation \@ref(eq:dlm-lm).
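To build intuition for how $\mathbf{Q}$ controls the parameter dynamics, here is a minimal simulation of Equation \@ref(eq:dlm-dlm2) with $\mathbf{G}_t = \mathbf{I}$; the variances and starting values are illustrative.

```{r dlm-theta-rw-demo}
## theta_t = theta_{t-1} + w_t with diagonal Q (illustrative values)
set.seed(42)
TT <- 50
Q_diag <- c(0.02, 0.1)          ## process variances for the two parameters
theta <- matrix(0, nrow = 2, ncol = TT)
theta[, 1] <- c(1, 0.5)         ## initial intercept and slope
for (t in 2:TT) {
  theta[, t] <- theta[, t - 1] + rnorm(2, mean = 0, sd = sqrt(Q_diag))
}
## with Q = 0 every column would equal theta[, 1]: the static model
matplot(t(theta), type = "l", lty = 1, xlab = "t",
        ylab = expression(theta[t]))
```

The parameter with the larger process variance (0.1) wanders much farther from its starting value than the one with the smaller variance (0.02).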
## DLM in state-space form {#sec-dlm-state-space-form}
A DLM is a state-space model and can be written in MARSS form:
\begin{equation}
(\#eq:dlm-ssform)
y_t = \mathbf{F}^{\top}_t \boldsymbol{\theta}_t + e_t \\
\boldsymbol{\theta}_t = \mathbf{G} \boldsymbol{\theta}_{t-1} + \mathbf{w}_t \\
\Downarrow \\
y_t = \mathbf{Z}_t \mathbf{x}_t + v_t \\
\mathbf{x}_t = \mathbf{B} \mathbf{x}_{t-1} + \mathbf{w}_t
\end{equation}
Note that DLMs include predictor variables (covariates) in the observation equation quite differently from other forms of MARSS models. In a DLM, $\mathbf{Z}_t$ is a matrix of predictor variables and $\mathbf{x}_t$ contains the time-evolving regression parameters.
\begin{equation}
y_t = \boxed{\mathbf{Z}_t \mathbf{x}_t} + v_t.
\end{equation}
In many other MARSS models, $\mathbf{d}_t$ is a time-varying column vector of covariates and $\mathbf{D}$ is the matrix of covariate-effect parameters.
\begin{equation}
y_t = \mathbf{Z}_t \mathbf{x}_t + \boxed{\mathbf{D} \mathbf{d}_t} +v_t.
\end{equation}
## Stochastic level models {#sec-dlm-simple-examples}
The simplest DLM is a stochastic level model, where the level is a random walk without drift, and this level is observed with error. We will write it first using regression notation where the intercept is $\alpha$, and then in MARSS notation. In the latter, $\alpha_t=x_t$.
\begin{equation}
(\#eq:dlm-stoch-level)
y_t = \alpha_t + e_t \\
\alpha_t = \alpha_{t-1} + w_t \\
\Downarrow \\
y_t = x_t + v_t \\
x_t = x_{t-1} + w_t
\end{equation}
We can use this model for the Nile River flow data and fit it using `MARSS()`.
```{r dlm-nile-fit}
## load Nile flow data
data(Nile, package = "datasets")
## define model list
mod_list <- list(B = "identity", U = "zero", Q = matrix("q"),
Z = "identity", A = matrix("a"), R = matrix("r"))
## fit the model with MARSS
fit <- MARSS(matrix(Nile, nrow = 1), mod_list)
```
```{r dlm-nile-fit-plot, echo=FALSE}
plot.ts(Nile, las = 1, lwd = 2,
xlab = "Year", ylab = "Flow of the River Nile")
lines(seq(start(Nile)[1], end(Nile)[1]),
lwd = 2, t(fit$states), col = "blue")
lines(seq(start(Nile)[1], end(Nile)[1]), t(fit$states + 2*fit$states.se),
lwd = 2, lty = "dashed", col = "blue")
lines(seq(start(Nile)[1], end(Nile)[1]), t(fit$states - 2*fit$states.se),
lwd = 2, lty = "dashed", col = "blue")
```
### Stochastic level with drift
We can add a drift term to the level model to allow the level to tend upward or downward with a deterministic rate $\eta$. This is a random walk with bias.
\begin{equation}
(\#eq:dlm-stoch-level-drift)
y_t = \alpha_t + e_t \\
\alpha_t = \alpha_{t-1} + \eta + w_t \\
\Downarrow \\
y_t = x_t + v_t \\
x_t = x_{t-1} + u + w_t
\end{equation}
We can also allow the drift term $\eta$ to evolve over time along with the level. In this case, $\eta$ is modeled as a random walk along with $\alpha$. This model is
\begin{equation}
(\#eq:dlm-stoch-level-drift-2)
y_t = \alpha_t + e_t \\
\alpha_t = \alpha_{t-1} + \eta_{t-1} + w_{\alpha,t} \\
\eta_t = \eta_{t-1} + w_{\eta,t}
\end{equation}
Equation \@ref(eq:dlm-stoch-level-drift-2) can be written in matrix form as:
\begin{equation}
(\#eq:dlm-stoch-level-drift-3)
y_t = \begin{bmatrix}1&0\end{bmatrix}\begin{bmatrix}
\alpha \\
\eta
\end{bmatrix}_t + v_t \\
\begin{bmatrix}
\alpha \\
\eta
\end{bmatrix}_t = \begin{bmatrix}
1 & 1 \\
0 & 1
\end{bmatrix}\begin{bmatrix}
\alpha \\
\eta
\end{bmatrix}_{t-1} + \begin{bmatrix}
w_{\alpha} \\
w_{\eta}
\end{bmatrix}_t
\end{equation}
Equation \@ref(eq:dlm-stoch-level-drift-3) is a MARSS model.
\begin{equation}
y_t = \mathbf{Z}\mathbf{x}_t + v_t \\
\mathbf{x}_t = \mathbf{B}\mathbf{x}_{t-1} + \mathbf{w}_t
\end{equation}
where $\mathbf{B}=\begin{bmatrix} 1 & 1 \\ 0 & 1\end{bmatrix}$, $\mathbf{x}=\begin{bmatrix}\alpha \\ \eta\end{bmatrix}$ and $\mathbf{Z}=\begin{bmatrix}1&0\end{bmatrix}$.
See Section \@ref(sec-uss-examples-using-the-nile-river-data) for more discussion of stochastic level models and Section \@ref(sec-uss-the-structts-function) to see how to fit this model with the `StructTS()` function in the **stats** package.
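To see how $\mathbf{B}$ propagates both the level and the drift, we can simulate Equation \@ref(eq:dlm-stoch-level-drift-3) directly in base R; all numeric values below are illustrative.

```{r dlm-level-drift-sim}
## simulate a stochastic level with stochastic drift
set.seed(1)
TT <- 40
B <- matrix(c(1, 0, 1, 1), 2, 2)     ## [1 1; 0 1]: adds drift to level
Z <- matrix(c(1, 0), 1, 2)           ## observe the level only
x <- matrix(0, 2, TT)
x[, 1] <- c(100, 0.5)                ## initial level and drift
w_sd <- c(1, 0.05)                   ## process SDs for level and drift
for (t in 2:TT) x[, t] <- B %*% x[, t - 1] + rnorm(2, 0, w_sd)
y <- as.vector(Z %*% x) + rnorm(TT, 0, 2)  ## add observation error
```

This model could then be fit with `MARSS()` by passing `B` and `Z` in the model list, analogous to the Nile example above.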
## Stochastic regression model {#sec-dlm-regression}
The stochastic level models in Section \@ref(sec-dlm-simple-examples) do not have predictor variables (covariates). Let's add one predictor variable $f_t$ and write a simple DLM where the intercept $\alpha$ and slope $\beta$ are stochastic. We will specify that $\alpha$ and $\beta$ evolve according to a simple random walk. Normally $x$ is used for the predictor variables in a regression model, but we will avoid that since we are using $x$ for the state equation in a state-space model. This model is
\begin{equation}
(\#eq:dlm-stoch-regression-1)
y_t = \alpha_t + \beta_t f_t + v_t \\
\alpha_t = \alpha_{t-1} + w_{\alpha,t} \\
\beta_t = \beta_{t-1} + w_{\beta,t}
\end{equation}
Written in matrix form, the model is
\begin{equation}
(\#eq:dlm-stoch-regression-2)
y_t = \begin{bmatrix}
1 &
f_t
\end{bmatrix}\begin{bmatrix}
\alpha \\
\beta
\end{bmatrix}_t + v_t \\
\begin{bmatrix}
\alpha \\
\beta
\end{bmatrix}_t =
\begin{bmatrix}
\alpha \\
\beta
\end{bmatrix}_{t-1} +
\begin{bmatrix}
w_{\alpha} \\
w_{\beta}
\end{bmatrix}_t
\end{equation}
Equation \@ref(eq:dlm-stoch-regression-2) is a MARSS model:
\begin{equation}
y_t = \mathbf{Z}\mathbf{x}_t + v_t \\
\mathbf{x}_t = \mathbf{x}_{t-1} + \mathbf{w}_t
\end{equation}
where $\mathbf{x}=\begin{bmatrix}\alpha \\ \beta\end{bmatrix}$ and $\mathbf{Z}=\begin{bmatrix}1&f_t\end{bmatrix}$.
## DLM with seasonal effect {#sec-dlm-with-season}
Let's add a simple fixed quarter effect to the regression model:
\begin{equation}
(\#eq:dlm-seasonal-effect)
y_t = \alpha_t + \beta_t f_t + \gamma_{qtr} + e_t \\
\gamma_{qtr} =
\begin{cases}
\gamma_{1} & \text{if } qtr = 1 \\
\gamma_{2} & \text{if } qtr = 2 \\
\gamma_{3} & \text{if } qtr = 3 \\
\gamma_{4} & \text{if } qtr = 4
\end{cases}
\end{equation}
We can write Equation \@ref(eq:dlm-seasonal-effect) in matrix form. In our model for $\gamma$, we will set the variance to 0 so that the $\gamma$ does not change with time.
\begin{equation}
(\#eq:dlm-seasonal-effect-2)
y_t =
\begin{bmatrix}
1 & f_t & 1
\end{bmatrix}
\begin{bmatrix}
\alpha \\ \beta \\ \gamma_{qtr}
\end{bmatrix}_t
+ e_t \\
\begin{bmatrix}
\alpha \\ \beta \\ \gamma_{qtr}
\end{bmatrix}_t =
\begin{bmatrix}
\alpha \\ \beta \\ \gamma_{qtr}
\end{bmatrix}_{t-1}+
\begin{bmatrix}
w_{\alpha} \\ w_{\beta} \\ 0
\end{bmatrix}_t\\
\Downarrow \\
y_t = \mathbf{Z}_{t}\mathbf{x}_{t}+v_t \\
\mathbf{x}_{t} = \mathbf{x}_{t-1}+\mathbf{w}_{t}
\end{equation}
How do we select the right quarterly effect? Let's separate out the quarterly effects and add them to $\mathbf{x}$. We could then select the right $\gamma$ using 0s and 1s in the $\mathbf{Z}_t$ matrix. For example, if $t$ is in quarter 1, our model would be
\begin{equation}
(\#eq:dlm-seasonal-effect-3)
y_t =
\begin{bmatrix}
1 & f_t & 1 & 0 & 0 & 0
\end{bmatrix}
\begin{bmatrix}
\alpha_t \\ \beta_t \\ \gamma_1 \\ \gamma_2 \\ \gamma_3 \\ \gamma_4
\end{bmatrix} \\
\end{equation}
While if $t$ is in quarter 2, the model is
\begin{equation}
(\#eq:dlm-seasonal-effect-4)
y_t =
\begin{bmatrix}
1 & f_t & 0 & 1 & 0 & 0
\end{bmatrix}
\begin{bmatrix}
\alpha_t \\ \beta_t \\ \gamma_1 \\ \gamma_2 \\ \gamma_3 \\ \gamma_4
\end{bmatrix} \\
\end{equation}
This would work, but we would have to have a different $\mathbf{Z}_t$ matrix for each quarter, and it might get cumbersome to keep track of the 0s and 1s. If we wanted the $\gamma$ to evolve with time, we might need to do this. However, if the $\gamma$ are fixed, i.e., the quarterly effect does not change over time, a less cumbersome approach is possible.
We could instead keep the $\mathbf{Z}_t$ matrix the same, but reorder the $\gamma_i$ within $\mathbf{x}$. If $t$ is in quarter 1,
\begin{equation}
(\#eq:dlm-seasonal-effect-5)
y_t =
\begin{bmatrix}
1 & f_t & 1 & 0 & 0 & 0
\end{bmatrix}
\begin{bmatrix}
\alpha_t \\ \beta_t \\ \gamma_1 \\ \gamma_2 \\ \gamma_3 \\ \gamma_4
\end{bmatrix} \\
\end{equation}
While if $t$ is in quarter 2,
\begin{equation}
(\#eq:dlm-seasonal-effect-6)
y_t =
\begin{bmatrix}
1 & f_t & 1 & 0 & 0 & 0
\end{bmatrix}
\begin{bmatrix}
\alpha_t \\ \beta_t \\ \gamma_2 \\ \gamma_3 \\ \gamma_4 \\ \gamma_1
\end{bmatrix} \\
\end{equation}
We can use a non-diagonal $\mathbf{G}$ to shift the correct quarter effect within $\mathbf{x}$.
$$
\mathbf{G} =
\begin{bmatrix}
1 & 0 & 0 & 0 & 0 & 0 \\
0 & 1 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 1 & 0 & 0 \\
0 & 0 & 0 & 0 & 1 & 0 \\
0 & 0 & 0 & 0 & 0 & 1 \\
0 & 0 & 1 & 0 & 0 & 0
\end{bmatrix}
$$
With this $\mathbf{G}$, the $\gamma$ rotate within $\mathbf{x}$ with each time step. If $t$ is in quarter 1, then $t+1$ is in quarter 2, and we want $\gamma_2$ to be in the 3rd row.
\begin{equation}
(\#eq:dlm-seasonal-effect-7)
\begin{bmatrix} \alpha \\ \beta \\ \gamma_2 \\ \gamma_3 \\ \gamma_4 \\ \gamma_1
\end{bmatrix}_{t+1} =
\begin{bmatrix}
1 & 0 & 0 & 0 & 0 & 0 \\
0 & 1 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 1 & 0 & 0 \\
0 & 0 & 0 & 0 & 1 & 0 \\
0 & 0 & 0 & 0 & 0 & 1 \\
0 & 0 & 1 & 0 & 0 & 0
\end{bmatrix}
\begin{bmatrix}
\alpha \\ \beta \\ \gamma_1 \\ \gamma_2 \\ \gamma_3 \\ \gamma_4
\end{bmatrix}_{t} +
\begin{bmatrix}
w_{\alpha} \\ w_{\beta} \\ 0 \\ 0 \\ 0 \\ 0
\end{bmatrix}_{t+1}
\end{equation}
At $t+2$, we are in quarter 3 and $\gamma_3$ will be in row 3.
\begin{equation}
(\#eq:dlm-seasonal-effect-8)
\begin{bmatrix} \alpha \\ \beta \\ \gamma_3 \\ \gamma_4 \\ \gamma_1 \\ \gamma_2
\end{bmatrix}_{t+2} =
\begin{bmatrix}
1 & 0 & 0 & 0 & 0 & 0 \\
0 & 1 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 1 & 0 & 0 \\
0 & 0 & 0 & 0 & 1 & 0 \\
0 & 0 & 0 & 0 & 0 & 1 \\
0 & 0 & 1 & 0 & 0 & 0
\end{bmatrix}
\begin{bmatrix}
\alpha \\ \beta \\ \gamma_2 \\ \gamma_3 \\ \gamma_4 \\ \gamma_1
\end{bmatrix}_{t+1} +
\begin{bmatrix}
w_{\alpha} \\ w_{\beta} \\ 0 \\ 0 \\ 0 \\ 0
\end{bmatrix}_{t+2}
\end{equation}
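We can verify the rotation numerically. This short sketch builds the non-diagonal $\mathbf{G}$ above and confirms that one application moves the next quarter's effect into row 3 ($\alpha$ and $\beta$ are set to 0 for clarity; the $\gamma$ values are illustrative).

```{r dlm-season-rotate-demo}
## build the 6x6 evolution matrix that rotates the quarter effects
m <- 6
G <- matrix(0, m, m)
G[1, 1] <- G[2, 2] <- 1                 ## alpha and beta persist
G[3, 4] <- G[4, 5] <- G[5, 6] <- 1      ## shift the gamma's up one row
G[6, 3] <- 1                            ## departing quarter wraps to the bottom
x <- c(0, 0, 10, 20, 30, 40)            ## gamma_1..gamma_4 in rows 3-6
G %*% x                                 ## rows 3-6 are now 20, 30, 40, 10
```

Applying `G` four times returns `x` to its original order, matching the four-quarter cycle.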
## Analysis of salmon survival {#sec-dlm-salmon-example}
Let's see an example of a DLM used to analyze real data from the literature. @ScheuerellWilliams2005 used a DLM to examine the relationship between marine survival of Chinook salmon and an index of ocean upwelling strength along the west coast of the USA. Upwelling brings cool, nutrient-rich waters from the deep ocean to shallower coastal areas. Scheuerell \& Williams hypothesized that stronger upwelling in April should create better growing conditions for phytoplankton, which would then translate into more zooplankton. In turn, juvenile salmon ("smolts") entering the ocean in May and June should find better foraging opportunities. Thus, for smolts entering the ocean in year $t$,
\begin{equation}
(\#eq:dlm-dlmSW1)
survival_t = \alpha_t + \beta_t f_t + v_t \text{ with } v_{t}\sim\N(0,r),
\end{equation}
and $f_t$ is the coastal upwelling index (cubic meters of seawater per second per 100 m of coastline) for the month of April in year $t$.
Both the intercept and slope are time varying, so
\begin{align}
(\#eq:dlm-dlmSW2)
\alpha_t &= \alpha_{t-1} + w_{\alpha,t} \text{ with } w_{\alpha,t} \sim \N(0,q_{\alpha})\\
\beta_t &= \beta_{t-1} + w_{\beta,t} \text{ with } w_{\beta,t} \sim \N(0,q_{\beta}).
\end{align}
If we define $\boldsymbol{\theta}_t = \begin{bmatrix}\alpha \\ \beta\end{bmatrix}_t$, $\mathbf{G}_t = \II$, $\mathbf{w}_t = \begin{bmatrix} w_\alpha \\ w_\beta\end{bmatrix}_t$, and $\mathbf{Q} = \begin{bmatrix}q_\alpha& 0 \\ 0&q_\beta\end{bmatrix}$, we get Equation \@ref(eq:dlm-dlm2). If we define $y_t = survival_t$ and $\mathbf{F}_t = \begin{bmatrix}1 \\ f_t\end{bmatrix}$, we can write out the full DLM as a state-space model with the following form:
\begin{equation}
(\#eq:dlm-dlmSW3)
y_t = \mathbf{F}_t^{\top}\boldsymbol{\theta}_t + v_t \text{ with } v_t\sim\N(0,r)\\
\boldsymbol{\theta}_t = \mathbf{G}_t\boldsymbol{\theta}_{t-1} + \mathbf{w}_t \text{ with } \mathbf{w}_t \sim \MVN(\mathbf{0},\mathbf{Q})\\
\boldsymbol{\theta}_0 = \pipi_0.
\end{equation}
Equation \@ref(eq:dlm-dlmSW3) is equivalent to our standard MARSS model:
\begin{equation}
(\#eq:dlm-MARSSdlm)
\mathbf{y}_t = \mathbf{Z}_t\mathbf{x}_t + \mathbf{a} + \mathbf{v}_t \text{ with } \mathbf{v}_t \sim \MVN(0,\mathbf{R}_t)\\
\mathbf{x}_t = \mathbf{B}_t\mathbf{x}_{t-1} + \mathbf{u} + \mathbf{w}_t \text{ with } \mathbf{w}_t \sim \MVN(0,\mathbf{Q}_t)\\
\mathbf{x}_0 = \pipi
\end{equation}
where $\mathbf{x}_t = \boldsymbol{\theta}_t$, $\mathbf{B}_t = \mathbf{G}_t$, $\mathbf{y}_t = y_t$ (i.e., $\mathbf{y}_t$ is 1 $\times$ 1), $\mathbf{Z}_t = \mathbf{F}_t^{\top}$, $\mathbf{a} = \mathbf{u} = \mathbf{0}$, and $\mathbf{R}_t = r$ (i.e., $\mathbf{R}_t$ is 1 $\times$ 1).
## Fitting with `MARSS()` {#sec-dlm-fitting-a-univariate-dlm-with-marss}
Now let's go ahead and analyze the DLM specified in Equations \@ref(eq:dlm-dlmSW1)--\@ref(eq:dlm-dlmSW3). We begin by loading the data set (which is in the **MARSS** package). The data set has 3 columns for 1) the year the salmon smolts migrated to the ocean (``year``), 2) logit-transformed survival\footnote{Survival in the original context was defined as the proportion of juveniles that survive to adulthood. Thus, we use the logit function, defined as $\text{logit}(p)=\log_e(p/[1-p])$, to map survival from the open interval (0,1) onto the interval $(-\infty,\infty)$, which allows us to meet our assumption of normally distributed observation errors.} (``logit.s``), and 3) the coastal upwelling index for April (``CUI.apr``). There are 42 years of data (1964--2005).
```{r read.in.data, eval=TRUE}
## load the data
data(SalmonSurvCUI, package = "MARSS")
## get time indices
years <- SalmonSurvCUI[, 1]
## number of years of data
TT <- length(years)
## get response variable: logit(survival)
dat <- matrix(SalmonSurvCUI[,2], nrow = 1)
```
As we have seen in other case studies, standardizing our covariate(s) to have zero mean and unit variance can be helpful in model fitting and interpretation. In this case, it's a good idea because the variance of ``CUI.apr`` is orders of magnitude greater than that of ``logit.s``.
```{r z.score, eval=TRUE}
## get predictor variable
CUI <- SalmonSurvCUI[,3]
## z-score the CUI
CUI_z <- matrix((CUI - mean(CUI)) / sqrt(var(CUI)), nrow = 1)
## number of regr params (slope + intercept)
m <- dim(CUI_z)[1] + 1
```
Plots of logit-transformed survival and the $z$-scored April upwelling index are shown in Figure \@ref(fig:dlm-plotdata).
(ref:plotdata) Time series of logit-transformed marine survival estimates for Snake River spring/summer Chinook salmon (top) and *z*-scores of the coastal upwelling index at 45N 125W (bottom). The *x*-axis indicates the year that the salmon smolts entered the ocean.
```{r dlm-plotdata, eval=TRUE, echo=FALSE, fig=TRUE, fig.height=4, fig.width=6, fig.cap='(ref:plotdata)'}
par(mfrow = c(m, 1), mar = c(4, 4, 0.1, 0), oma = c(0, 0, 2, 0.5))
plot(years, dat, xlab = "", ylab = "Logit(s)",
bty = "n", xaxt = "n", pch = 16, col = "darkgreen", type = "b")
plot(years, CUI_z, xlab = "", ylab = "CUI",
bty = "n", xaxt = "n", pch = 16, col = "blue", type = "b")
axis(1, at = seq(1965, 2005, 5))
mtext("Year of ocean entry", 1, line = 3)
```
Next, we need to set up the appropriate matrices and vectors for MARSS. Let's begin with those for the process equation because they are straightforward.
```{r univ.DLM.proc, eval=TRUE}
## for process eqn
B <- diag(m) ## 2x2; Identity
U <- matrix(0, nrow = m, ncol = 1) ## 2x1; both elements = 0
Q <- matrix(list(0), m, m) ## 2x2; all 0 for now
diag(Q) <- c("q.alpha", "q.beta") ## 2x2; diag = (q1,q2)
```
Defining the correct form for the observation model is a little trickier, however, because of how we model the effect(s) of predictor variables. In a DLM, we need to use $\mathbf{Z}_t$ (instead of $\mathbf{d}_t$) as the matrix of predictor variables that affect $\mathbf{y}_t$, and we use $\mathbf{x}_t$ (instead of $\mathbf{D}_t$) as the regression parameters. Therefore, we need to set $\mathbf{Z}_t$ equal to an $n\times m\times T$ array, where $n$ is the number of response variables (= 1; $y_t$ is univariate), $m$ is the number of regression parameters (= intercept + slope = 2), and $T$ is the length of the time series (= 42).
```{r univ.DLM.obs, eval=TRUE}
## for observation eqn
Z <- array(NA, c(1, m, TT)) ## NxMxT; empty for now
Z[1,1,] <- rep(1, TT) ## Nx1; 1's for intercept
Z[1,2,] <- CUI_z ## Nx1; predictor variable
A <- matrix(0) ## 1x1; scalar = 0
R <- matrix("r") ## 1x1; scalar = r
```
Lastly, we need to define our lists of initial starting values and model matrices/vectors.
```{r univ.DLM.list, eval=TRUE}
## only need starting values for regr parameters
inits_list <- list(x0 = matrix(c(0, 0), nrow = m))
## list of model matrices & vectors
mod_list <- list(B = B, U = U, Q = Q, Z = Z, A = A, R = R)
```
And now we can fit our DLM with MARSS.
```{r univ.DLM.fit, eval=TRUE}
## fit univariate DLM
dlm_1 <- MARSS(dat, inits = inits_list, model = mod_list)
```
Notice that the MARSS output does not list any estimates of the regression parameters themselves. Why not? Remember that in a DLM the matrix of states ($\mathbf{x}$) contains the estimates of the regression parameters ($\boldsymbol{\theta}$). Therefore, we need to look in `dlm_1$states` for the MLEs of the regression parameters, and in `dlm_1$states.se` for their standard errors.
Time series of the estimated intercept and slope are shown in Figure \@ref(fig:dlm-plotdlm_1). It appears as though the intercept is much more dynamic than the slope, as indicated by a much larger estimate of process variance for the former (`Q.q.alpha`). In fact, although the effect of April upwelling appears to be increasing over time, it doesn't really become important as a predictor variable until about 1990 when the approximate 95\% confidence interval for the slope no longer overlaps zero.
<!-- % plot regression parameters -->
(ref:plotdlm_1) Time series of estimated mean states (thick lines) for the intercept (top) and slope (bottom) parameters from the DLM specified by Equations \@ref(eq:dlm-dlmSW1)--\@ref(eq:dlm-dlmSW3). Thin lines denote the mean $\pm$ 2 standard deviations.
```{r dlm-plotdlm_1, eval=TRUE, echo=FALSE, fig=TRUE, fig.height=4, fig.width=6, fig.cap='(ref:plotdlm_1)'}
ylabs <- c(expression(alpha[t]), expression(beta[t]))
colr <- c("darkgreen","blue")
par(mfrow = c(m, 1), mar = c(4, 4, 0.1, 0), oma = c(0, 0, 2, 0.5))
for(i in 1:m) {
mn <- dlm_1$states[i,]
se <- dlm_1$states.se[i,]
plot(years, mn, xlab = "", ylab = ylabs[i], bty = "n", xaxt = "n", type = "n",
ylim = c(min(mn-2*se), max(mn+2*se)))
lines(years, rep(0,TT), lty = "dashed")
lines(years, mn, col = colr[i], lwd = 3)
lines(years, mn+2*se, col = colr[i])
lines(years, mn-2*se, col = colr[i])
}
axis(1, at = seq(1965, 2005, 5))
mtext("Year of ocean entry", 1, line = 3)
```
## Forecasting {#sec-dlm-forecasting-with-a-univariate-dlm}
Forecasting from a DLM involves two steps:
1. Get an estimate of the regression parameters at time $t$ from data up to time $t-1$. These are also called the one-step ahead forecast (or prediction) of the regression parameters.
2. Make a prediction of $y$ at time $t$ based on the predictor variables at time $t$ and the estimate of the regression parameters at time $t$ (step 1). This is also called the one-step ahead forecast (or prediction) of the observation.
### Estimate of the regression parameters
For step 1, we want to compute the distribution of the regression parameters at time $t$ conditioned on the data up to time $t-1$, also known as the one-step ahead forecasts of the regression parameters. Let's denote $\boldsymbol{\theta}_{t-1}$ conditioned on $y_{1:t-1}$ as $\boldsymbol{\theta}_{t-1|t-1}$ and denote $\boldsymbol{\theta}_{t}$ conditioned on $y_{1:t-1}$ as $\boldsymbol{\theta}_{t|t-1}$. We will start by defining the distribution of $\boldsymbol{\theta}_{t|t}$ as follows
\begin{equation}
(\#eq:dlm-distthetatt)
\boldsymbol{\theta}_{t|t} \sim \text{MVN}(\boldsymbol{\pi}_t,\boldsymbol{\Lambda}_t)
\end{equation}
where $\boldsymbol{\pi}_t = \text{E}(\boldsymbol{\theta}_{t|t})$ and $\boldsymbol{\Lambda}_t = \text{Var}(\boldsymbol{\theta}_{t|t})$.
Now we can compute the distribution of $\boldsymbol{\theta}_{t}$ conditioned on $y_{1:t-1}$ using the process equation for $\boldsymbol{\theta}$:
\begin{equation}
\boldsymbol{\theta}_{t} = \mathbf{G}_t \boldsymbol{\theta}_{t-1} + \mathbf{w}_t ~ \text{with} ~ \mathbf{w}_t \sim \text{MVN}(\mathbf{0}, \mathbf{Q}) \\
\end{equation}
The expected value of $\boldsymbol{\theta}_{t|t-1}$ is thus
\begin{equation}
(\#eq:dlm-distthetatt-mean)
\text{E}(\boldsymbol{\theta}_{t|t-1}) = \mathbf{G}_t \text{E}(\boldsymbol{\theta}_{t-1|t-1}) = \mathbf{G}_t \boldsymbol{\pi}_{t-1}
\end{equation}
The variance of $\boldsymbol{\theta}_{t|t-1}$ is
\begin{equation}
(\#eq:dlm-distthetatt-var)
\text{Var}(\boldsymbol{\theta}_{t|t-1}) = \mathbf{G}_t \text{Var}(\boldsymbol{\theta}_{t-1|t-1}) \mathbf{G}_t^{\top} + \mathbf{Q} = \mathbf{G}_t \mathbf{\Lambda}_{t-1} \mathbf{G}_t^{\top} + \mathbf{Q}
\end{equation}
Thus the distribution of $\boldsymbol{\theta}_{t}$ conditioned on $y_{1:t-1}$ is
\begin{equation}
(\#eq:dlm-distthetatplus1-t)
\boldsymbol{\theta}_{t|t-1} \sim \text{MVN}(\mathbf{G}_t \boldsymbol{\pi}_{t-1}, \mathbf{G}_t \mathbf{\Lambda}_{t-1} \mathbf{G}_t^{\top} + \mathbf{Q})
\end{equation}
### Prediction of the response variable $y_t$
For step 2, we make the prediction of $y_{t}$ given the predictor variables at time $t$ and the estimate of the regression parameters at time $t$. This is called the one-step ahead prediction for the observation at time $t$. We will denote the prediction of $y$ as $\hat{y}$ and we want to compute its distribution (mean and variance). We do this using the equation for $y_t$ but substituting the expected value of $\boldsymbol{\theta}_{t|t-1}$ for $\boldsymbol{\theta}_t$.
\begin{equation}
(\#eq:dlm-predict-y-tplus1)
\hat{y}_{t|t-1} = \mathbf{F}^{\top}_{t} \text{E}(\boldsymbol{\theta}_{t|t-1}) + e_{t} ~ \text{with} ~ e_{t} \sim \text{N}(0, r) \\
\end{equation}
Our prediction of $y$ at $t$ is normally distributed with the following mean and variance. The expected value of $\hat{y}_{t|t-1}$ is
\begin{equation}
(\#eq:dlm-predict-y-mean)
\text{E}(\hat{y}_{t|t-1}) = \mathbf{F}^{\top}_{t} \text{E}(\boldsymbol{\theta}_{t|t-1}) = \mathbf{F}^{\top}_{t} (\mathbf{G}_t \boldsymbol{\pi}_{t-1}) \\
\end{equation}
and the variance of $\hat{y}_{t|t-1}$ is
\begin{align}
(\#eq:dlm-predict-y-var)
\text{Var}(\hat{y}_{t|t-1}) &= \mathbf{F}^{\top}_{t} \text{Var}(\boldsymbol{\theta}_{t|t-1}) \mathbf{F}_{t} + r \\
&= \mathbf{F}^{\top}_{t} (\mathbf{G}_t \mathbf{\Lambda}_{t-1} \mathbf{G}_t^{\top} + \mathbf{Q}) \mathbf{F}_{t} + r
\end{align}
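These two equations translate directly into a few lines of R. The function below is a sketch with names of our own choosing (it is not part of the **MARSS** package); it computes the forecast mean and variance from the filtered estimates at time $t-1$.

```{r dlm-forecast-fun}
## one-step ahead forecast of y_t from the filtered state at time t-1
## pi_tm1 = E(theta_{t-1|t-1}); Lam_tm1 = Var(theta_{t-1|t-1})
dlm_forecast <- function(Ft, Gt, Q, r, pi_tm1, Lam_tm1) {
  theta_mean <- Gt %*% pi_tm1                 ## E(theta_{t|t-1})
  theta_var  <- Gt %*% Lam_tm1 %*% t(Gt) + Q  ## Var(theta_{t|t-1})
  list(mean = as.numeric(t(Ft) %*% theta_mean),
       var  = as.numeric(t(Ft) %*% theta_var %*% Ft + r))
}
## illustrative values for a 2-parameter DLM with G = I
out <- dlm_forecast(Ft = c(1, 0.3), Gt = diag(2), Q = diag(c(0.1, 0.01)),
                    r = 0.5, pi_tm1 = c(-3, 0.2), Lam_tm1 = diag(c(0.2, 0.05)))
```

Note how the forecast variance has three sources: the uncertainty in the filtered parameters ($\boldsymbol{\Lambda}_{t-1}$), the process variance ($\mathbf{Q}$), and the observation variance ($r$).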
### Computing the prediction
The expectation and variance of $\boldsymbol{\theta}_t$ conditioned on $y_{1:t}$ and $y_{1:t-1}$ are standard output from the Kalman filter. Thus, to produce the predictions, all we need to do is run our DLM state-space model through a Kalman filter to get $\text{E}(\boldsymbol{\theta}_{t|t-1})$ and $\text{Var}(\boldsymbol{\theta}_{t|t-1})$, and then use Equation \@ref(eq:dlm-predict-y-mean) to compute the mean prediction and Equation \@ref(eq:dlm-predict-y-var) to compute its variance.
The Kalman filter will need $\mathbf{F}_t$, $\mathbf{G}_t$ and estimates of $\mathbf{Q}$ and $r$. The latter are calculated by fitting the DLM to the data $y_{1:t}$, using for example the `MARSS()` function.
Let's see an example with the salmon survival DLM. We will use the Kalman filter function in the **MARSS** package and the DLM fit from `MARSS()`.
### Forecasting salmon survival
@ScheuerellWilliams2005 were interested in how well upwelling could be used to actually _forecast_ expected survival of salmon, so let's look at how well our model does in that context. To do so, we need the predictive distribution for the survival at time $t$ given the upwelling at time $t$ and the predicted regression parameters at $t$.
In the salmon survival DLM, the $\mathbf{G}_t$ matrix is the identity matrix, thus the mean and variance of the one-step ahead predictive distribution for the observation at time $t$ reduces to (from Equations \@ref(eq:dlm-predict-y-mean) and \@ref(eq:dlm-predict-y-var))
\begin{equation}
(\#eq:dlm-dlmFore3)
\text{E}(\hat{y}_{t|t-1}) = \mathbf{F}^{\top}_{t} \text{E}(\boldsymbol{\theta}_{t|t-1}) \\
\text{Var}(\hat{y}_{t|t-1}) = \mathbf{F}^{\top}_{t} \text{Var}(\boldsymbol{\theta}_{t|t-1}) \mathbf{F}_{t} + \hat{r}
\end{equation}
where
$$
\mathbf{F}_{t}=\begin{bmatrix}1 \\ f_{t}\end{bmatrix}
$$
and $f_{t}$ is the upwelling index at time $t$. $\hat{r}$ is the estimated observation variance from our model fit.
### Forecasting using MARSS {#sec-dlm-forecasting-a-univariate-dlm-with-marss}
Working from Equation \@ref(eq:dlm-dlmFore3), we can compute the expected value of the forecast at time $t$ and its variance using the Kalman filter. For the expectation, we need $\mathbf{F}_{t}^\top\text{E}(\boldsymbol{\theta}_{t|t-1})$.
$\mathbf{F}_t^\top$ is called $\mathbf{Z}_t$ in MARSS notation. The one-step ahead forecasts of the regression parameters at time $t$, the $\text{E}(\boldsymbol{\theta}_{t|t-1})$, are calculated as part of the Kalman filter algorithm---they are termed $\tilde{x}_t^{t-1}$ in MARSS notation and stored as ``xtt1`` in the list produced by the ``MARSSkfss()`` Kalman filter function.
Using the `Z` defined in \@ref(sec-dlm-salmon-example), we compute the mean forecast as follows:
```{r univ.DLM.fore_mean, eval=TRUE}
## get list of Kalman filter output
kf_out <- MARSSkfss(dlm_1)
## forecasts of regr parameters; 2xT matrix
eta <- kf_out$xtt1
## ts of E(forecasts)
fore_mean <- vector()
for(t in 1:TT) {
fore_mean[t] <- Z[,,t] %*% eta[, t, drop = FALSE]
}
```
For the variance of the forecasts, we need
$\mathbf{F}^{\top}_{t} \text{Var}(\boldsymbol{\theta}_{t|t-1}) \mathbf{F}_{t} + \hat{r}$. As with the mean, $\mathbf{F}^\top_t \equiv \mathbf{Z}_t$. The variances of the one-step ahead forecasts of the regression parameters at time $t$, $\text{Var}(\boldsymbol{\theta}_{t|t-1})$, are also calculated as part of the Kalman filter algorithm---they are stored as ```Vtt1``` in the list produced by the ```MARSSkfss()``` function. Lastly, the observation variance $\hat{r}$ was estimated when we fit the DLM to the data using `MARSS()` and can be extracted from the `dlm_1` fit.
Putting this together, we can compute the forecast variance:
```{r univ.DLM.fore_var, eval=TRUE}
## variances of regr parameters; 2x2xT array
Phi <- kf_out$Vtt1
## obs variance; 1x1 matrix
R_est <- coef(dlm_1, type="matrix")$R
## ts of Var(forecasts)
fore_var <- vector()
for(t in 1:TT) {
tZ <- matrix(Z[, , t], m, 1) ## transpose of Z
fore_var[t] <- Z[, , t] %*% Phi[, , t] %*% tZ + R_est
}
```
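As a cross-check on the calculations above, recent versions of the **MARSS** package can return the one-step ahead forecasts and their prediction intervals directly via `fitted()`. This is a sketch assuming your installed version supports the `type = "ytt1"` option; if it does not, the manual Kalman filter computation above is the reliable route.
```{r univ.DLM.fore_check, eval=FALSE}
## one-step ahead forecasts of y; conditions on data up through t-1
fore_check <- fitted(dlm_1, type = "ytt1", interval = "prediction")
## the .fitted column should match fore_mean computed above
head(fore_check)
```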
Plots of the model mean forecasts with their estimated uncertainty are shown in Figure \@ref(fig:dlm-plotdlmForeLogit). Nearly all of the observed values fell within the approximate prediction interval. Notice that we have a forecasted value for the first year of the time series (1964), which may seem at odds with our notion of forecasting at time $t$ based on data available only through time $t-1$. In this case, however, MARSS is actually estimating the states at $t=0$ ($\boldsymbol{\theta}_0$), which allows us to compute a forecast for the first time point.
<!-- % forecast plot - logit space -->
(ref:plotdlmForeLogit) Time series of logit-transformed survival data (blue dots) and model mean forecasts (thick line). Thin lines denote the approximate 95\% prediction intervals.
```{r dlm-plotdlmForeLogit, eval=TRUE, echo=FALSE, fig=TRUE, fig.height=3, fig.width=6, fig.cap='(ref:plotdlmForeLogit)'}
par(mar = c(4, 4, 0.1, 0), oma = c(0, 0, 2, 0.5))
ylims <- c(min(fore_mean - 2*sqrt(fore_var)), max(fore_mean + 2*sqrt(fore_var)))
plot(years, t(dat), type = "p", pch = 16, ylim = ylims,
col = "blue", xlab = "", ylab = "Logit(s)", xaxt = "n")
lines(years, fore_mean, type = "l", xaxt = "n", ylab = "", lwd = 3)
lines(years, fore_mean+2*sqrt(fore_var))
lines(years, fore_mean-2*sqrt(fore_var))
axis(1, at = seq(1965, 2005, 5))
mtext("Year of ocean entry", 1, line = 3)
```
Although our model forecasts look reasonable in logit-space, it is worthwhile to examine how they look when the survival data and forecasts are back-transformed onto the interval [0,1] (Figure \@ref(fig:dlm-plotdlmForeRaw)). In that case, the accuracy does not seem to be affected, but the precision appears much worse, especially during the early and late portions of the time series when survival is changing rapidly.
<!-- % forecast plot - normal space -->
(ref:plotdlmForeRaw) Time series of survival data (blue dots) and model mean forecasts (thick line). Thin lines denote the approximate 95\% prediction intervals.
```{r dlm-plotdlmForeRaw, eval=TRUE, echo=FALSE, fig=TRUE, fig.height=3, fig.width=6, fig.cap='(ref:plotdlmForeRaw)'}
invLogit <- function(x) {
1 / ( 1 + exp(-x))
}
ff <- invLogit(fore_mean)
fup <- invLogit(fore_mean+2*sqrt(fore_var))
flo <- invLogit(fore_mean-2*sqrt(fore_var))
par(mar = c(4, 4, 0.1, 0), oma = c(0, 0, 2, 0.5))
ylims <- c(min(flo), max(fup))
plot(years, invLogit(t(dat)), type = "p", pch = 16, ylim = ylims,
col = "blue", xlab = "", ylab = "Survival", xaxt = "n")
lines(years, ff, type = "l", xaxt = "n", ylab = "", lwd = 3)
lines(years, fup)
lines(years, flo)
axis(1, at = seq(1965, 2005, 5))
mtext("Year of ocean entry", 1, line = 3)
```
Notice that we passed the DLM fit to all the data to `MARSSkfss()`. This means that the Kalman filter used estimates of $\mathbf{Q}$ and $r$ based on all of the data when computing `xtt1` and `Vtt1`. Thus our predictions at time $t$ are not entirely based on data up to time $t-1$, since the estimates of $\mathbf{Q}$ and $r$ used all the data from 1964 to 2005.
## Forecast diagnostics {#sec-dlm-dlm-forecast-diagnostics}
\begin{samepage}
As with other time series models, evaluation of a DLM should include diagnostics. In a forecasting context, we are often interested in the forecast errors, which are simply the observed data minus the forecasts $e_t = y_t - \text{E}(y_t|y_{1:t-1})$. In particular, the following assumptions should hold true for $e_t$:
1. $e_t \sim \N(0,\sigma^2)$;
2. $\cov(e_t,e_{t-k}) = 0$.
\end{samepage}
In the literature on state-space models, the set of $e_t$ are commonly referred to as "innovations". ```MARSS()``` calculates the innovations as part of the Kalman filter algorithm---they are stored as ```Innov``` in the list produced by the ```MARSSkfss()``` function.
```{r dlmInnov, eval=TRUE, echo=TRUE}
## forecast errors
innov <- kf_out$Innov
```
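Equivalently, the same innovations can be obtained with the `residuals()` method. This sketch assumes a **MARSS** version in which `type = "tt1"` returns the one-step ahead (innovations) residuals.
```{r dlmInnovCheck, eval=FALSE}
## innovations via the residuals() method
innov_check <- residuals(dlm_1, type = "tt1")
## the model residuals should match innov computed above
head(innov_check)
```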
Let's see if our innovations meet the model assumptions. Beginning with (1), we can use a Q-Q plot to see whether the innovations are normally distributed with a mean of zero. We'll use the ```qqnorm()``` function to plot the quantiles of the innovations on the $y$-axis versus the theoretical quantiles from a Normal distribution on the $x$-axis. If the two distributions are similar, the points should fall on the line defined by $y = x$.
```{r dlmQQplot, eval=FALSE, echo=TRUE}
## Q-Q plot of innovations
qqnorm(t(innov), main = "", pch = 16, col = "blue")
## add y=x line for easier interpretation
qqline(t(innov))
```
<!-- % diagnostics plot: QQ -->
(ref:plotdlmQQ) Q-Q plot of the forecast errors (innovations) for the DLM specified in Equations \@ref(eq:dlm-dlmSW1)--\@ref(eq:dlm-dlmSW3).
```{r dlm-plotdlmQQ, eval=TRUE, echo=FALSE, fig=TRUE, fig.height=2, fig.width=4, fig.cap='(ref:plotdlmQQ)'}
## use layout to get nicer plots
layout(matrix(c(0,1,1,1,0),1,5,byrow=TRUE))
## set up L plotting space
par(mar=c(4,4,1,0), oma=c(0,0,0,0.5))
## Q-Q plot of innovations
qqnorm(t(innov), main="", pch=16, col="blue")
qqline(t(innov))
```
The Q-Q plot (Figure \@ref(fig:dlm-plotdlmQQ)) indicates that the innovations appear to be more-or-less normally distributed (i.e., most points fall on the line). Furthermore, it looks like the mean of the innovations is about 0, but we should use a more reliable test than simple visual inspection. We can formally test whether the mean of the innovations is significantly different from 0 by using a one-sample $t$-test based on a null hypothesis of $\E(e_t)=0$. To do so, we will use the function ```t.test()``` and base our inference on a significance value of $\alpha = 0.05$.
```{r dlmInnovTtest, eval=TRUE, echo=TRUE}
## p-value for t-test of H0: E(innov) = 0
t.test(t(innov), mu = 0)$p.value
```
The $p$-value is $\gg$ 0.05, so we cannot reject the null hypothesis that $\E(e_t)=0$.
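If desired, the normality part of assumption (1) can also be checked formally with a Shapiro-Wilk test from base R. This complements the visual Q-Q inspection, though with a short time series neither approach has much power.
```{r dlmShapiro, eval=FALSE}
## formal test of H0: innovations are normally distributed
shapiro.test(as.vector(innov))
```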
Moving on to assumption (2), we can use the sample autocorrelation function (ACF) to examine whether the innovations covary with a time-lagged version of themselves. Using the ```acf()``` function, we can compute and plot the correlations of $e_t$ and $e_{t-k}$ for various values of $k$. Assumption (2) will be met if none of the correlation coefficients exceed the 95\% confidence intervals defined by $\pm \, z_{0.975} / \sqrt{n}$.
```{r dlmACFplot, eval=FALSE, echo=TRUE}
## plot ACF of innovations
acf(t(innov), lag.max = 10)
```
<!-- % diagnostics plot: ACF -->
(ref:plotdlmACF) Autocorrelation plot of the forecast errors (innovations) for the DLM specified in Equations \@ref(eq:dlm-dlmSW1)--\@ref(eq:dlm-dlmSW3). Horizontal blue lines define the upper and lower 95\% confidence intervals.
```{r dlm-plotdlmACF, eval=TRUE, echo=FALSE, fig=TRUE, fig.height=2, fig.width=4, fig.cap='(ref:plotdlmACF)'}
## use layout to get nicer plots
layout(matrix(c(0, 1, 1, 1, 0), 1, 5, byrow = TRUE))
## set up plotting space
par(mar = c(4, 4, 1, 0), oma = c(0, 0, 0, 0.5))
## ACF of innovations
acf(t(innov), lwd = 2, lag.max = 10)
```
The ACF plot (Figure \@ref(fig:dlm-plotdlmACF)) shows no significant autocorrelation in the innovations at lags 1--10, so it looks like both of our model assumptions have indeed been met.
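As with assumption (1), we can supplement the visual check with a formal one: a Ljung-Box portmanteau test from base R jointly tests for autocorrelation in the innovations across a set of lags.
```{r dlmLjungBox, eval=FALSE}
## formal test of H0: no autocorrelation at lags 1-10
Box.test(as.vector(innov), lag = 10, type = "Ljung-Box")
```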
\newpage
## Homework discussion and data {#sec-dlm-homework}
For the homework this week we will use a DLM to examine some of the time-varying properties of the spawner-recruit relationship for Pacific salmon. Much work has been done on this topic, particularly by Randall Peterman and his students and post-docs at Simon Fraser University. To do so, researchers commonly use a Ricker model because of its relatively simple form, such that the number of recruits (offspring) born in year $t$ ($R_t$) from the number of spawners (parents) ($S_t$) is
\begin{equation}
(\#eq:dlm-baseRicker)
R_t = a S_t e^{-b S_t + v_t}.
\end{equation}
\noindent The parameter $a$ determines the maximum reproductive rate in the absence of any density-dependent effects (the slope of the curve at the origin), $b$ is the strength of density dependence, and $v_t \sim N(0,\sigma)$. In practice, the model is typically log-transformed so as to make it linear with respect to the predictor variable $S_t$, such that
\begin{align}
(\#eq:dlm-lnRicker)
\text{log}(R_t) &= \text{log}(a) + \text{log}(S_t) -b S_t + v_t \\
\text{log}(R_t) - \text{log}(S_t) &= \text{log}(a) -b S_t + v_t \\
\text{log}(R_t/S_t) &= \text{log}(a) - b S_t + v_t.
\end{align}
\noindent Substituting $y_t = \text{log}(R_t/S_t)$, $x_t = S_t$, and $\alpha = \text{log}(a)$ yields a simple linear regression model with intercept $\alpha$ and slope $-b$.
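To illustrate why this substitution is useful, here is a small simulation sketch (with made-up parameter values, not the assignment data): we generate recruits from Equation \@ref(eq:dlm-baseRicker) and recover $\text{log}(a)$ and $-b$ with ordinary linear regression.
```{r dlm-ricker-sim, eval=FALSE}
set.seed(123)
## hypothetical Ricker parameters
a <- 6; b <- 1e-4; sigma <- 0.4
TT <- 100
S <- runif(TT, 100, 10000)                      ## spawners
R <- a * S * exp(-b * S + rnorm(TT, 0, sigma))  ## recruits
## intercept estimates log(a); slope estimates -b
coef(lm(log(R / S) ~ S))
```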
Unfortunately, however, residuals from this simple model typically show high autocorrelation due to common environmental conditions that affect overlapping generations. Therefore, to correct for this and allow for an index of stock productivity that controls for any density-dependent effects, the model may be re-written as
\begin{align}
(\#eq:dlm-lnTVRicker)
\text{log}(R_t/S_t) &= \alpha_t - b S_t + v_t, \\
\alpha_t &= \alpha_{t-1} + w_t,
\end{align}
\noindent and $w_t \sim N(0,q)$. By treating the brood-year specific productivity as a random walk, we allow it to vary, but in an autocorrelated manner so that consecutive years are not independent from one another.
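To see how a random walk induces this autocorrelation, here is a minimal simulation sketch (with a hypothetical value of $q$): consecutive values of $\alpha_t$ stay close to one another rather than varying independently.
```{r dlm-alpha-rw, eval=FALSE}
set.seed(42)
TT <- 50
q <- 0.01
## alpha_t = alpha_{t-1} + w_t, with w_t ~ N(0, q)
alpha <- cumsum(c(1.5, rnorm(TT - 1, 0, sqrt(q))))
plot.ts(alpha, ylab = expression(alpha[t]))
```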
More recently, interest has grown in using covariates ($e.g.$, sea-surface temperature) to explain the interannual variability in productivity. In that case, we can write the model as
\begin{equation}
(\#eq:dlm-lnCovRicker)
\text{log}(R_t/S_t) = \alpha + \delta_t X_t - b S_t + v_t.
\end{equation}
\noindent In this case we are estimating some base-level productivity ($\alpha$) plus the time-varying effect of some covariate $X_t$ ($\delta_t$).
### Spawner-recruit data {#sec-dlm-spawner-recruit-data}
The data come from a large public database begun by Ransom Myers many years ago. If you are interested, you can find lots of time series of spawning-stock, recruitment, and harvest for a variety of fishes around the globe. Here is the website:
\newline
https://www.ramlegacy.org/
For this exercise, we will use spawner-recruit data for sockeye salmon (_Oncorhynchus nerka_) from the Kvichak River in SW Alaska that span the years 1952-1989. In addition, we'll examine the potential effects of the Pacific Decadal Oscillation (PDO) during the salmon's first year in the ocean, which is widely believed to be a "bottleneck" to survival.
```{r echo=FALSE}
knitr::include_graphics("images/BB_sockeye_rivers_inset.png")
# ![](images/BB_sockeye_rivers_inset.png)
```
These data are in the **atsalibrary** package on GitHub. If needed, install using the **devtools** package.
```{r dlm-load-atsa, eval=FALSE}
library(devtools)
## Windows users will likely need to set this
## Sys.setenv("R_REMOTES_NO_ERRORS_FROM_WARNINGS" = "true")
devtools::install_github("nwfsc-timeseries/atsalibrary")
```
Load the data.
```{r dlm-read-data}
data(KvichakSockeye, package="atsalibrary")
SR_data <- KvichakSockeye
```
The data are a dataframe with columns for brood year (`brood_year`), number of spawners (`spawners`), number of recruits (`recruits`) and PDO at year $t-2$ in summer (`pdo_summer_t2`) and in winter (`pdo_winter_t2`).
```{r dlm-data-head}
## head of data file
head(SR_data)
```
\clearpage
## Problems {#sec-dlm-problems}
\noindent Use the information and data in the previous section to answer the following questions. Note that if any model is not converging, then you will need to increase the `maxit` parameter in the `control` argument/list that gets passed to `MARSS()`. For example, you might try `control=list(maxit=2000)`.
1. Begin by fitting a reduced form of Equation \@ref(eq:dlm-lnTVRicker) that includes only a time-varying level ($\alpha_t$) and observation error ($v_t$). That is,
\begin{align*}
\text{log}(R_t) &= \alpha_t + \text{log}(S_t) + v_t \\
\text{log}(R_t/S_t) &= \alpha_t + v_t
\end{align*}
This model assumes no density-dependent survival in that the number of recruits is an ascending function of spawners. Plot the ts of $\alpha_t$ and note the AICc for this model. Also plot appropriate model diagnostics. `residuals()` will return the innovations residuals for your fits.
2. Fit the full model specified by Equation \@ref(eq:dlm-lnTVRicker). For this model, obtain the time series of $\alpha_t$, which is an estimate of the stock productivity in the absence of density-dependent effects. How do these estimates of productivity compare to those from the previous question? Plot the ts of $\alpha_t$ and note the AICc for this model. Also plot appropriate model diagnostics. ($Hint$: If you don't want a parameter to vary with time, what does that say about its process variance?)
3. Fit the model specified by Equation \@ref(eq:dlm-lnCovRicker) with the summer PDO index as the covariate (`pdo_summer_t2`). What is the mean level of productivity? Plot the ts of $\delta_t$ and note the AICc for this model. Also plot appropriate model diagnostics.
4. Fit the model specified by Equation \@ref(eq:dlm-lnCovRicker) with the winter PDO index as the covariate (`pdo_winter_t2`). What is the mean level of productivity? Plot the ts of $\delta_t$ and note the AICc for this model. Also plot appropriate model diagnostics.
5. Based on AICc, which of the models above is the most parsimonious? Is it well behaved ($i.e.$, are the model assumptions met)? Plot the model forecasts for the best model. Is this a good forecast model?