Dynamic Predictions

Theory

Based on the general framework of joint models presented earlier, we are interested in deriving cumulative risk probabilities for a new subject \(j\) who has survived up to time point \(t\) and has provided longitudinal measurements \(\mathcal Y_{kj}(t) = \{ y_{kj}(t_{jl}); 0 \leq t_{jl} \leq t, l = 1, \ldots, n_j, k = 1, \ldots, K\}\), with \(K\) denoting the number of longitudinal outcomes. The probabilities of interest are \[\begin{array}{l} \pi_j(u \mid t) = \mbox{Pr}\{T_j^* \leq u \mid T_j^* > t, \mathcal Y_j(t), \mathcal D_n\}\\\\ = \displaystyle 1 - \int\int \frac{S(u \mid b_j, \theta)}{S(t \mid b_j, \theta)} \; p\{b_j \mid T_j^* > t, \mathcal Y_j(t), \theta\} \; p(\theta \mid \mathcal D_n) \; db_j d\theta, \end{array}\] where \(S(\cdot)\) denotes the survival function conditional on the random effects, and \(\mathcal Y_j(t) = \{\mathcal Y_{1j}(t), \ldots, \mathcal Y_{Kj}(t)\}\). Combining the three terms in the integrand, we can devise a Monte Carlo scheme to obtain estimates of these probabilities, namely,

  1. Sample a value \(\tilde \theta\) from the posterior of the parameters \([\theta \mid \mathcal D_n]\).

  2. Sample a value \(\tilde b_j\) from the posterior of the random effects \([b_j \mid T_j^* > t, \mathcal Y_j(t), \tilde \theta]\).

  3. Compute the ratio of survival probabilities \(S(u \mid \tilde b_j, \tilde \theta) \Big / S(t \mid \tilde b_j, \tilde \theta)\).

Replicating these steps \(L\) times, we can estimate the conditional cumulative risk probabilities by \[1 - \frac{1}{L} \sum_{l=1}^L \frac{S(u \mid \tilde b_j^{(l)}, \tilde \theta^{(l)})}{S(t \mid \tilde b_j^{(l)}, \tilde \theta^{(l)})},\] and their standard error by calculating the standard deviation across the Monte Carlo samples.
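The scheme above can be sketched in a few lines of base R. This is a toy illustration only: we assume, for concreteness, an exponential hazard \(h(t \mid b_j) = h_0 \exp(\alpha b_j)\), so that \(S(t \mid b_j, \theta) = \exp\{-h_0 \exp(\alpha b_j) \, t\}\), and we simulate the posterior draws that Steps 1 and 2 would deliver from a fitted joint model:

```r
set.seed(123)
L <- 500L                            # number of Monte Carlo replications
# Step 1 (simulated): draws from the posterior of the parameters theta
h0 <- rgamma(L, shape = 20, rate = 200)   # baseline hazard draws
alpha <- rnorm(L, mean = 0.5, sd = 0.05)  # association parameter draws
# Step 2 (simulated): draws from the posterior of the random effect b_j
b_j <- rnorm(L, mean = 0, sd = 0.4)
# conditional survival function under the assumed exponential hazard
S <- function (t, h0, alpha, b) exp(-h0 * exp(alpha * b) * t)
t0 <- 5; u <- 8
# Step 3: ratio of conditional survival probabilities, per replication
ratio <- S(u, h0, alpha, b_j) / S(t0, h0, alpha, b_j)
pi_u_t <- 1 - mean(ratio)            # estimated cumulative risk pi_j(u | t)
se_pi <- sd(1 - ratio)               # Monte Carlo standard deviation
```

In a real analysis the draws of \(\theta\) and \(b_j\) come from the fitted joint model; predict() carries out all three steps internally.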

Example

We will illustrate the calculation of dynamic predictions with package JMbayes2, using a trivariate joint model fitted to the PBC dataset for the longitudinal outcomes serBilir (continuous), prothrombin time (continuous), and ascites (dichotomous). We start by fitting the univariate mixed models. For the two continuous outcomes, we allow for nonlinear subject-specific time effects using natural cubic splines. For ascites, we postulate linear subject-specific profiles for the log odds. The code is:

fm1 <- lme(log(serBilir) ~ ns(year, 3) * sex, data = pbc2,
           random = ~ ns(year, 3) | id, control = lmeControl(opt = 'optim'))

fm2 <- lme(prothrombin ~ ns(year, 2) * sex, data = pbc2,
           random = ~ ns(year, 2) | id, control = lmeControl(opt = 'optim'))

fm3 <- mixed_model(ascites ~ year * sex, data = pbc2,
                   random = ~ year | id, family = binomial())

Next, we fit the Cox model for the time to either transplantation or death. The first line defines the composite event indicator, and the second fits the Cox model, in which we have also included the baseline covariates drug and age. The code is:

pbc2.id$event <- as.numeric(pbc2.id$status != "alive")
CoxFit <- coxph(Surv(years, event) ~ drug + age, data = pbc2.id)

The joint model is fitted with the following call to jm():

jointFit <- jm(CoxFit, list(fm1, fm2, fm3), time_var = "year")

We want to calculate predictions for the longitudinal and survival outcomes for Patients 25 and 93. As a first step, we extract the data of these patients and store them in the data.frame ND with the code:

t0 <- 5
ND <- pbc2[pbc2$id %in% c(25, 93), ]
ND <- ND[ND$year < t0, ]
ND$event <- 0
ND$years <- t0

We use only the first five years of follow-up (line three), and we further specify that the patients were event-free up to this time point (lines four and five).

We start with predictions for the longitudinal outcomes. These are produced by the predict() method for class jm objects, and follow the same lines as the procedure described above for cumulative risk probabilities. The only difference is in Step 3, where instead of calculating the cumulative risk we calculate the predicted values for the longitudinal outcomes. There are two options controlled by the type_pred argument, namely predictions at the scale of the response/outcome (default) or at the level of the linear predictor. The type argument controls whether the predictions are for the mean subject (i.e., including only the fixed effects) or subject-specific, including both the fixed and random effects. In the newdata argument we provide the available measurements of the two patients. These are used to sample their random effects in Step 2 presented above. This is done with a Metropolis-Hastings algorithm that runs for n_mcmc iterations; all iterations but the last one are discarded as burn-in. Finally, argument n_samples corresponds to the value of \(L\) defined above and specifies the number of Monte Carlo samples:

predLong1 <- predict(jointFit, newdata = ND, return_newdata = TRUE)

Argument return_newdata specifies that the predictions are returned as extra columns of the newdata data.frame. By default the 95% credible intervals are also included. Using the plot() method for objects returned by predict.jm(..., return_newdata = TRUE), we can display the predictions. With the following code we do that for the first longitudinal outcome:

plot(predLong1)
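For reference, the arguments discussed above can also be set explicitly. The option names below follow the description in the text (predictions at the response scale, subject-specific effects); the values of n_samples and n_mcmc are example choices, not recommendations, and defaults may differ across package versions:

```r
predLong1 <- predict(jointFit, newdata = ND,
                     type_pred = "response",     # or "link" for the linear predictor scale
                     type = "subject_specific",  # or "mean_subject" (fixed effects only)
                     n_samples = 200L,           # the value of L in the Monte Carlo scheme
                     n_mcmc = 55L,               # Metropolis-Hastings iterations per sample
                     return_newdata = TRUE)
```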

When we want to calculate predictions for other, future time points, we can accordingly specify the times argument. In the following example, we calculate predictions from time t0 to time 12:

predLong2 <- predict(jointFit, newdata = ND,
                     times = seq(t0, 12, length.out = 51),
                     return_newdata = TRUE)

We show these predictions for the second outcome and the second patient (i.e., Patient 93). This is achieved by suitably specifying the outcomes and subject arguments of the plot() method:

plot(predLong2, outcomes = 2, subject = 93)

We continue with the predictions for the event outcome. To let predict() know that we want the cumulative risk probabilities, we specify process = "event":

predSurv <- predict(jointFit, newdata = ND, process = "event",
                    times = seq(t0, 12, length.out = 51),
                    return_newdata = TRUE)

The predictions are included again as extra columns in the corresponding data.frame. To depict the predictions of both the longitudinal and survival outcomes combined, we provide both objects to the plot() method:

plot(predLong2, predSurv)

Again by default, the plot shows the predictions of the first subject (i.e., Patient 25) and the first longitudinal outcome (i.e., log(serBilir)). However, the plot() method has a series of arguments that allow users to customize the plot. We illustrate some of these capabilities with the following figure. First, we specify that we want to depict all three outcomes using outcomes = 1:3 (note: at most three outcomes can be displayed simultaneously). Next, we specify via the subject argument that we want to show the predictions of Patient 93. Note that for serum bilirubin we used the log transformation in the specification of the linear mixed model; hence, we receive predictions on the transformed scale. To show predictions on the original scale, we use the fun_long argument. Because we have three outcomes, this needs to be a list of three functions: the first, corresponding to serum bilirubin, is exp(), and for the other two it is identity() because we do not wish to transform the predictions. Analogously, the fun_event argument transforms the predictions for the event outcome; in the example below we use it to obtain survival probabilities. Using the arguments bg, col_points, col_line_long, col_line_event, fill_CI_long, and fill_CI_event we have changed the appearance of the plot to a dark theme. Finally, pos_ylab_long specifies the relative positions of the y-axis labels for the three longitudinal outcomes.

cols <- c('#F25C78', '#D973B5', '#F28322')
plot(predLong2, predSurv, outcomes = 1:3, subject = 93,
     fun_long = list(exp, identity, identity),
     fun_event = function (x) 1 - x,
     ylab_event = "Survival Probabilities",
     ylab_long = c("Serum Bilirubin", "Prothrombin", "Ascites"),
     bg = '#132743', col_points = cols, col_line_long = cols,
     col_line_event = '#F7F7FF', col_axis = "white", 
     fill_CI_long = c("#F25C7880", "#D973B580", "#F2832280"),
     fill_CI_event = "#F7F7FF80",
     pos_ylab_long = c(1.9, 1.9, 0.08))

Predictive accuracy

We evaluate the discriminative capability of the model using ROC methodology. We calculate the components of the ROC curve using information up to year five, and we are interested in events occurring within a three-year window. That is, we discriminate between patients who will experience the event in the interval (t0, t0 + Dt] (i.e., in our case \(T_j \in (5, 8]\)) and patients who will survive at least 8 years (i.e., \(T_j > 8\)). The calculations are performed with the following call to tvROC():

pbc2$event <- as.numeric(pbc2$status != "alive")
roc <- tvROC(jointFit, newdata = pbc2, Tstart = t0, Dt = 3)
roc
#> 
#>  Time-dependent Sensitivity and Specificity for the Joint Model jointFit
#> 
#> At time: 8
#> Using information up to time: 5 (202 subjects still at risk)
#> 
#>    cut-off         SN         SP        qSN         qSP  
#> 1     0.05 0.02127789 1.00000000 0.01640862 1.000000000  
#> 2     0.07 0.06383366 1.00000000 0.04972059 1.000000000  
#> 3     0.08 0.08511154 1.00000000 0.06662895 1.000000000  
#> 4     0.09 0.10638943 1.00000000 0.08370895 1.000000000  
#> 5     0.10 0.12766731 1.00000000 0.10096325 1.000000000  
#> 6     0.11 0.14894520 1.00000000 0.11839451 1.000000000  
#> 7     0.12 0.14894520 0.99354851 0.11385015 0.837099767  
#> 8     0.16 0.17881374 0.98970172 0.13604362 0.791974645  
#> 9     0.19 0.20009162 0.98970172 0.15402360 0.810886041  
#> 10    0.20 0.22136951 0.98970172 0.17219285 0.826645537  
#> 11    0.21 0.23729845 0.98807991 0.18483750 0.814780085  
#> 12    0.23 0.25857634 0.98807991 0.20336394 0.828010078  
#> 13    0.26 0.27985422 0.98807991 0.22208852 0.839476073  
#> 14    0.28 0.32240999 0.98162841 0.25612402 0.793829958  
#> 15    0.31 0.32815646 0.97691926 0.25840222 0.754615269  
#> 16    0.34 0.34943435 0.96401627 0.26992077 0.669603927  
#> 17    0.36 0.37071223 0.96401627 0.28985403 0.683968974  
#> 18    0.43 0.37594851 0.95270093 0.28780565 0.617823516  
#> 19    0.45 0.39722640 0.95270093 0.30818030 0.632522612  
#> 20    0.47 0.41850428 0.95270093 0.32878780 0.646132885  
#> 21    0.48 0.42885604 0.94938811 0.33694781 0.634871342  
#> 22    0.49 0.45013392 0.94938811 0.35795984 0.647461985  
#> 23    0.50 0.47141181 0.94938811 0.37921619 0.659213252  
#> 24    0.51 0.47197345 0.93665541 0.37258022 0.600137276  
#> 25    0.52 0.49325134 0.93665541 0.39430042 0.612254328  
#> 26    0.53 0.50248356 0.93300314 0.40179571 0.601959841  
#> 27    0.55 0.50248356 0.92655165 0.39821364 0.576098078  
#> 28    0.56 0.52376144 0.92655165 0.42048079 0.587873131  
#> 29    0.58 0.54503933 0.92655165 0.44301784 0.599011695  
#> 30    0.59 0.54503933 0.92010015 0.43962161 0.575269233  
#> 31    0.60 0.55429802 0.91645591 0.44765767 0.567284460  
#> 32    0.63 0.59685379 0.91645591 0.49418922 0.588392535  
#> 33    0.64 0.61813167 0.91000442 0.51485911 0.577230048 *
#> 34    0.65 0.61813167 0.90355292 0.51178859 0.557220232  
#> 35    0.66 0.62426896 0.89896226 0.51657535 0.546452830  
#> 36    0.68 0.63153062 0.87535804 0.51352409 0.486169861  
#> 37    0.70 0.63464851 0.86985189 0.51446710 0.474201639  
#> 38    0.71 0.63489623 0.86347551 0.51158304 0.459255927  
#> 39    0.72 0.63489623 0.85057252 0.50502710 0.430483935  
#> 40    0.73 0.63489623 0.83766953 0.49829278 0.403804452  
#> 41    0.74 0.65617412 0.83766953 0.52429569 0.414450801  
#> 42    0.75 0.65617412 0.83121803 0.52101497 0.401860403  
#> 43    0.76 0.66044502 0.80670700 0.51354535 0.359914996  
#> 44    0.77 0.66044502 0.79380402 0.50654600 0.338863667  
#> 45    0.78 0.68172291 0.78735252 0.53071553 0.339157064  
#> 46    0.79 0.70372245 0.78111983 0.55668100 0.340092637  
#> 47    0.80 0.71443811 0.75211137 0.55628075 0.304534678  
#> 48    0.82 0.72186315 0.72855668 0.55409806 0.278532225  
#> 49    0.83 0.72596205 0.72334798 0.55715468 0.274237557  
#> 50    0.84 0.75096099 0.70512173 0.58424893 0.264624555  
#> 51    0.85 0.75157102 0.67950071 0.57108843 0.238342867  
#> 52    0.86 0.75645701 0.64872468 0.56075282 0.211582062  
#> 53    0.87 0.80384342 0.62438336 0.62619217 0.209639991  
#> 54    0.88 0.80709908 0.60601600 0.62168946 0.196113459  
#> 55    0.89 0.81160613 0.55577059 0.59941514 0.161361304  
#> 56    0.90 0.83451041 0.54336073 0.63664243 0.161444251  
#> 57    0.91 0.86095379 0.50621795 0.66956076 0.147487401  
#> 58    0.92 0.88223167 0.46105749 0.69104932 0.129068903  
#> 59    0.93 0.88400014 0.42933622 0.67455595 0.113276282  
#> 60    0.94 0.88689532 0.37860209 0.64301335 0.090417552  
#> 61    0.95 0.93071264 0.30156666 0.72007905 0.071818851  
#> 62    0.96 0.93255510 0.21180438 0.62155915 0.040870386  
#> 63    0.97 0.97711462 0.10273651 0.72806779 0.020285274  
#> 64    0.98 1.00000000 0.01935448 1.00000000 0.004570882

In the first line we define the event indicator as we did in the pbc2.id data.frame. The cut-point marked with an asterisk maximizes Youden’s index. To depict the ROC curve, we use the corresponding plot() method:

plot(roc)

The area under the ROC curve is calculated with the tvAUC() function:

tvAUC(roc)
#> 
#>  Time-dependent AUC for the Joint Model jointFit
#> 
#> Estimated AUC: 0.8042
#> At time: 8
#> Using information up to time: 5 (202 subjects still at risk)

This function either accepts an object of class tvROC or of class jm. In the latter case, the user must also provide the newdata, Tstart and Dt or Thoriz arguments. Here we have used the same dataset as the one to fit the model, but, in principle, discrimination could be (better) assessed in another dataset.
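For instance, per the requirement just stated, the same AUC could be computed directly from the jm object by supplying the accuracy arguments explicitly:

```r
tvAUC(jointFit, newdata = pbc2, Tstart = t0, Dt = 3)
```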

To assess the accuracy of the predictions we produce a calibration plot:

calibration_plot(jointFit, newdata = pbc2, Tstart = t0, Dt = 3)

The syntax of the calibration_plot() function is almost identical to that of tvROC(). The kernel density estimate shown in the plot is of the estimated probabilities \(\pi_j(t + \Delta t \mid t) = \pi_j(8 \mid 5)\) for all individuals at risk at year t0 in the data frame provided in the newdata argument. Using the calibration_metrics() function we can also calculate metrics for the accuracy of predictions:

calibration_metrics(jointFit, pbc2, Tstart = 5, Dt = 3)
#>        ICI        E50        E90 
#> 0.04122340 0.03436950 0.06498383

The ICI is the mean absolute difference between the observed and predicted probabilities, E50 is the median absolute difference, and E90 is the 90th percentile of the absolute differences. Finally, we calculate the Brier score as an overall measure of predictive performance. This is computed with the tvBrier() function:

tvBrier(jointFit, newdata = pbc2, Tstart = t0, Dt = 3)
#> 
#> Prediction Error for the Joint Model jointFit
#> 
#> Estimated Brier score: 0.1267
#> At time: 8
#> Using information up to time: 5 (202 subjects still at risk)
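As an aside, the calibration metrics reported by calibration_metrics() above are simple summaries of the absolute differences between the (smoothed) observed and the predicted probabilities. A minimal base-R sketch with simulated probabilities, purely to make the definitions concrete:

```r
set.seed(42)
pred <- runif(200)                                    # predicted probabilities (simulated)
obs <- pmin(pmax(pred + rnorm(200, 0, 0.05), 0), 1)   # 'observed' smoothed probabilities (simulated)
d <- abs(obs - pred)                                  # absolute calibration errors
ICI <- mean(d)                     # integrated calibration index: mean absolute difference
E50 <- median(d)                   # median absolute difference
E90 <- unname(quantile(d, 0.90))   # 90th percentile of the absolute differences
```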

Notes:

  • To obtain valid estimates of the predictive accuracy measures (i.e., time-varying sensitivity, specificity, and the Brier score) we need to account for censoring. A popular way to achieve this is inverse probability of censoring weighting. For this approach to be valid, the model for the weights must be correctly specified. In standard survival analysis, this is achieved using either the Kaplan-Meier estimator or a Cox model for the censoring distribution. However, in the settings where joint models are used, the censoring mechanism often depends on the history of the longitudinal outcomes in a complex manner. This is especially the case when we consider multiple longitudinal outcomes in the analysis, which, moreover, may be recorded at different time points per patient and have missing data. For these reasons, Kaplan-Meier- or Cox-based censoring weights may be difficult to derive or may be biased in these settings. The functions in JMbayes2 that calculate the predictive accuracy measures instead use joint-model-based weights to account for censoring. These weights allow censoring to depend in any possible manner on the history of the longitudinal outcomes; however, they require that the model is appropriately calibrated.
  • The calibration curve, produced by calibration_plot(), and the calibration metrics, produced by calibration_metrics(), are calculated using the procedure described in Austin et al., 2020.
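If IPCW-based weights are nevertheless preferred, recent versions of JMbayes2 expose this choice in the accuracy functions; the type_weights argument shown below reflects our understanding of the interface and may not be available in older package versions:

```r
# switch from the default joint-model-based weights to IPCW (assumed interface)
tvBrier(jointFit, newdata = pbc2, Tstart = t0, Dt = 3,
        type_weights = "IPCW")
```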