Dynamic Predictions

Theory

Based on the general framework of joint models presented earlier, we are interested in deriving cumulative risk probabilities for a new subject \(j\) who has survived up to time point \(t\) and has provided longitudinal measurements \(\mathcal Y_{kj}(t) = \{ y_{kj}(t_{jl}); 0 \leq t_{jl} \leq t, l = 1, \ldots, n_j, k = 1, \ldots, K\}\), with \(K\) denoting the number of longitudinal outcomes. The probabilities of interest are \[\begin{array}{l} \pi_j(u \mid t) = \mbox{Pr}\{T_j^* \leq u \mid T_j^* > t, \mathcal Y_j(t), \mathcal D_n\}\\\\ = \displaystyle 1 - \int\int \frac{S(u \mid b_j, \theta)}{S(t \mid b_j, \theta)} \; p\{b_j \mid T_j^* > t, \mathcal Y_j(t), \theta\} \; p(\theta \mid \mathcal D_n) \; db_j d\theta, \end{array}\] where \(S(\cdot)\) denotes the survival function conditional on the random effects, and \(\mathcal Y_j(t) = \{\mathcal Y_{1j}(t), \ldots, \mathcal Y_{Kj}(t)\}\). Combining the three terms in the integrand, we can devise a Monte Carlo scheme to obtain estimates of these probabilities, namely,

  1. Sample a value \(\tilde \theta\) from the posterior of the parameters \([\theta \mid \mathcal D_n]\).

  2. Sample a value \(\tilde b_j\) from the posterior of the random effects \([b_j \mid T_j^* > t, \mathcal Y_j(t), \tilde \theta]\).

  3. Compute the ratio of survival probabilities \(S(u \mid \tilde b_j, \tilde \theta) \Big / S(t \mid \tilde b_j, \tilde \theta)\).

Replicating these steps \(L\) times, we can estimate the conditional cumulative risk probabilities by \[1 - \frac{1}{L} \sum_{l=1}^L \frac{S(u \mid \tilde b_j^{(l)}, \tilde \theta^{(l)})}{S(t \mid \tilde b_j^{(l)}, \tilde \theta^{(l)})},\] and their standard error by calculating the standard deviation across the Monte Carlo samples.
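
To fix ideas, the following toy sketch illustrates the three-step scheme in R. It is not the internal implementation of JMbayes2: the conditional survival function is taken to be Weibull with a subject-specific random effect in the log hazard, and the posterior draws of Steps 1 and 2 (which JMbayes2 obtains from the fitted joint model, e.g., via a Metropolis-Hastings algorithm for the random effects) are mimicked here by simple random-number generators.

set.seed(123)
t0 <- 5    # subject j is known to be event-free up to t0
u  <- 8    # horizon at which the cumulative risk is wanted
L  <- 500  # number of Monte Carlo samples

# conditional survival function S(t | b, theta) of the toy model
S_cond <- function (t, b, shape, scale) exp(- (t / scale)^shape * exp(b))

# Step 1: mimic draws from the posterior of theta = (shape, scale)
shape <- rlnorm(L, meanlog = log(1.2), sdlog = 0.05)
scale <- rlnorm(L, meanlog = log(10), sdlog = 0.05)

# Step 2: mimic draws from the posterior of the random effect b_j
b_j <- rnorm(L, mean = 0.3, sd = 0.2)

# Step 3: ratio of conditional survival probabilities per draw
ratio <- S_cond(u, b_j, shape, scale) / S_cond(t0, b_j, shape, scale)

# Monte Carlo estimate of pi_j(u | t0) and its standard error
c(estimate = 1 - mean(ratio), SE = sd(1 - ratio))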

Example

We will illustrate the calculation of dynamic predictions with package JMbayes2 using a trivariate joint model fitted to the PBC dataset, with the longitudinal outcomes serBilir (continuous), prothrombin time (continuous), and ascites (dichotomous). We start by fitting the univariate mixed models. For the two continuous outcomes, we allow for nonlinear subject-specific time effects using natural cubic splines. For ascites, we postulate linear subject-specific profiles for the log odds. The code is:

library("JMbayes2")   # also attaches nlme, GLMMadaptive, and survival
library("splines")    # for ns(); loaded explicitly in case it is not attached

# log serum bilirubin: nonlinear subject-specific time effects (3 df splines)
fm1 <- lme(log(serBilir) ~ ns(year, 3) * sex, data = pbc2,
           random = ~ ns(year, 3) | id, control = lmeControl(opt = 'optim'))

# prothrombin time: nonlinear subject-specific time effects (2 df splines)
fm2 <- lme(prothrombin ~ ns(year, 2) * sex, data = pbc2,
           random = ~ ns(year, 2) | id, control = lmeControl(opt = 'optim'))

# ascites: mixed-effects logistic regression with linear time effects
fm3 <- mixed_model(ascites ~ year * sex, data = pbc2,
                   random = ~ year | id, family = binomial())

Next, we fit the Cox model for the time to either transplantation or death. The first line defines the composite event indicator, and the second fits the Cox model, in which we have also included the baseline covariates drug and age. The code is:

pbc2.id$event <- as.numeric(pbc2.id$status != "alive")
CoxFit <- coxph(Surv(years, event) ~ drug + age, data = pbc2.id)

The joint model is fitted with the following call to jm():

jointFit <- jm(CoxFit, list(fm1, fm2, fm3), time_var = "year")

We want to calculate predictions for the longitudinal and survival outcomes for Patients 25 and 93. As a first step, we extract the data of these patients and store them in the data.frame ND with the code:

t0 <- 5
ND <- pbc2[pbc2$id %in% c(25, 93), ]  # data of Patients 25 and 93
ND <- ND[ND$year < t0, ]              # keep only measurements before t0
ND$event <- 0                         # event-free up to t0
ND$years <- t0                        # last known event-free time point

We only use the measurements collected during the first five years of follow-up (third line), and we further specify that the patients were event-free up to this time point (fourth and fifth lines).

We start with predictions for the longitudinal outcomes. These are produced by the predict() method for jm objects and follow the same steps as the procedure described above for the cumulative risk probabilities. The only difference is in Step 3, where instead of calculating the cumulative risk we calculate the predicted values for the longitudinal outcomes. There are two options controlled by the type_pred argument, namely predictions at the scale of the response/outcome (default) or at the linear predictor level. The type argument controls whether the predictions are for the mean subject (i.e., including only the fixed effects) or subject-specific (i.e., including both the fixed and the random effects). In the newdata argument we provide the available measurements of the two patients. These are used to sample their random effects in Step 2 presented above. This is done with a Metropolis-Hastings algorithm that runs for n_mcmc iterations; all iterations but the last one are discarded as burn-in. Finally, argument n_samples corresponds to the value of \(L\) defined above and specifies the number of Monte Carlo samples:

predLong1 <- predict(jointFit, newdata = ND, return_newdata = TRUE)
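
For illustration, the same kind of call with the arguments discussed above set explicitly could look as follows; the option strings "response" and "subject_specific" and the numbers of iterations and samples shown here are illustrative values based on the description above, not necessarily the package defaults:

predict(jointFit, newdata = ND,
        type_pred = "response",      # predictions on the scale of the outcome
        type = "subject_specific",   # include both fixed and random effects
        n_mcmc = 55L, n_samples = 200L,
        return_newdata = TRUE)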

Argument return_newdata specifies that the predictions are returned as extra columns of the newdata data.frame. By default the 95% credible intervals are also included. Using the plot() method for objects returned by predict.jm(..., return_newdata = TRUE), we can display the predictions. With the following code we do that for the first longitudinal outcome:

plot(predLong1)

When we want to calculate predictions at other, future time points, we can specify them via the times argument. In the following example, we calculate predictions from time t0 to time 12:

predLong2 <- predict(jointFit, newdata = ND,
                     times = seq(t0, 12, length.out = 51),
                     return_newdata = TRUE)

We show these predictions for the second outcome and the second patient (i.e., Patient 93). This is achieved by suitably specifying the outcomes and subject arguments of the plot() method:

plot(predLong2, outcomes = 2, subject = 93)

We continue with the predictions for the event outcome. To let predict() know that we want the cumulative risk probabilities, we specify process = "event":

predSurv <- predict(jointFit, newdata = ND, process = "event",
                    times = seq(t0, 12, length.out = 51),
                    return_newdata = TRUE)

The predictions are included again as extra columns in the corresponding data.frame. To depict the predictions of both the longitudinal and survival outcomes combined, we provide both objects to the plot() method:

plot(predLong2, predSurv)

Again, by default, the plot shows the predictions for the first subject (i.e., Patient 25) and the first longitudinal outcome (i.e., log(serBilir)). However, the plot() method has a series of arguments that allow users to customize the plot. We illustrate some of these capabilities with the following figure. First, we specify that we want to depict all three outcomes using outcomes = 1:3 (note that at most three outcomes can be displayed simultaneously). Next, via the subject argument, we specify that we want to show the predictions for Patient 93. Note that for serum bilirubin we used the log transformation in the specification of the linear mixed model; hence, we receive predictions on the transformed scale. To show predictions on the original scale, we use the fun_long argument. Because we have three outcomes, this needs to be a list of three functions. The first one, corresponding to serum bilirubin, is exp(), and for the other two we use identity() because we do not wish to transform their predictions. Analogously, the fun_event argument transforms the predictions for the event outcome; in the example below we specify that we want to obtain survival probabilities instead of cumulative risks. Using the arguments bg, col_points, col_line_long, col_line_event, fill_CI_long, and fill_CI_event, we change the appearance of the plot to a dark theme. Finally, pos_ylab_long specifies the relative positions of the y-axis labels for the three longitudinal outcomes.

cols <- c('#F25C78', '#D973B5', '#F28322')
plot(predLong2, predSurv, outcomes = 1:3, subject = 93,
     fun_long = list(exp, identity, identity),
     fun_event = function (x) 1 - x,
     ylab_event = "Survival Probabilities",
     ylab_long = c("Serum Bilirubin", "Prothrombin", "Ascites"),
     bg = '#132743', col_points = cols, col_line_long = cols,
     col_line_event = '#F7F7FF', col_axis = "white", 
     fill_CI_long = c("#F25C7880", "#D973B580", "#F2832280"),
     fill_CI_event = "#F7F7FF80",
     pos_ylab_long = c(1.9, 1.9, 0.08))

Predictive accuracy

We evaluate the discriminative capability of the model using time-dependent ROC methodology. We calculate the components of the ROC curve using information up to year five, and we are interested in events occurring within a three-year window. That is, we discriminate between patients who will experience the event within the interval (t0, t0 + Dt] (in our case, \(T_j^* \in (5, 8]\)) and patients who will remain event-free for at least 8 years (i.e., \(T_j^* > 8\)). The calculations are performed with the following call to tvROC():

pbc2$event <- as.numeric(pbc2$status != "alive")
roc <- tvROC(jointFit, newdata = pbc2, Tstart = t0, Dt = 3)
roc
#> 
#>  Time-dependent Sensitivity and Specificity for the Joint Model jointFit
#> 
#> At time: 8
#> Using information up to time: 5 (202 subjects still at risk)
#> 
#>    cut-off         SN         SP        qSN         qSP  
#> 1     0.06 0.04245478 1.00000000 0.03287933 1.000000000  
#> 2     0.07 0.06368217 1.00000000 0.04956683 1.000000000  
#> 3     0.09 0.13824070 0.99039566 0.10270423 0.757490419  
#> 4     0.11 0.15946809 0.99039566 0.12027230 0.784435928  
#> 5     0.14 0.17720224 0.98287707 0.12981598 0.685560692  
#> 6     0.15 0.19842963 0.98287707 0.14780414 0.711763968  
#> 7     0.16 0.21965702 0.98287707 0.16598264 0.733935970  
#> 8     0.19 0.24088441 0.98287707 0.18435453 0.752940544  
#> 9     0.22 0.28333920 0.98287707 0.22169095 0.783822976  
#> 10    0.23 0.28333920 0.97642092 0.21748388 0.719825009  
#> 11    0.25 0.30456659 0.97642092 0.23653506 0.735390286  
#> 12    0.26 0.32579398 0.97642092 0.25579444 0.749317113  
#> 13    0.27 0.34702137 0.96996477 0.27126141 0.711089652  
#> 14    0.32 0.37433642 0.96536013 0.29394390 0.695771598  
#> 15    0.36 0.38028575 0.95425727 0.29275549 0.630398775  
#> 16    0.38 0.40151314 0.95425727 0.31310031 0.644614207  
#> 17    0.39 0.41300182 0.95129532 0.32243639 0.635616827  
#> 18    0.43 0.43422921 0.94483917 0.33938902 0.615776271  
#> 19    0.44 0.47668399 0.94483917 0.38181384 0.640564899  
#> 20    0.46 0.47732481 0.93857792 0.37893889 0.612273095  
#> 21    0.49 0.49855220 0.93857792 0.40063635 0.624022395  
#> 22    0.52 0.51977959 0.93857792 0.42259212 0.635080560  
#> 23    0.54 0.52965629 0.93512569 0.43108126 0.625582558  
#> 24    0.58 0.57211107 0.92866954 0.47296608 0.620822291  
#> 25    0.60 0.60273913 0.92507253 0.50465003 0.621616281  
#> 26    0.63 0.60273913 0.90570408 0.49530380 0.557028461  
#> 27    0.64 0.60799325 0.90084593 0.49882681 0.544792672  
#> 28    0.66 0.62922064 0.89438978 0.51988827 0.536233371  
#> 29    0.68 0.62951537 0.88156712 0.51403965 0.501594951  
#> 30    0.69 0.62951537 0.87511097 0.51086344 0.485151330  
#> 31    0.70 0.63284491 0.86321132 0.50883889 0.458209535  
#> 32    0.71 0.63597971 0.85125244 0.50649598 0.433075346  
#> 33    0.72 0.63597971 0.84479629 0.50316150 0.419423165  
#> 34    0.73 0.63597971 0.83188399 0.49635549 0.393581535  
#> 35    0.74 0.65998105 0.82627151 0.52302897 0.394945617  
#> 36    0.76 0.65998105 0.81335921 0.51631107 0.371642681  
#> 37    0.77 0.66418799 0.80172642 0.51547124 0.354011885  
#> 38    0.78 0.68541538 0.79527027 0.53952106 0.353821788  
#> 39    0.79 0.69553457 0.77252334 0.54102972 0.324260513  
#> 40    0.80 0.71836140 0.76655365 0.56900760 0.326340532  
#> 41    0.82 0.72629360 0.74314156 0.56805709 0.298845892  
#> 42    0.83 0.72999422 0.72489863 0.56367067 0.278305494  
#> 43    0.84 0.72999422 0.71198633 0.55657588 0.263559976  
#> 44    0.85 0.73639713 0.67519682 0.54489077 0.228114791  
#> 45    0.86 0.80007930 0.67519682 0.64575455 0.254429059  
#> 46    0.87 0.80724902 0.63864053 0.63948428 0.223461563  
#> 47    0.88 0.82996019 0.61326721 0.66652388 0.210908847  
#> 48    0.89 0.83566180 0.59563285 0.66803683 0.199194476  
#> 49    0.90 0.85819121 0.58311655 0.70468685 0.197995653  
#> 50    0.91 0.85819121 0.55729195 0.69198521 0.179568567  
#> 51    0.92 0.86201058 0.50680438 0.67207221 0.148499865  
#> 52    0.93 0.88498038 0.44922896 0.69021383 0.123970472  
#> 53    0.94 0.88851980 0.41156854 0.67363769 0.106292074  
#> 54    0.95 0.91060251 0.37309178 0.70873720 0.095460992  
#> 55    0.96 0.91195076 0.30248418 0.65125594 0.066899363  
#> 56    0.97 0.93402438 0.19298699 0.59614921 0.035404565  
#> 57    0.98 0.99950031 0.06440953 0.98990626 0.015680861  
#> 58    0.99 1.00000000 0.01291230 1.00000000 0.003041425

In the first line we define the event indicator in pbc2, as we did for the pbc2.id data.frame. In the printed output, the cut-point marked with an asterisk maximizes Youden's index. To depict the ROC curve, we use the corresponding plot() method:
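
plot(roc)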

The area under the ROC curve is calculated with the tvAUC() function:

tvAUC(roc)
#> 
#>  Time-dependent AUC for the Joint Model jointFit
#> 
#> Estimated AUC: 0.8088
#> At time: 8
#> Using information up to time: 5 (202 subjects still at risk)

This function accepts either an object of class tvROC or an object of class jm. In the latter case, the user must also provide the newdata, Tstart, and Dt or Thoriz arguments. Here we have used the same dataset as the one used to fit the model but, in principle, discrimination would be better assessed in an external dataset.
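
For instance, assuming the same arguments as in the tvROC() call above, the AUC can also be computed directly from the fitted joint model (output omitted):

tvAUC(jointFit, newdata = pbc2, Tstart = t0, Dt = 3)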

To assess the accuracy of the predictions we produce a calibration plot:

calibration_plot(jointFit, newdata = pbc2, Tstart = t0, Dt = 3)

The syntax of the calibration_plot() function is almost identical to that of tvROC(). The kernel density estimate shown in the plot is of the estimated probabilities \(\pi_j(t + \Delta t \mid t) = \pi_j(8 \mid 5)\) for all individuals at risk at year t0 in the data frame provided via the newdata argument. Using the calibration_metrics() function, we can also calculate metrics for the accuracy of the predictions:

calibration_metrics(jointFit, pbc2, Tstart = 5, Dt = 3)
#>        ICI        E50        E90 
#> 0.05763675 0.04720823 0.12893676

The ICI is the mean absolute difference between the observed and predicted probabilities, E50 is the median absolute difference, and E90 is the 90th percentile of the absolute differences. Finally, we calculate the Brier score as an overall measure of predictive performance. This is computed with the tvBrier() function:

tvBrier(jointFit, newdata = pbc2, Tstart = t0, Dt = 3)
#> 
#> Prediction Error for the Joint Model 'jointFit'
#> 
#> Estimated Brier score: 0.1268
#> At time: 8
#> For the 202 subjects at risk at time 5
#> Number of subjects with an event in [5, 8): 40
#> Number of subjects with a censored time in [5, 8): 58
#> Accounting for censoring using model-based weights

The Brier score evaluates the predictive accuracy at time Tstart + Dt. To summarize the predictive accuracy in the interval (t0, t0 + Dt], we can use the integrated Brier score. The corresponding integral is approximated using Simpson's rule:

tvBrier(jointFit, newdata = pbc2, Tstart = t0, Dt = 3, integrated = TRUE)
#> 
#> Prediction Error for the Joint Model 'jointFit'
#> 
#> Estimated Integrated Brier score: 0.0829
#> In the time interval: [5, 8)
#> For the 202 subjects at risk at time 5
#> Number of subjects with an event in [5, 8): 40
#> Number of subjects with a censored time in [5, 8): 58
#> Accounting for censoring using model-based weights

The tvBrier() function also implements inverse probability of censoring weighting (IPCW) to account for censoring in the interval (t0, t0 + Dt], using the Kaplan-Meier estimate of the censoring distribution (however, see the note below):

tvBrier(jointFit, newdata = pbc2, Tstart = t0, Dt = 3, integrated = TRUE,
        type_weights = "IPCW")
#> 
#> Prediction Error for the Joint Model 'jointFit'
#> 
#> Estimated Integrated Brier score: 0.0844
#> In the time interval: [5, 8)
#> For the 202 subjects at risk at time 5
#> Number of subjects with an event in [5, 8): 40
#> Number of subjects with a censored time in [5, 8): 58
#> Accounting for censoring using inverse probability of censoring Kaplan-Meier weights

Notes:

  • To obtain valid estimates of the predictive accuracy measures (i.e., time-varying sensitivity, specificity, and the Brier score), we need to account for censoring. A popular method to achieve this is inverse probability of censoring weighting. For this approach to be valid, the model for the weights needs to be correctly specified. In standard survival analysis, this is achieved using either the Kaplan-Meier estimator or a Cox model for the censoring distribution. However, in settings where joint models are used, the censoring mechanism often depends on the history of the longitudinal outcomes in a complex manner. This is especially the case when multiple longitudinal outcomes are considered in the analysis. Also, these outcomes may be recorded at different time points per patient and have missing data. For these reasons, Kaplan-Meier- or Cox-based censoring weights may be difficult to derive or may be biased in these settings. The functions in JMbayes2 that calculate the predictive accuracy measures use joint-model-based weights to account for censoring. These weights allow censoring to depend in any possible manner on the history of the longitudinal outcomes. However, they require that the joint model is appropriately calibrated.
  • The calibration curve, produced by calibration_plot(), and the calibration metrics, produced by calibration_metrics(), are calculated using the procedure described in Austin et al. (2020).