
The leap to ordinal: functional prognosis after traumatic brain injury using artificial intelligence


Baseline ordinal prediction of functional outcomes after traumatic brain injury (TBI) in the ICU

The leap to ordinal: Detailed functional prognosis after traumatic brain injury with a flexible modelling approach (https://doi.org/10.1371/journal.pone.0270973)

Contents

- Overview
- Abstract
- Code
- Citation

Overview

This repository contains the code underlying the article entitled The leap to ordinal: Detailed functional prognosis after traumatic brain injury with a flexible modelling approach from the Collaborative European NeuroTrauma Effectiveness Research in TBI (CENTER-TBI) consortium. In this README, we present the abstract to outline the motivation for the work and its findings, followed by a brief description of the code with which we generated these findings.

The code in this repository is commented throughout to describe each step alongside the code that achieves it.

Abstract

After a traumatic brain injury (TBI), outcome prognosis within 24 hours of intensive care unit (ICU) admission is essential for baseline risk adjustment and shared decision making. TBI outcomes are commonly categorised by the Glasgow Outcome Scale – Extended (GOSE) into eight, ordered levels of functional recovery at 6 months after injury. Existing ICU prognostic models predict binary outcomes at a certain threshold of GOSE (e.g., functional independence [GOSE > 4]). We aimed to develop ordinal prediction models that concurrently predict probabilities of each GOSE score. From the ICU strata of the Collaborative European NeuroTrauma Effectiveness Research in TBI (CENTER-TBI) project dataset (65 centres), we extracted all baseline clinical information (1,151 predictors) and 6-month GOSE scores from a prospective cohort of 1,550 adult TBI patients. We analysed the effect of two design elements on ordinal model performance: (1) the baseline predictor set, ranging from a concise set of ten validated predictors to a token-embedded representation of all possible predictors, and (2) the modelling strategy, from ordinal logistic regression to multinomial deep learning. With repeated k-fold cross-validation, we found that expanding the baseline predictor set significantly improved ordinal prediction performance while increasing analytical complexity did not. Half of these gains could be achieved with the addition of eight high-impact predictors (2 demographic variables, 4 protein biomarkers, and 2 severity assessments) to the concise set. At best, ordinal models achieved 0.76 (95% CI: 0.74 – 0.77) ordinal discrimination ability (ordinal c-index) and 57% (95% CI: 54% – 60%) explanation of ordinal variation in 6-month GOSE (Somers’ Dxy). Model performance and the effect of expanding the predictor set decreased at higher GOSE thresholds, indicating the difficulty of predicting better functional outcomes shortly after ICU admission. Our results motivate the search for informative predictors that improve confidence in prognosis of higher GOSE and the development of ordinal dynamic prediction models.

Code

All of the code used in this work can be found in the ./scripts directory as Python (.py), R (.R), or bash (.sh) scripts. Moreover, custom classes have been saved in the ./scripts/classes sub-directory, custom functions have been saved in the ./scripts/functions sub-directory, and custom PyTorch models have been saved in the ./scripts/models sub-directory.

1. In this .py file, we extract the study sample from the CENTER-TBI dataset, filter patients by our study criteria, and convert ICU timestamps to machine-readable format.

2. In this .py file, we create 100 partitions, stratified by 6-month GOSE, for repeated k-fold cross-validation and save the splits into a dataframe for subsequent scripts.
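
For concreteness, a minimal sketch of this partitioning step with scikit-learn, assuming the 100 partitions arise as 20 repeats of 5-fold splits (the repeat/fold counts, column names, and seed are illustrative, not necessarily the study's exact layout):

```python
# Illustrative sketch: 100 GOSE-stratified partitions as 20 repeats of
# 5-fold cross-validation, stored in a long dataframe.
import pandas as pd
from sklearn.model_selection import RepeatedStratifiedKFold

def make_partitions(gupis, gose, n_splits=5, n_repeats=20, seed=2022):
    """gupis: list of patient identifiers; gose: matching outcome labels."""
    rskf = RepeatedStratifiedKFold(n_splits=n_splits, n_repeats=n_repeats,
                                   random_state=seed)
    rows = []
    for part, (train_idx, test_idx) in enumerate(rskf.split(gupis, gose)):
        repeat, fold = divmod(part, n_splits)
        rows += [{'REPEAT': repeat + 1, 'FOLD': fold + 1,
                  'GUPI': gupis[i], 'SET': 'train'} for i in train_idx]
        rows += [{'REPEAT': repeat + 1, 'FOLD': fold + 1,
                  'GUPI': gupis[i], 'SET': 'test'} for i in test_idx]
    return pd.DataFrame(rows)
```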

3. In this .R file, we perform multiple imputation with chained equations (MICE, m = 100) on the concise predictor set for CPM training. The training set for each repeated k-fold CV partition is used to train an independent predictive mean matching imputation transformation for that partition. The result is 100 imputations, one for each repeated k-fold cross-validation partition.
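
The script itself uses R's mice with predictive mean matching; as a rough Python analogue of the same fit-on-train, apply-to-test pattern (a sketch, not the study's pipeline), scikit-learn's chained-equations imputer could look like this:

```python
# Analogous chained-equations imputation in Python (the actual script
# uses R's mice with predictive mean matching): fit the imputer on the
# training fold only, then apply it to the held-out fold.
import numpy as np
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer

X_train = np.array([[35.0, 13.0], [np.nan, 7.0], [62.0, np.nan], [48.0, 15.0]])
X_test = np.array([[np.nan, 3.0]])

imputer = IterativeImputer(sample_posterior=True, random_state=0)
X_train_imputed = imputer.fit_transform(X_train)  # fit on training fold only
X_test_imputed = imputer.transform(X_test)        # no leakage from test fold
```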

4. In this .py file, we define a function to train logistic regression CPMs given the repeated cross-validation dataframe. Then we perform parallelised training of logistic regression CPMs and testing set prediction. Finally, we compile testing set predictions.
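
A hedged sketch of what one partition's training function might look like (hyperparameters and preprocessing are illustrative; the multinomial variant corresponds to CPM_MNLR):

```python
# Minimal sketch of per-partition CPM_MNLR training and test-set
# prediction; scikit-learn's LogisticRegression fits a multinomial
# model for multiclass targets by default. Settings are illustrative.
from joblib import Parallel, delayed
from sklearn.linear_model import LogisticRegression

def train_and_predict(X_train, y_train, X_test):
    model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
    return model.predict_proba(X_test)  # probabilities over GOSE levels

# Parallelised training across the repeated CV partitions, assuming
# `partitions` yields (X_train, y_train, X_test) triples:
# preds = Parallel(n_jobs=-1)(delayed(train_and_predict)(*p) for p in partitions)
```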

5. In this .py file, we create and save the bootstrapping resamples used for all model performance evaluation, prepare compiled CPM_MNLR and CPM_POLR testing set predictions, and calculate/save performance metrics.
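
Since this is the first script to compute the paper's headline metric, here is a minimal sketch of the ordinal c-index (ORC): the mean AUC over all ordered pairs of GOSE classes. Scoring patients by their expected class under the predicted probabilities is one reasonable choice; implementations vary.

```python
# Ordinal c-index (ORC) sketch: average pairwise AUC over all ordered
# class pairs, scoring each patient by the expected class under the
# predicted probability matrix. Assumes y_true is coded 0..K-1 to
# match the columns of prob_matrix.
import itertools
import numpy as np
from sklearn.metrics import roc_auc_score

def ordinal_c_index(y_true, prob_matrix):
    y_true = np.asarray(y_true)
    expected = prob_matrix @ np.arange(prob_matrix.shape[1])
    aucs = []
    for lo, hi in itertools.combinations(np.unique(y_true), 2):
        mask = np.isin(y_true, [lo, hi])
        aucs.append(roc_auc_score(y_true[mask] == hi, expected[mask]))
    return float(np.mean(aucs))
```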

6. Train and optimise CPM_DeepMN and CPM_DeepOR

  1. In this .py file, we first create a grid of combinations of tuning configurations and cross-validation partitions and train CPM_DeepMN or CPM_DeepOR models based on the provided hyperparameter row index. This is run, with multi-array indexing, on the HPC using a bash script (a sketch of this array-job pattern follows this list).
  2. In this .py file, we calculate the ORC of extant validation predictions, prepare bootstrapping resamples for configuration dropout, and drop out configurations that are consistently (α = .05) inferior in performance.
  3. In this .py file, we calculate the ORC in each resample and compare it to the 'optimal' configuration. This is run, with multi-array indexing, on the HPC using a bash script.
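
A hedged sketch of the array-job pattern used throughout these steps: the script materialises the full grid of tuning configurations crossed with CV partitions, then selects a single row from the scheduler's array task ID (the hyperparameter names and the SLURM environment variable are assumptions, not the study's actual search space):

```python
# Illustrative array-job indexing: build the configuration x partition
# grid, then let each HPC array task train exactly one row.
import itertools
import os
import pandas as pd

grid = pd.DataFrame(list(itertools.product([1e-3, 1e-4],    # learning rates
                                           [128, 256],      # hidden units
                                           range(1, 101))), # CV partitions
                    columns=['lr', 'hidden', 'partition'])

task_id = int(os.environ.get('SLURM_ARRAY_TASK_ID', 0))
config = grid.iloc[task_id]
# train_deep_model(lr=config.lr, hidden=config.hidden, partition=config.partition)
```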

7. Calculate and compile CPM_DeepMN and CPM_DeepOR metrics

  1. In this .py file, we calculate performance metrics on the resamples. This is run, with multi-array indexing, on the HPC using a bash script.
  2. In this .py file, we compile all CPM_DeepMN and CPM_DeepOR performance metrics and calculate confidence intervals on all CPM performance metrics (see the percentile-CI sketch after this list).
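
The confidence intervals here come from the bootstrap resamples; a minimal percentile-interval sketch, assuming one metric value per resample:

```python
# Percentile bootstrap CI sketch: given the metric computed on each
# bootstrap resample, take the alpha/2 and 1 - alpha/2 quantiles.
import numpy as np

def percentile_ci(metric_per_resample, alpha=0.05):
    lo, hi = np.quantile(metric_per_resample, [alpha / 2, 1 - alpha / 2])
    return lo, hi
```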

8. In this .R file, we load and prepare formatted CENTER-TBI predictor tokens. Then we convert the formatted predictors to tokens for each repeated cross-validation partition.
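
The tokenisation script is in R; purely as an illustration of the idea, a continuous predictor can be mapped to discrete tokens by binning on training-set quantiles (the bin count and token format here are assumptions):

```python
# Illustrative predictor tokenisation (the actual script is in R):
# continuous variables are discretised into quantile bins learned on
# the training fold, and each value becomes a "NAME_BINk" token.
import numpy as np

def tokenise_continuous(name, values, train_values, n_bins=20):
    edges = np.quantile(train_values, np.linspace(0, 1, n_bins + 1))
    bins = np.digitize(values, edges[1:-1])  # bin indices 0..n_bins-1
    return [f'{name}_BIN{b}' for b in bins]

print(tokenise_continuous('Age', [25, 60], train_values=np.arange(18, 90)))
```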

9. In this .py file, we train APM dictionaries per repeated cross-validation partition and convert tokens to indices.
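
A minimal sketch of the dictionary step, assuming a simple frequency-based vocabulary with index 0 reserved for unknown tokens (the study's dictionary details may differ):

```python
# Token-dictionary sketch: build a vocabulary from the training
# partition's tokens and map token lists to integer indices, with
# index 0 reserved for tokens unseen in training.
from collections import Counter

def build_vocab(train_token_lists, min_freq=1):
    counts = Counter(tok for toks in train_token_lists for tok in toks)
    kept = [tok for tok, n in counts.most_common() if n >= min_freq]
    return {'<unk>': 0, **{tok: i + 1 for i, tok in enumerate(kept)}}

def tokens_to_indices(tokens, vocab):
    return [vocab.get(tok, 0) for tok in tokens]

vocab = build_vocab([['Age_BIN3', 'GCSSeverity_BIN1'], ['Age_BIN3']])
print(tokens_to_indices(['Age_BIN3', 'NewToken'], vocab))  # [1, 0]
```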

10. Train and optimise APM_MN and APM_OR

  1. In this .py file, we first create a grid of combinations of tuning configurations and cross-validation partitions and train APM_MN or APM_OR models based on the provided hyperparameter row index. This is run, with multi-array indexing, on the HPC using a bash script (the same array-job pattern sketched under step 6).
  2. In this .py file, we calculate the ORC of extant validation predictions, prepare bootstrapping resamples for configuration dropout, and drop out configurations that are consistently (α = .05) inferior in performance.
  3. In this .py file, we calculate the ORC in each resample and compare it to the 'optimal' configuration. This is run, with multi-array indexing, on the HPC using a bash script.

11. Calculate and compile APM_MN and APM_OR metrics

  1. In this .py file, we calculate performance metrics on the resamples. This is run, with multi-array indexing, on the HPC using a bash script.
  2. In this .py file, we compile all APM_MN and APM_OR performance metrics and calculate confidence intervals on all APM performance metrics.

12. Assess feature significance in APM_MN

  1. In this .py file, we find all top-performing model checkpoint files for SHAP calculation and calculate SHAP values based on the given parameters. This is run, with multi-array indexing, on the HPC using a bash script.
  2. In this .py file, we find all files storing calculated SHAP values, create combinations with the study GUPIs, and compile SHAP values for the given GUPI and output-type combination. This is run, with multi-array indexing, on the HPC using a bash script.
  3. In this .py file, we find all files storing GUPI-specific SHAP values, then compile and save summary SHAP values across the study patient set (see the summarisation sketch after this list).
  4. In this .py file, we compile significance weights across trained APMs and summarise the significance weights.
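
The SHAP computation itself depends on the trained PyTorch checkpoints, but the summarisation in step 3 reduces to an aggregation over the compiled values; a sketch, assuming the compiled values sit in a long dataframe whose column names are illustrative:

```python
# Summary-SHAP sketch: given compiled per-patient SHAP values in long
# format (GUPI, token, output class, SHAP value), rank tokens by the
# mean absolute SHAP value across the study patient set.
import pandas as pd

def summarise_shap(shap_df):
    """shap_df columns assumed: GUPI, TOKEN, OUTPUT, SHAP."""
    return (shap_df.assign(abs_shap=shap_df['SHAP'].abs())
                   .groupby(['TOKEN', 'OUTPUT'])['abs_shap'].mean()
                   .reset_index(name='mean_abs_shap')
                   .sort_values('mean_abs_shap', ascending=False))
```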

13. In this .R file, we load IMPACT variables from CENTER-TBI, load and prepare the added variables from CENTER-TBI, and multiply impute the extended concise predictor set in parallel. The training set for each repeated k-fold CV partition is used to train an independent predictive mean matching imputation transformation for that partition. The result is 100 imputations, one for each repeated k-fold cross-validation partition.

14. In this .py file, we define a function to train logistic regression eCPMs given the repeated cross-validation dataframe. Then we perform parallelised training of logistic regression eCPMs and testing set prediction. Finally, we compile testing set predictions.

15. In this .py file, we load the common bootstrapping resamples (that will be used for all model performance evaluation), prepare compiled eCPM_MNLR and eCPM_POLR testing set predictions, and calculate/save performance metrics.

16. Train and optimise eCPM_DeepMN and eCPM_DeepOR

  1. In this .py file, we first create a grid of combinations of tuning configurations and cross-validation partitions and train eCPM_DeepMN or eCPM_DeepOR models based on the provided hyperparameter row index. This is run, with multi-array indexing, on the HPC using a bash script (the same array-job pattern sketched under step 6).
  2. In this .py file, we calculate the ORC of extant validation predictions, prepare bootstrapping resamples for configuration dropout, and drop out configurations that are consistently (α = .05) inferior in performance.
  3. In this .py file, we calculate the ORC in each resample and compare it to the 'optimal' configuration. This is run, with multi-array indexing, on the HPC using a bash script.

17. Calculate and compile eCPM_DeepMN and eCPM_DeepOR metrics

  1. In this .py file, we calculate performance metrics on the resamples. This is run, with multi-array indexing, on the HPC using a bash script.
  2. In this .py file, we compile all eCPM_DeepMN and eCPM_DeepOR performance metrics and calculate confidence intervals on all eCPM performance metrics.

18. In this .py file, we perform ordinal regression analyses of summary characteristics, CPM characteristics, and eCPM characteristics (a minimal proportional-odds sketch follows).
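
A minimal proportional-odds (ordinal logistic) sketch with statsmodels; the outcome and predictor values below are toys, and the characteristics analysed by the script are far more extensive than this two-predictor design:

```python
# Proportional-odds regression sketch with statsmodels' OrderedModel,
# regressing ordinal 6-month GOSE on illustrative characteristics.
import pandas as pd
from statsmodels.miscmodels.ordinal_model import OrderedModel

df = pd.DataFrame({'GOSE': [1, 3, 3, 5, 7, 8, 4, 6, 2, 8, 5, 1],
                   'age':  [54, 30, 62, 41, 25, 33, 70, 48, 66, 22, 39, 75],
                   'GCS':  [6, 12, 7, 14, 15, 13, 5, 9, 4, 15, 11, 3]})

model = OrderedModel(df['GOSE'], df[['age', 'GCS']], distr='logit')
result = model.fit(method='bfgs', disp=False)
print(result.summary())
```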

19. In this .R file, we produce the figures for the manuscript and the supplementary figures. Most of the quantitative figures are produced with the ggplot2 package.

Citation

@article{10.1371/journal.pone.0270973,
    doi = {10.1371/journal.pone.0270973},
    author = {Bhattacharyay, Shubhayu AND Milosevic, Ioan AND Wilson, Lindsay AND Menon, David K. AND Stevens, Robert D. AND Steyerberg, Ewout W. AND Nelson, David W. AND Ercole, Ari AND the CENTER-TBI investigators and participants},
    journal = {PLOS ONE},
    publisher = {Public Library of Science},
    title = {The leap to ordinal: Detailed functional prognosis after traumatic brain injury with a flexible modelling approach},
    year = {2022},
    month = {07},
    volume = {17},
    url = {https://doi.org/10.1371/journal.pone.0270973},
    pages = {1-29},
    abstract = {When a patient is admitted to the intensive care unit (ICU) after a traumatic brain injury (TBI), an early prognosis is essential for baseline risk adjustment and shared decision making. TBI outcomes are commonly categorised by the Glasgow Outcome Scale–Extended (GOSE) into eight, ordered levels of functional recovery at 6 months after injury. Existing ICU prognostic models predict binary outcomes at a certain threshold of GOSE (e.g., prediction of survival [GOSE > 1]). We aimed to develop ordinal prediction models that concurrently predict probabilities of each GOSE score. From a prospective cohort (n = 1,550, 65 centres) in the ICU stratum of the Collaborative European NeuroTrauma Effectiveness Research in TBI (CENTER-TBI) patient dataset, we extracted all clinical information within 24 hours of ICU admission (1,151 predictors) and 6-month GOSE scores. We analysed the effect of two design elements on ordinal model performance: (1) the baseline predictor set, ranging from a concise set of ten validated predictors to a token-embedded representation of all possible predictors, and (2) the modelling strategy, from ordinal logistic regression to multinomial deep learning. With repeated k-fold cross-validation, we found that expanding the baseline predictor set significantly improved ordinal prediction performance while increasing analytical complexity did not. Half of these gains could be achieved with the addition of eight high-impact predictors to the concise set. At best, ordinal models achieved 0.76 (95% CI: 0.74–0.77) ordinal discrimination ability (ordinal c-index) and 57% (95% CI: 54%– 60%) explanation of ordinal variation in 6-month GOSE (Somers’ Dxy). Model performance and the effect of expanding the predictor set decreased at higher GOSE thresholds, indicating the difficulty of predicting better functional outcomes shortly after ICU admission. Our results motivate the search for informative predictors that improve confidence in prognosis of higher GOSE and the development of ordinal dynamic prediction models.},
    number = {7}
}
