Abstract
Introduction
Understanding major adverse cardiac events (MACE) risk is fundamental to improving cardiovascular health. We explored whether risk prediction could be enhanced by integrating a fully automated Coronary Artery Disease Reporting and Data System (CAD-RADS) with patient demographics and detailed anatomical data from Coronary Computed Tomography Angiography (CTCA) scans, using a multi-modal deep learning system.
Methods
We employed convolutional neural networks for automated CAD-RADS generation and a gradient-boosting decision tree model to evaluate the effectiveness of CAD-RADS in predicting MACE. We then built a multi-modal deep learning system that combined automated CAD-RADS with patient demographics and CTCA-derived segmentations of the left ventricle, aorta, and heart. We evaluated the performance of the different models (i.e., expert-generated CAD-RADS, fully automated CAD-RADS, and the multi-modal system) using the area under the receiver operating characteristic curve (AUCROC).
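The following is a minimal sketch of the tabular half of such a pipeline: a gradient-boosting classifier trained on CAD-RADS grade plus demographics and scored with AUCROC. It uses scikit-learn and synthetic data; the feature set, cohort simulation, and model settings are illustrative assumptions, not the authors' implementation.

```python
# Illustrative sketch (not the authors' pipeline): gradient boosting on
# CAD-RADS grade + demographics, evaluated with AUCROC on synthetic data.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 639  # cohort size reported in the abstract

# Hypothetical tabular features: automated CAD-RADS grade (0-5), age, sex.
X = np.column_stack([
    rng.integers(0, 6, n),        # CAD-RADS grade
    rng.normal(69.9, 8.7, n),     # age (mean/SD from the abstract)
    rng.integers(0, 2, n),        # sex (1 = male)
])
y = rng.binomial(1, 45 / 639, n)  # synthetic 30-day MACE label

X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.3, stratify=y, random_state=0
)

model = GradientBoostingClassifier(random_state=0)
model.fit(X_tr, y_tr)
proba = model.predict_proba(X_te)[:, 1]
print(f"AUCROC: {roc_auc_score(y_te, proba):.3f}")
```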
Results
Among 995 patients, 639 with both imaging and clinical data (mean age 69.9±8.7 years, 361 males) were studied. Within 30 days, 45 patients experienced MACE. Automated CAD-RADS (AUCROC = 0.69) demonstrated comparable performance to expert human assessments (AUCROC = 0.67, p-value = 0.77), while the multi-modal deep learning system (AUCROC = 0.821) outperformed CAD-RADS in predicting MACE (p-value = 0.02), achieving better sensitivity (0.78) and specificity (0.79).
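For clarity on the reported operating-point metrics, the sketch below shows how sensitivity and specificity are derived from a confusion matrix at a chosen probability threshold. The labels, scores, and threshold are illustrative assumptions, not the study data.

```python
# Illustrative sketch: sensitivity and specificity at a probability threshold.
import numpy as np
from sklearn.metrics import confusion_matrix

y_true = np.array([0, 0, 1, 1, 0, 1, 0, 0])                    # example labels
y_prob = np.array([0.1, 0.4, 0.8, 0.6, 0.3, 0.7, 0.2, 0.5])    # example scores
y_pred = (y_prob >= 0.5).astype(int)                            # assumed threshold

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
sensitivity = tp / (tp + fn)  # true positive rate
specificity = tn / (tn + fp)  # true negative rate
print(f"sensitivity={sensitivity:.2f}, specificity={specificity:.2f}")
```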
Conclusions
The novel multi-modal system built using fully automated CAD-RADS and CTCA-derived segmentations, along with patient demographics, outperforms both the expert-generated and fully automated CAD-RADS for MACE prediction. This approach has the potential to enhance patient outcomes by leveraging the synergy between automated imaging assessments and comprehensive patient data.
| Original language | English |
| --- | --- |
| Article number | 906 |
| Pages (from-to) | S542-S543 |
| Number of pages | 2 |
| Journal | Heart, Lung & Circulation |
| Volume | 33 |
| Issue number | 4 |
| Early online date | 28 Jul 2024 |
| DOIs | |
| Publication status | Published - Aug 2024 |