Multi-Modal Co-Learning for Liver Lesion Segmentation on PET-CT Images

Zhongliang Xue, Ping Li, Liang Zhang, Xiaoyuan Lu, Guangming Zhu, Peiyi Shen, Syed Afaq Ali Shah, Mohammed Bennamoun

Research output: Contribution to journal › Article › peer-review


Abstract

Liver lesion segmentation is an essential process to assist doctors in hepatocellular carcinoma diagnosis and treatment planning. Multi-modal positron emission tomography and computed tomography (PET-CT) scans are widely utilized for this purpose because of their complementary feature information. However, current methods ignore the interaction of information across the two modalities during feature extraction, omit the co-learning of feature maps of different resolutions, and do not ensure that shallow and deep features complement each other sufficiently. In this paper, our proposed model achieves feature interaction across multi-modal channels by sharing the down-sampling blocks between the two encoding branches to eliminate misleading features. Furthermore, we combine feature maps of different resolutions to derive spatially varying fusion maps and enhance the lesion information. In addition, we introduce a similarity loss function as a consistency constraint, to prevent the predictions of the separate refactoring branches from diverging for the same regions. We evaluate our model for liver tumor segmentation on a dataset of PET-CT scans, compare our method with baseline multi-modal techniques (multi-branch, multi-channel, and cascaded networks), and demonstrate that our method achieves significantly higher accuracy ($p < 0.05$) than the baseline models.
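
The abstract names three concrete mechanisms: down-sampling blocks shared between the PET and CT encoding branches, spatially varying fusion of multi-resolution feature maps, and a similarity loss that keeps the two refactoring branches consistent. The paper's code is not reproduced here; the PyTorch sketch below is only a minimal illustration of the first and third ideas. The names (`SharedDownBlock`, `similarity_loss`), the layer choices, and the MSE form of the consistency loss are all assumptions for illustration, not the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SharedDownBlock(nn.Module):
    """A down-sampling block whose weights are shared by the PET and CT
    encoding branches, so both modalities pass through the same learned
    filters (hypothetical layer configuration)."""
    def __init__(self, in_ch: int, out_ch: int):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),
            nn.BatchNorm2d(out_ch),
            nn.ReLU(inplace=True),
        )
        self.pool = nn.MaxPool2d(2)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.pool(self.conv(x))

def similarity_loss(pred_a: torch.Tensor, pred_b: torch.Tensor) -> torch.Tensor:
    """Penalize disagreement between the two branches' probability maps
    for the same regions (assumed MSE form of the consistency constraint)."""
    return F.mse_loss(torch.sigmoid(pred_a), torch.sigmoid(pred_b))

# Usage: one shared block processes both modalities with identical weights.
block = SharedDownBlock(1, 32)
pet = torch.randn(2, 1, 128, 128)   # PET slice batch
ct = torch.randn(2, 1, 128, 128)    # CT slice batch
feat_pet, feat_ct = block(pet), block(ct)

# Consistency term between two (dummy) branch predictions.
loss = similarity_loss(torch.randn(2, 1, 128, 128), torch.randn(2, 1, 128, 128))
```

In training, a loss of this kind would typically be added to a standard segmentation loss (e.g. Dice or cross-entropy) with a weighting coefficient; the abstract does not specify the exact formulation or weighting used in the paper.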

Original language: English
Pages (from-to): 3531-3542
Number of pages: 12
Journal: IEEE Transactions on Medical Imaging
Volume: 40
Issue number: 12
DOIs
Publication status: Published - 1 Dec 2021
