A-QCF-Net for Unpaired Multimodal Liver Tumor Segmentation
Analysis
This paper addresses the scarcity of paired multimodal medical imaging datasets by proposing A-QCF-Net, a novel architecture built on quaternion neural networks and an adaptive cross-fusion block. The design enables joint training on unpaired CT and MRI scans, so liver tumors can be segmented effectively without the paired acquisitions that most multimodal methods require. The reported results improve on strong baseline methods, highlighting the potential to unlock large, unpaired imaging archives.
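To make the quaternion component concrete, the sketch below shows a Hamilton-product quaternion 2D convolution, the standard construction used in quaternion convolutional networks. It is an illustrative example, not the exact layer specified by A-QCF-Net; the class name and signature are assumptions.

```python
import torch
import torch.nn as nn

class QuaternionConv2d(nn.Module):
    """Minimal sketch of a quaternion 2D convolution (Hamilton product).

    Channels are grouped into quaternions of 4 real components; each output
    component mixes all four input components through shared real kernels.
    Hypothetical layer for illustration, not the paper's exact implementation.
    """

    def __init__(self, in_channels, out_channels, kernel_size, padding=0):
        super().__init__()
        assert in_channels % 4 == 0 and out_channels % 4 == 0
        c_in, c_out = in_channels // 4, out_channels // 4
        # One real-valued kernel per quaternion weight component.
        self.r = nn.Conv2d(c_in, c_out, kernel_size, padding=padding, bias=False)
        self.i = nn.Conv2d(c_in, c_out, kernel_size, padding=padding, bias=False)
        self.j = nn.Conv2d(c_in, c_out, kernel_size, padding=padding, bias=False)
        self.k = nn.Conv2d(c_in, c_out, kernel_size, padding=padding, bias=False)

    def forward(self, x):
        # Split the channel axis into the four quaternion components.
        xr, xi, xj, xk = torch.chunk(x, 4, dim=1)
        # Hamilton product: every output component depends on all inputs.
        yr = self.r(xr) - self.i(xi) - self.j(xj) - self.k(xk)
        yi = self.r(xi) + self.i(xr) + self.j(xk) - self.k(xj)
        yj = self.r(xj) - self.i(xk) + self.j(xr) + self.k(xi)
        yk = self.r(xk) + self.i(xj) - self.j(xi) + self.k(xr)
        return torch.cat([yr, yi, yj, yk], dim=1)

# Example: map a 16-channel feature map (4 quaternions) to 32 channels.
layer = QuaternionConv2d(16, 32, kernel_size=3, padding=1)
out = layer(torch.randn(1, 16, 64, 64))  # -> shape (1, 32, 64, 64)
```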
Key Takeaways
- Proposes A-QCF-Net, a novel architecture for multimodal medical image segmentation.
- Addresses the problem of unpaired data in medical imaging.
- Utilizes quaternion neural networks and an adaptive cross-fusion block (see the sketch after this list).
- Achieves improved performance over baseline methods on liver tumor segmentation.
- Demonstrates the potential for utilizing large, unpaired imaging archives.
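The adaptive cross-fusion block is not described in detail here. As one plausible reading of the idea, the sketch below uses learned, input-dependent gates to decide how much each modality stream borrows from the other; all names, shapes, and the gating scheme are assumptions made for illustration.

```python
import torch
import torch.nn as nn

class AdaptiveCrossFusion(nn.Module):
    """Hypothetical gated fusion between a CT feature stream and an MRI feature stream.

    Illustrative sketch only; the actual A-QCF-Net fusion design may differ.
    """

    def __init__(self, channels):
        super().__init__()
        # 1x1 convolutions produce per-channel mixing weights from both streams.
        self.gate_ct = nn.Conv2d(2 * channels, channels, kernel_size=1)
        self.gate_mr = nn.Conv2d(2 * channels, channels, kernel_size=1)

    def forward(self, feat_ct, feat_mr):
        joint = torch.cat([feat_ct, feat_mr], dim=1)
        a_ct = torch.sigmoid(self.gate_ct(joint))  # how much MRI to mix into CT
        a_mr = torch.sigmoid(self.gate_mr(joint))  # how much CT to mix into MRI
        fused_ct = feat_ct + a_ct * feat_mr
        fused_mr = feat_mr + a_mr * feat_ct
        return fused_ct, fused_mr

# Usage with dummy same-shape feature maps from the two encoder streams.
block = AdaptiveCrossFusion(channels=64)
fused_ct, fused_mr = block(torch.randn(1, 64, 32, 32), torch.randn(1, 64, 32, 32))
```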
“The jointly trained model achieves Tumor Dice scores of 76.7% on CT and 78.3% on MRI, significantly exceeding the strong unimodal nnU-Net baseline.”