SPA: Efficient User-Preference Alignment against Uncertainty in Medical Image Segmentation

Jiayuan Zhu*, Junde Wu*, Cheng Ouyang, Konstantinos Kamnitsas, J. Alison Noble
University of Oxford
*equal contribution

Our uncertainty-aware interactive segmentation model, SPA, efficiently produces segmentations whose decisions on uncertain pixels align with users' preferences. It does so by modeling both uncertainty and human interaction. At inference time, the user is presented with one recommended prediction and a few representative segmentations that capture the uncertainty, and selects the one best aligned with their clinical needs. If the user is unsatisfied with the recommended prediction, the model learns from their selection, adapts itself, and presents a new set of representative segmentations. Compared with conventional interactive segmentation models, our approach minimizes user interaction and eliminates the need for painstaking pixel-wise adjustments.
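The select-and-adapt loop described above can be sketched in a few lines. This is a hypothetical illustration, not the authors' implementation: the candidate-selection rule (greedy farthest-point over pixel disagreement) and the soft preference reweighting are stand-ins for SPA's actual mechanisms, and all names (`representative_candidates`, `weights`, `user_pick`) are illustrative.

```python
# Hypothetical sketch of an SPA-style select-and-adapt loop (illustrative only).
# A set of sampled segmentations stands in for the model's uncertainty; each
# round shows a few diverse candidates, the user picks one, and preference
# weights shift toward segmentations resembling that choice.
import numpy as np

rng = np.random.default_rng(0)

def representative_candidates(samples, k=3):
    """Greedily pick k mutually dissimilar samples by pixel disagreement."""
    chosen = [0]
    while len(chosen) < k:
        # distance of each sample to its nearest already-chosen candidate
        dists = [min(np.mean(s != samples[c]) for c in chosen) for s in samples]
        chosen.append(int(np.argmax(dists)))
    return chosen

# Toy uncertainty-aware model: 8 sampled binary masks over a 4x4 image.
samples = [rng.integers(0, 2, size=(4, 4)) for _ in range(8)]
weights = np.ones(len(samples)) / len(samples)  # uniform prior over samples

for _ in range(3):  # interaction rounds
    cand_idx = representative_candidates(samples, k=3)
    user_pick = cand_idx[0]  # stand-in for the user's actual selection
    # Soft preference update: upweight samples similar to the chosen mask.
    sims = np.array([np.mean(s == samples[user_pick]) for s in samples])
    weights = weights * np.exp(4.0 * sims)
    weights /= weights.sum()

recommended = samples[int(np.argmax(weights))]
```

In the toy loop the "user" always picks the first candidate; in practice the selection comes from a human, and the reweighted distribution is what lets the next round's recommendation reflect accumulated preference.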

Abstract

Medical image segmentation data inherently contain uncertainty. This can stem from both imperfect image quality and variability in labeling preferences on ambiguous pixels, which depend on annotator expertise and the clinical context of the annotations. For instance, a boundary pixel might be labeled as tumor in diagnosis to avoid under-estimation of severity, but as normal tissue in radiotherapy to prevent damage to sensitive structures. As segmentation preferences vary across downstream applications, it is often desirable for an image segmentation model to offer user-adaptable predictions rather than a fixed output. While prior uncertainty-aware and interactive methods offer adaptability, they are inefficient at test time: uncertainty-aware models require users to choose from numerous similar outputs, while interactive models demand significant user input through click or box prompts to refine segmentation. To address these challenges, we propose SPA, a new Segmentation Preference Alignment framework that efficiently adapts to diverse test-time preferences with minimal human interaction. By presenting users with a small set of distinct segmentation candidates that best capture the uncertainty, it reduces the effort required to reach a preferred segmentation. To accommodate user preference, we introduce a probabilistic mechanism that leverages user feedback to adapt the model's segmentation preference. The proposed framework is evaluated on several medical image segmentation tasks: color fundus images, CT scans of lung lesions and kidneys, and MRI scans of the brain and prostate. SPA shows 1) a significant reduction in user time and effort compared to existing interactive segmentation approaches, 2) strong adaptability based on human feedback, and 3) state-of-the-art image segmentation performance across different imaging modalities and semantic labels.

SPA Performance & Visualization


Performance analysis comparing Dice Scores across deterministic, uncertainty-aware, and interactive models. SAM-series models use clicks for interaction, while SAM-U uses bounding boxes. SPA, with its multi-choice correction-proposal setting, consistently outperforms all compared models across diverse datasets. 1-Iter and 3-Iter indicate performance after one and three iterations, respectively.
Visual comparison of segmentation results with deterministic, uncertainty-aware, and interactive models after six iterations. SPA provides better adaptability, particularly at boundary regions.
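For reference, the Dice Score reported in the comparisons above is the standard overlap metric between a predicted and a ground-truth mask; the snippet below is a textbook definition, not code from the SPA paper.

```python
# Standard Dice coefficient for binary segmentation masks:
# Dice(A, B) = 2*|A ∩ B| / (|A| + |B|), ranging from 0 (no overlap) to 1.
import numpy as np

def dice_score(pred, gt, eps=1e-7):
    """Dice overlap between two binary masks (eps guards empty masks)."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    return (2.0 * inter + eps) / (pred.sum() + gt.sum() + eps)

pred = np.array([[1, 1], [0, 0]])
gt = np.array([[1, 0], [0, 0]])
# overlap 1, mask sizes 2 and 1 -> Dice = 2/3
```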

SPA Demonstrates Extraordinary Efficiency


Efficiency analysis comparing the average number of iterations required to reach specific Dice Scores across interactive models. Models that failed to reach the target Dice Score within six iterations are assigned an iteration count of ten. SPA consistently requires fewer iterations to achieve high-performance segmentation results.

BibTeX

@misc{zhu_spa_2024,
  title={SPA: Efficient User-Preference Alignment against Uncertainty in Medical Image Segmentation},
  author={Zhu, Jiayuan and Wu, Junde and Ouyang, Cheng and Kamnitsas, Konstantinos and Noble, J. Alison},
  year={2024},
  url={http://arxiv.org/abs/2411.15513},
  doi={10.48550/arXiv.2411.15513},
}