Generalizable Surgical Scene Segmentation of Hyperspectral Images

  • Date

    Tuesday, 2 July 2024, 10:00

  • Speaker

    Jan Sellner

  • Address

    Mathematikon
    Room 04.MC.100
    (AIH Meeting-Room, 4th floor)

This thesis, “Generalizable Surgical Scene Segmentation of Hyperspectral Images”, addresses the challenge of tissue discrimination during surgery, with the potential to reduce postoperative complications. It presents a comprehensive approach to accurate, automatic surgical scene segmentation using hyperspectral imaging (HSI), which overcomes the limitations of conventional RGB imaging and extends the surgeon’s vision. A spectral analysis demonstrates that the most significant source of variability in the spectral data is the tissue under observation rather than the specific acquisition conditions. Since numerous segmentation networks have to be trained, optimized data loading techniques for HSI are introduced that reduce training time and improve GPU utilization. While networks operating on RGB data are well established, the optimal input representation of HSI data remains an open question. A comprehensive validation study in this work finds that HSI outperforms RGB across various spatial granularities of the input data (pixels vs. superpixels vs. patches vs. images). An important requirement for real-world applicable networks is generalizability to out-of-distribution data. Hence, this thesis analyses domain shifts induced by individual subjects, geometric changes, and the transition from one species to another. Whereas variations across individuals pose only a minor challenge, geometric domain shifts are addressed with a simple, network-independent organ transplantation augmentation that improves segmentation performance. Finally, successful knowledge transfer between species is demonstrated via an augmentation that applies linear spectral transformations, previously learned on the source species, to data of the target species.
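
The abstract mentions optimized HSI data loading without spelling out the pipeline. Below is a minimal, hypothetical sketch of one common way to reduce loading overhead in PyTorch: cubes stored as float16 in a memory-mapped file and streamed through a DataLoader with multiple workers, pinned memory, and asynchronous GPU transfer. File names, shapes, dtypes, and parameter values are assumptions for illustration, not the pipeline used in the thesis.

```python
"""Hedged sketch of fast HSI loading: memory-mapped float16 cubes fed to the
GPU via a multi-worker PyTorch DataLoader.  All file names and shapes are
illustrative assumptions."""
import numpy as np
import torch
from torch.utils.data import DataLoader, Dataset


class MemmapHSIDataset(Dataset):
    """Reads hyperspectral cubes lazily from memory-mapped .npy arrays."""

    def __init__(self, cube_file: str, label_file: str):
        self.cubes = np.load(cube_file, mmap_mode="r")    # (N, H, W, C), float16
        self.labels = np.load(label_file, mmap_mode="r")  # (N, H, W), uint8

    def __len__(self) -> int:
        return len(self.cubes)

    def __getitem__(self, idx: int):
        # Copy only the requested cube from disk and promote it to float32.
        cube = torch.from_numpy(np.asarray(self.cubes[idx], dtype=np.float32))
        label = torch.from_numpy(np.asarray(self.labels[idx], dtype=np.int64))
        return cube.permute(2, 0, 1), label  # channels-first for conv layers


if __name__ == "__main__":
    dataset = MemmapHSIDataset("cubes.npy", "labels.npy")  # hypothetical files
    loader = DataLoader(dataset, batch_size=4, num_workers=8,
                        pin_memory=True, persistent_workers=True)
    device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
    for cubes, labels in loader:
        cubes = cubes.to(device, non_blocking=True)   # overlaps copy with compute
        labels = labels.to(device, non_blocking=True)
        break
```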
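
The validation study compares input representations at different spatial granularities. The sketch below illustrates, under assumed shapes and with SLIC superpixels as a stand-in, how pixel spectra, superpixel spectra, and patches could be derived from a single HSI cube (the image granularity is simply the full cube); it is not the exact preprocessing of the thesis.

```python
"""Minimal sketch of the four spatial granularities (pixels, superpixels,
patches, images) derived from an HSI cube of assumed shape (H, W, C).
Requires scikit-image >= 0.19 for the channel_axis argument."""
import numpy as np
from skimage.segmentation import slic


def pixel_spectra(cube: np.ndarray) -> np.ndarray:
    """One spectrum per pixel, shape (H*W, C)."""
    h, w, c = cube.shape
    return cube.reshape(h * w, c)


def superpixel_spectra(cube: np.ndarray, n_segments: int = 200) -> np.ndarray:
    """Average spectrum per SLIC superpixel, shape (n_superpixels, C)."""
    labels = slic(cube, n_segments=n_segments, channel_axis=-1, start_label=0)
    return np.stack([cube[labels == s].mean(axis=0) for s in np.unique(labels)])


def patches(cube: np.ndarray, size: int = 32) -> np.ndarray:
    """Non-overlapping spatial patches, shape (n_patches, size, size, C)."""
    h, w, c = cube.shape
    rows, cols = h // size, w // size
    cube = cube[: rows * size, : cols * size]
    blocks = cube.reshape(rows, size, cols, size, c).swapaxes(1, 2)
    return blocks.reshape(rows * cols, size, size, c)


if __name__ == "__main__":
    cube = np.random.rand(96, 128, 100).astype(np.float32)  # synthetic HSI cube
    print(pixel_spectra(cube).shape)      # (12288, 100)
    print(superpixel_spectra(cube).shape)
    print(patches(cube).shape)            # (12, 32, 32, 100)
    # the "image" granularity is the full cube itself
```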
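
The organ transplantation augmentation is described only at a high level. A minimal copy-paste style interpretation is sketched below: the spectra and labels of one organ are copied from a donor image into an acceptor image at the same spatial location. The function name, shapes, and the same-location pasting are illustrative assumptions rather than the exact method of the thesis.

```python
"""Hedged sketch of a copy-paste style "organ transplantation" augmentation."""
import numpy as np


def organ_transplantation(
    acceptor_cube: np.ndarray,  # (H, W, C) hyperspectral acceptor image
    acceptor_mask: np.ndarray,  # (H, W) integer label mask of the acceptor
    donor_cube: np.ndarray,     # (H, W, C) hyperspectral donor image
    donor_mask: np.ndarray,     # (H, W) integer label mask of the donor
    organ_label: int,           # class index of the organ to transplant
) -> tuple[np.ndarray, np.ndarray]:
    """Return an augmented copy of the acceptor containing the donor organ."""
    organ = donor_mask == organ_label  # boolean mask of the donor organ pixels
    cube = acceptor_cube.copy()
    mask = acceptor_mask.copy()
    cube[organ] = donor_cube[organ]    # transplant the spectra
    mask[organ] = organ_label          # transplant the labels
    return cube, mask


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    acceptor = rng.random((96, 128, 100), dtype=np.float32)
    donor = rng.random((96, 128, 100), dtype=np.float32)
    acceptor_mask = rng.integers(0, 19, size=(96, 128))
    donor_mask = rng.integers(0, 19, size=(96, 128))
    aug_cube, aug_mask = organ_transplantation(
        acceptor, acceptor_mask, donor, donor_mask, organ_label=7
    )
    print(aug_cube.shape, int((aug_mask == 7).sum()))
```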
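
For the cross-species transfer, the abstract states that linear transformations learned on the source species are applied as an augmentation to the target species. The sketch below shows one possible interpretation under strong assumptions: a linear spectral map is estimated between two acquisition states of the source species via least squares and then applied to target-species spectra. The estimation method, the state pairing, and all names are assumptions, not the procedure used in the thesis.

```python
"""Hedged sketch of a cross-species augmentation based on a linear spectral
transformation learned on the source species."""
import numpy as np


def fit_linear_transform(src_state_a: np.ndarray,
                         src_state_b: np.ndarray) -> np.ndarray:
    """Estimate W such that src_state_a @ W ~ src_state_b (both shape (N, C))."""
    W, *_ = np.linalg.lstsq(src_state_a, src_state_b, rcond=None)
    return W  # (C, C)


def augment_target_spectra(tgt_spectra: np.ndarray, W: np.ndarray) -> np.ndarray:
    """Apply the source-species transformation to target-species spectra."""
    return tgt_spectra @ W


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    C = 100                                    # number of spectral channels
    src_state_a = rng.random((5000, C))        # source species, state A
    src_state_b = src_state_a * 0.9 + 0.01     # source species, state B (toy shift)
    W = fit_linear_transform(src_state_a, src_state_b)

    tgt_spectra = rng.random((2000, C))        # target species spectra
    tgt_augmented = augment_target_spectra(tgt_spectra, W)
    print(tgt_augmented.shape)                 # (2000, 100)
```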

In summary, the semantic segmentation networks introduced in this thesis identify 19 different tissue types in open surgery with high efficiency, are robust to changes in the surgical context, and leverage knowledge transfer between species. These findings are supported by thorough validation on datasets of unprecedented size.