Article

InstrumentNet: An integrated model for real-time segmentation of intracranial surgical instruments

Journal

COMPUTERS IN BIOLOGY AND MEDICINE
Volume 166

Publisher

PERGAMON-ELSEVIER SCIENCE LTD
DOI: 10.1016/j.compbiomed.2023.107565

Keywords

Intracranial surgical instrument; Object detection; Image segmentation; Multi-scale feature fusion; Adaptive feature weighting fusion

In robot-assisted surgery, precise surgical instrument segmentation can provide accurate location and pose data for surgeons, helping them perform a series of surgical operations efficiently and safely. However, several interfering factors remain, such as surgical instruments being occluded by tissue, multiple surgical instruments interlacing with each other, and instrument shaking during surgery. To address these issues, an effective surgical instrument segmentation network called InstrumentNet is proposed, which adopts YOLOv7 as the object detection framework to achieve real-time detection. Specifically, a multiscale feature fusion network is constructed to avoid problems such as feature redundancy and feature loss and to enhance generalization ability. Furthermore, an adaptive feature-weighted fusion mechanism is introduced to regulate network learning and convergence. Finally, a semantic segmentation head is introduced to integrate the detection and segmentation functions, and a multitask learning loss function is specifically designed to optimize segmentation performance on surgical instruments. The proposed segmentation model is validated on a dataset of intracranial surgical instruments provided by seven experts from Beijing Tiantan Hospital, achieving an mAP of 93.5%, a Dice score of 82.49%, and an MIoU of 85.48%. The experimental results demonstrate that the proposed model achieves good segmentation performance on surgical instruments compared to other advanced models and can provide a reference for developing intelligent medical robots.
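The abstract does not give formulas for the adaptive feature-weighted fusion mechanism or the multitask loss. The sketch below is a minimal illustration, assuming a normalized-weight fusion (in the style of BiFPN-like "fast normalized fusion") and a simple weighted sum of detection and segmentation losses; the function names, the `eps` term, and the `alpha`/`beta` coefficients are hypothetical and not taken from the paper.

```python
import numpy as np

def adaptive_weighted_fusion(features, weights, eps=1e-4):
    """Fuse same-shaped multi-scale feature maps with learnable
    non-negative weights, normalized to a convex combination.
    (Illustrative only; the paper's exact mechanism may differ.)"""
    w = np.maximum(weights, 0.0)      # keep learnable weights non-negative
    w = w / (w.sum() + eps)           # normalize so the weights sum to ~1
    return sum(wi * f for wi, f in zip(w, features))

def multitask_loss(det_loss, seg_loss, alpha=1.0, beta=1.0):
    """Hypothetical multitask objective: a weighted sum of the
    detection loss and the segmentation loss."""
    return alpha * det_loss + beta * seg_loss
```

In such a scheme, the fusion weights are learned jointly with the rest of the network, letting it emphasize the feature scale most informative for thin, elongated instruments, while `alpha` and `beta` balance the detection and segmentation branches during joint training.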

