The Cognitive Vision in Robotic Surgery Lab is developing computer vision and AI techniques for intraoperative navigation and real-time tissue characterisation.

Head of Group

Dr Stamatia (Matina) Giannarou

411 Bessemer Building
South Kensington Campus

+44 (0) 20 7594 8904

What we do

Surgery is undergoing rapid change, driven by recent technological advances and the ongoing pursuit of early intervention and personalised treatment. We are developing computer vision and artificial intelligence techniques for intraoperative navigation and real-time tissue characterisation during minimally invasive and robot-assisted operations, to improve both the efficacy and safety of surgical procedures. Our work aims to revolutionise the treatment of cancers and pave the way for autonomous robot-assisted interventions.

Why is it important?

With recent advances in medical imaging, sensing, and robotics, surgical oncology is entering a new era of early intervention, personalised treatment, and faster patient recovery. The main goal is to completely remove cancerous tissue while minimising damage to surrounding areas. However, achieving this can be challenging, often leading to imprecise surgeries, high re-excision rates, and reduced quality of life due to unintended injuries. Therefore, technologies that enhance cancer detection and enable more precise surgeries may improve patient outcomes.

How can it benefit patients?

Our methods aim to ensure patients receive accurate and timely surgical treatment while reducing surgeons' mental workload and minimising errors. By improving tumour excision, our hybrid diagnostic and therapeutic tools will lower recurrence rates and improve survival outcomes. More complete tumour removal will also reduce the need for repeat procedures, improving patients' quality of life and life expectancy, and benefiting society and the economy.

Citation

BibTex format

@article{Huang:2022:10.1109/tmrb.2022.3170215,
author = {Huang, B and Nguyen, A and Wang, S and Wang, Z and Mayer, E and Tuch, D and Vyas, K and Giannarou, S and Elson, DS},
doi = {10.1109/tmrb.2022.3170215},
journal = {IEEE Transactions on Medical Robotics and Bionics},
pages = {335--338},
title = {Simultaneous depth estimation and surgical tool segmentation in laparoscopic images},
url = {http://dx.doi.org/10.1109/tmrb.2022.3170215},
volume = {4},
year = {2022}
}

RIS format (EndNote, RefMan)

TY  - JOUR
AB - Surgical instrument segmentation and depth estimation are crucial steps to improve autonomy in robotic surgery. Most recent works treat these problems separately, making the deployment challenging. In this paper, we propose a unified framework for depth estimation and surgical tool segmentation in laparoscopic images. The network has an encoder-decoder architecture and comprises two branches for simultaneously performing depth estimation and segmentation. To train the network end to end, we propose a new multi-task loss function that effectively learns to estimate depth in an unsupervised manner, while requiring only semi-ground truth for surgical tool segmentation. We conducted extensive experiments on different datasets to validate these findings. The results showed that the end-to-end network successfully improved the state-of-the-art for both tasks while reducing the complexity during their deployment.
AU - Huang,B
AU - Nguyen,A
AU - Wang,S
AU - Wang,Z
AU - Mayer,E
AU - Tuch,D
AU - Vyas,K
AU - Giannarou,S
AU - Elson,DS
DO - 10.1109/tmrb.2022.3170215
EP - 338
PY - 2022///
SN - 2576-3202
SP - 335
TI - Simultaneous depth estimation and surgical tool segmentation in laparoscopic images
T2 - IEEE Transactions on Medical Robotics and Bionics
UR - http://dx.doi.org/10.1109/tmrb.2022.3170215
UR - https://ieeexplore.ieee.org/document/9762754
UR - http://hdl.handle.net/10044/1/97519
VL - 4
ER -
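The cited paper's abstract describes a shared encoder feeding two branches, one for depth estimation (trained unsupervised) and one for tool segmentation, optimised jointly with a combined loss. The toy sketch below illustrates that multi-task structure only; the function names, feature sizes, loss weighting, and the "warped" image (a stand-in for the view synthesis used in unsupervised depth learning) are all illustrative assumptions, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def encoder(pixels):
    """Toy shared encoder: a fixed linear map producing per-pixel features."""
    w = np.ones((3, 8)) / 3.0            # illustrative weights
    return pixels @ w                     # (N, 3) -> (N, 8)

def depth_branch(features):
    """Depth head: squash shared features to a positive per-pixel depth."""
    return np.exp(features.mean(axis=1))  # (N,)

def seg_branch(features, n_classes=2):
    """Segmentation head: per-pixel class logits -> softmax probabilities."""
    w = rng.standard_normal((features.shape[1], n_classes))
    logits = features @ w
    e = np.exp(logits - logits.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def multitask_loss(probs, warped, target, labels, alpha=0.5):
    """Combine an unsupervised photometric term (depth branch) with
    cross-entropy on the (semi-)ground-truth tool labels (seg branch)."""
    photometric = np.mean(np.abs(warped - target))
    ce = -np.mean(np.log(probs[np.arange(len(labels)), labels] + 1e-9))
    return alpha * photometric + (1 - alpha) * ce

# Toy forward pass on 16 pixels with 3 channels.
pixels = rng.random((16, 3))
feats = encoder(pixels)
depth = depth_branch(feats)
probs = seg_branch(feats)
labels = rng.integers(0, 2, size=16)
# Stand-in for a neighbouring view synthesised from the predicted depth.
warped = pixels.mean(axis=1) + 0.01 * depth
loss = multitask_loss(probs, warped, pixels.mean(axis=1), labels)
```

Both branches share one encoder pass, which is what lets the real network serve both tasks with little extra cost at deployment time.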

Contact Us



The Hamlyn Centre
Bessemer Building
South Kensington Campus
Imperial College
London, SW7 2AZ