Head of Group

Dr George Mylonas

About us

We use perceptual methods, AI, and frugal robotics innovation to deliver transformative diagnostic and treatment solutions.

What we do

The HARMS lab leverages perceptually enabled methodologies, artificial intelligence, and frugal innovation in robotics (such as soft surgical robots) to deliver transformative solutions for diagnosis and treatment. Our research is driven by both problem-solving and curiosity, aiming to build a comprehensive understanding of the actions, interactions, and reactions occurring in the operating room. We focus on using robotic technologies to facilitate procedures that are not yet widely adopted, particularly in endoluminal surgery, such as advanced treatments for gastrointestinal cancer.

Why is it important?

......

How can it benefit patients?

....

Meet the team

Dr Adrian Rubio Solis
Research Associate in Sensing and Machine Learning

Citation

BibTeX format

@inproceedings{Dong:2024:10.1109/ICRA57147.2024.10611534,
author = {Dong, B and Chen, J and Wang, Z and Deng, K and Li, Y and Lo, B and Mylonas, G},
doi = {10.1109/ICRA57147.2024.10611534},
pages = {8180--8186},
title = {An Intelligent Robotic Endoscope Control System Based on Fusing Natural Language Processing and Vision Models},
url = {http://dx.doi.org/10.1109/ICRA57147.2024.10611534},
year = {2024}
}

RIS format (EndNote, RefMan)

TY  - CPAPER
AB - In recent years, the area of Robot-Assisted Minimally Invasive Surgery (RAMIS) has been standing on the verge of a new wave of innovations. However, autonomy in RAMIS is still in a primitive stage; therefore, most surgeries still require manual control of the endoscope and the robotic instruments, with surgeons needing to switch attention between performing surgical procedures and moving the endoscope camera. Automation may reduce the complexity of surgical operations and consequently reduce the cognitive load on the surgeon while speeding up the surgical process. In this paper, a hybrid robotic endoscope control system based on a fusion model of natural language processing (NLP) and a modified YOLO-V8 vision model is proposed. The proposed system can analyze the current surgical workflow and generate logs that summarize the procedure for teaching and for providing feedback to junior surgeons. A user study of this system indicated a significant reduction in the number of clutching actions and in mean task time, effectively enhancing surgical training.
AU - Dong,B
AU - Chen,J
AU - Wang,Z
AU - Deng,K
AU - Li,Y
AU - Lo,B
AU - Mylonas,G
DO - 10.1109/ICRA57147.2024.10611534
EP - 8186
PY - 2024///
SN - 1050-4729
SP - 8180
TI - An Intelligent Robotic Endoscope Control System Based on Fusing Natural Language Processing and Vision Models
UR - http://dx.doi.org/10.1109/ICRA57147.2024.10611534
ER -
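
The abstract above describes fusing a natural-language interface with a modified YOLO-V8 detector to steer the endoscope. Purely as an illustrative sketch (not the authors' implementation; every function, class, and value below is hypothetical), the Python below shows one way such a fusion loop might be wired together: a keyword-based parser picks out the instrument the surgeon names, a stubbed detector stands in for a trained YOLO-V8 model, and a simple proportional controller produces pan/tilt commands that re-centre that instrument in the image.

from dataclasses import dataclass

@dataclass
class Detection:
    label: str   # e.g. "grasper", "scissors"
    cx: float    # bounding-box centre, normalised to [0, 1]
    cy: float

def parse_command(utterance: str, known_labels: list[str]) -> str | None:
    """Minimal stand-in for the NLP side: return the first known
    instrument name mentioned in the surgeon's utterance."""
    text = utterance.lower()
    for label in known_labels:
        if label in text:
            return label
    return None

def detect_instruments(frame) -> list[Detection]:
    """Placeholder for a trained YOLO-V8 model; a real system would run
    inference on the current endoscope frame here."""
    return [Detection("grasper", 0.72, 0.35), Detection("scissors", 0.40, 0.60)]

def centring_command(target: Detection, gain: float = 0.5) -> tuple[float, float]:
    """Proportional pan/tilt command driving the target toward the image centre (0.5, 0.5)."""
    pan = gain * (target.cx - 0.5)
    tilt = gain * (target.cy - 0.5)
    return pan, tilt

# One iteration of the control loop for the utterance "please follow the grasper"
labels = ["grasper", "scissors", "hook"]
target_label = parse_command("please follow the grasper", labels)
if target_label:
    detections = detect_instruments(frame=None)  # frame would come from the endoscope feed
    matches = [d for d in detections if d.label == target_label]
    if matches:
        pan, tilt = centring_command(matches[0])
        print(f"pan={pan:+.2f}, tilt={tilt:+.2f}")  # sent to the endoscope holder in practice

In a real system the detector and language understanding would come from trained models, and the pan/tilt command would drive the robotic endoscope holder rather than being printed.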

Contact Us

General enquiries
hamlyn@imperial.ac.uk

Facility enquiries
hamlyn.facility@imperial.ac.uk


The Hamlyn Centre
Bessemer Building
South Kensington Campus
Imperial College
London, SW7 2AZ
Map location