Below is a list of all relevant publications authored by Robotics Forum members.

  • Conference paper
    Vrielink TJCO, Chao M, Darzi A, Mylonas GP et al., 2018,

    ESD CYCLOPS: A new robotic surgical system for GI surgery

    , IEEE International Conference on Robotics and Automation (ICRA), Publisher: IEEE Computer Soc., Pages: 150-157, ISSN: 1050-4729

    Gastrointestinal (GI) cancers account for 1.5 million deaths worldwide. Endoscopic Submucosal Dissection (ESD) is an advanced therapeutic endoscopy technique with superior clinical outcome due to the minimally invasive and en bloc removal of tumours. In the western world, ESD is seldom carried out, due to its complex and challenging nature. Various surgical systems are being developed to make this therapy accessible; however, these solutions have shown limited operational workspace and dexterity, or low force exertion capabilities. This paper presents the ESD CYCLOPS system, a bimanual surgical robotic attachment that can be mounted at the end of any flexible endoscope. The system is able to achieve forces of up to 46 N, and showed a mean error of 0.217 mm during an elliptical tracing task. The workspace and instrument dexterity are demonstrated in pre-clinical ex vivo trials, in which ESD is successfully performed by a GI surgeon. The system is currently undergoing pre-clinical in vivo validation.

  • Conference paper
    Pittiglio G, Kogkas A, Vrielink JO, Mylonas G et al., 2018,

    Dynamic Control of Cable Driven Parallel Robots with Unknown Cable Stiffness: a Joint Space Approach

    , IEEE International Conference on Robotics and Automation (ICRA), Publisher: IEEE Computer Soc., Pages: 948-955, ISSN: 1050-4729
  • Conference paper
    Runciman M, Darzi A, Mylonas G, 2018,

    Deployable disposable self-propelling and variable stiffness devices for minimally invasive surgery

    , Conference on New Technologies for Computer/Robot Assisted Surgery
  • Conference paper
    Goncalves Nunes U, Demiris Y, 2018,

    3D motion segmentation of articulated rigid bodies based on RGB-D data

    , British Machine Vision Conference (BMVC 2018), Publisher: British Machine Vision Association (BMVA)

    This paper addresses the problem of motion segmentation of articulated rigid bodies from a single-view RGB-D data sequence. Current methods either perform dense motion segmentation, and consequently are very computationally demanding, or rely on sparse 2D feature points, which may not be sufficient to represent the entire scene. In this paper, we advocate the use of 3D semi-dense motion segmentation, which also bridges some limitations of standard 2D methods (e.g. background removal). We cast the 3D motion segmentation problem into a subspace clustering problem, adding an adaptive spectral clustering that estimates the number of object rigid parts. The resultant method has few parameters to adjust, takes less time than the temporal length of the scene and requires no post-processing.
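
    The adaptive spectral clustering step can be illustrated with the eigengap heuristic, which estimates the number of clusters (here, rigid parts) from the spectrum of the graph Laplacian. The sketch below is a minimal illustration under that assumption, not the authors' implementation; the affinity matrix built from the 3D semi-dense features is taken as given.

        import numpy as np
        from sklearn.cluster import KMeans

        def adaptive_spectral_clustering(affinity, max_k=10):
            """Cluster trajectories, estimating k via the eigengap heuristic."""
            # Symmetric normalised Laplacian: L = I - D^(-1/2) A D^(-1/2)
            d = affinity.sum(axis=1)
            d_inv_sqrt = np.diag(1.0 / np.sqrt(np.maximum(d, 1e-12)))
            laplacian = np.eye(len(affinity)) - d_inv_sqrt @ affinity @ d_inv_sqrt
            eigvals, eigvecs = np.linalg.eigh(laplacian)  # ascending order
            # Largest gap among the smallest eigenvalues suggests the
            # number of rigid parts.
            k = int(np.argmax(np.diff(eigvals[:max_k + 1]))) + 1
            # Embed in the first k eigenvectors, row-normalise, run k-means.
            emb = eigvecs[:, :k]
            emb /= np.linalg.norm(emb, axis=1, keepdims=True) + 1e-12
            return k, KMeans(n_clusters=k, n_init=10).fit_predict(emb)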

  • Conference paper
    Saputra RP, Kormushev P, 2018,

    Casualty detection for mobile rescue robots via ground-projected point clouds

    , Towards Autonomous Robotic Systems (TAROS) 2018, Publisher: Springer, Cham, Pages: 473-475, ISSN: 0302-9743

    In order to operate autonomously, mobile rescue robots need to be able to detect human casualties in disaster situations. In this paper, we propose a novel method for autonomous detection of casualties lying down on the ground based on point-cloud data. This data can be obtained from different sensors, such as an RGB-D camera or a 3D LIDAR sensor. The method is based on a ground-projected point-cloud (GPPC) image to achieve human body shape detection. A preliminary experiment has been conducted using the RANSAC method for floor detection and the HOG feature and the SVM classifier to detect human body shape. The results show that the proposed method succeeds in identifying a casualty from point-cloud data in a wide range of viewing angles.
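
    The ground-projection step can be sketched in a few lines: fit the floor plane with RANSAC, then rasterise the in-plane point coordinates into a 2D density image on which a HOG+SVM detector can be run. This is a hedged sketch, not the authors' code; the input is assumed to be an N x 3 NumPy point cloud.

        import numpy as np

        def ransac_floor(points, iters=200, thresh=0.03, seed=0):
            """Fit the dominant plane (the floor) with a basic RANSAC loop."""
            rng = np.random.default_rng(seed)
            best = (None, None, np.zeros(len(points), dtype=bool))
            for _ in range(iters):
                p0, p1, p2 = points[rng.choice(len(points), 3, replace=False)]
                n = np.cross(p1 - p0, p2 - p0)
                if np.linalg.norm(n) < 1e-9:
                    continue
                n = n / np.linalg.norm(n)
                inliers = np.abs((points - p0) @ n) < thresh
                if inliers.sum() > best[2].sum():
                    best = (n, p0, inliers)
            return best  # (plane normal, point on plane, inlier mask)

        def gppc_image(points, normal, origin, res=0.02, size=256):
            """Project points onto the floor plane and bin them into a grid."""
            u = np.cross(normal, [0.0, 0.0, 1.0])
            if np.linalg.norm(u) < 1e-6:      # normal parallel to z axis
                u = np.cross(normal, [0.0, 1.0, 0.0])
            u /= np.linalg.norm(u)
            v = np.cross(normal, u)
            rel = points - origin
            px = ((rel @ u) / res + size // 2).astype(int)
            py = ((rel @ v) / res + size // 2).astype(int)
            img = np.zeros((size, size), dtype=np.float32)
            ok = (px >= 0) & (px < size) & (py >= 0) & (py < size)
            np.add.at(img, (py[ok], px[ok]), 1.0)  # point-density image
            return img / max(img.max(), 1.0)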

  • Conference paper
    Pardo F, Tavakoli A, Levdik V, Kormushev P et al., 2018,

    Time limits in reinforcement learning

    , International Conference on Machine Learning, Pages: 4042-4051

    In reinforcement learning, it is common to let an agent interact for a fixed amount of time with its environment before resetting it and repeating the process in a series of episodes. The task that the agent has to learn can either be to maximize its performance over (i) that fixed period, or (ii) an indefinite period where time limits are only used during training to diversify experience. In this paper, we provide a formal account of how time limits could effectively be handled in each of the two cases and explain why not doing so can cause state aliasing and invalidation of experience replay, leading to suboptimal policies and training instability. In case (i), we argue that the terminations due to time limits are in fact part of the environment, and thus a notion of the remaining time should be included as part of the agent’s input to avoid violation of the Markov property. In case (ii), the time limits are not part of the environment and are only used to facilitate learning. We argue that this insight should be incorporated by bootstrapping from the value of the state at the end of each partial episode. For both cases, we illustrate empirically the significance of our considerations in improving the performance and stability of existing reinforcement learning algorithms, showing state-of-the-art results on several control tasks.
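
    The case (ii) remedy amounts to a one-line change in the temporal-difference target: when an episode ends only because the time limit was hit, keep bootstrapping from the value of the final state rather than treating it as terminal. A minimal sketch, assuming a Gym-style loop that exposes a separate timeout flag (here called timed_out):

        # Time-limit-aware TD target: cut bootstrapping only at true
        # environment terminations, not at training-time timeouts.
        def td_target(reward, next_value, done, timed_out, gamma=0.99):
            if done and not timed_out:
                return reward                   # genuine terminal state
            return reward + gamma * next_value  # bootstrap through timeouts

        # Case (i): when the time limit is part of the task itself, append
        # the normalised remaining time to the observation instead, so the
        # state stays Markov:  obs = np.append(obs, (limit - t) / limit)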

  • Conference paper
    Cully AHR, Demiris Y, 2018,

    Hierarchical behavioral repertoires with unsupervised descriptors

    , Genetic and Evolutionary Computation Conference 2018, Publisher: ACM

    Enabling artificial agents to automatically learn complex, versatile and high-performing behaviors is a long-standing challenge. This paper presents a step in this direction with hierarchical behavioral repertoires that stack several behavioral repertoires to generate sophisticated behaviors. Each repertoire of this architecture uses the lower repertoires to create complex behaviors as sequences of simpler ones, while only the lowest repertoire directly controls the agent's movements. This paper also introduces a novel approach to automatically define behavioral descriptors thanks to an unsupervised neural network that organizes the produced high-level behaviors. The experiments show that the proposed architecture enables a robot to learn how to draw digits in an unsupervised manner after having learned to draw lines and arcs. Compared to traditional behavioral repertoires, the proposed architecture reduces the dimensionality of the optimization problems by orders of magnitude and provides behaviors with twice the fitness. More importantly, it enables the transfer of knowledge between robots: a hierarchical repertoire evolved for a robotic arm to draw digits can be transferred to a humanoid robot by simply changing the lowest layer of the hierarchy. This enables the humanoid to draw digits although it has never been trained for this task.
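
    The stacking idea can be caricatured as repertoires whose entries at level n are sequences of indices into level n-1, with only level 0 emitting motor commands. The toy behaviors below are invented for illustration and are not taken from the paper:

        # Level 0: primitives that directly produce motor commands.
        level0 = {
            "line": lambda: ["move(10, 0)"],
            "arc":  lambda: ["turn(30)", "move(5, 5)"],
        }
        # Level 1: behaviors defined purely over the level below.
        level1 = {"digit_7": ["line", "arc"], "digit_1": ["line"]}

        def execute(name, levels):
            """Recursively expand a behavior into level-0 motor commands."""
            top = levels[-1]
            if len(levels) == 1:
                return top[name]()
            return [c for sub in top[name] for c in execute(sub, levels[:-1])]

        print(execute("digit_7", [level0, level1]))
        # Transfer to a new robot = swap level0; higher levels are reused.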

  • Journal article
    Kucukyilmaz A, Demiris Y, 2018,

    Learning shared control by demonstration for personalized wheelchair assistance

    , IEEE Transactions on Haptics, Vol: 11, Pages: 431-442, ISSN: 1939-1412

    An emerging research problem in assistive robotics is the design of methodologies that allow robots to provide personalized assistance to users. For this purpose, we present a method to learn shared control policies from demonstrations offered by a human assistant. We train a Gaussian process (GP) regression model to continuously regulate the level of assistance between the user and the robot, given the user's previous and current actions and the state of the environment. The assistance policy is learned after only a single human demonstration, i.e., in one shot. Our technique is evaluated in a one-of-a-kind experimental study, where the machine-learned shared control policy is compared to human assistance. Our analyses show that our technique is successful in emulating human shared control, by matching the location and amount of offered assistance on different trajectories. We observed that the effort requirements of the users were comparable between human-robot and human-human settings. Under the learned policy, the jerkiness of the user's joystick movements dropped significantly, despite a significant increase in the jerkiness of the robot assistant's commands. In terms of performance, even though the robotic assistance increased task completion time, the average distance to obstacles stayed in similar ranges to human assistance.
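
    The core mechanism can be illustrated as a GP that maps the current context to an assistance level used to blend user and robot commands. A minimal scikit-learn sketch; the feature choice, the placeholder demonstration data and the linear blending rule are assumptions for illustration:

        import numpy as np
        from sklearn.gaussian_process import GaussianProcessRegressor
        from sklearn.gaussian_process.kernels import RBF, WhiteKernel

        # X_demo: context features recorded during one human-assisted
        # demonstration (e.g. joystick input, heading error, obstacle
        # distance); alpha_demo: assistance level in [0, 1] at each step.
        X_demo = np.random.rand(200, 4)       # placeholder data
        alpha_demo = np.random.rand(200)      # placeholder labels

        gp = GaussianProcessRegressor(kernel=RBF(1.0) + WhiteKernel(1e-2))
        gp.fit(X_demo, alpha_demo)

        def shared_command(x, u_user, u_robot):
            """Blend user and robot commands by the predicted assistance."""
            alpha = float(np.clip(gp.predict(x.reshape(1, -1))[0], 0.0, 1.0))
            return (1.0 - alpha) * u_user + alpha * u_robot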

  • Conference paper
    Saputra RP, Kormushev P, 2018,

    ResQbot: a mobile rescue robot with immersive teleperception for casualty extraction

    , Towards Autonomous Robotic Systems (TAROS) 2018, Publisher: Springer International Publishing AG, part of Springer Nature, Pages: 209-220, ISSN: 0302-9743

    In this work, we propose a novel mobile rescue robot equipped with immersive stereoscopic teleperception and teleoperation control. This robot is designed with the capability to safely perform a casualty-extraction procedure. We have built a proof-of-concept mobile rescue robot called ResQbot as the experimental platform. An approach called “loco-manipulation” is used to perform the casualty-extraction procedure using the platform. The performance of this robot is evaluated in terms of task accomplishment and safety by conducting a mock rescue experiment. We use a custom-made human-sized dummy that has been sensorised to be used as the casualty. In terms of safety, we observe several parameters during the experiment, including impact force, acceleration, speed and displacement of the dummy’s head. We also compare the performance of the proposed immersive stereoscopic teleperception to conventional monocular teleperception. The results of the experiments show that the observed safety parameters stay below the key thresholds that could lead to head or neck injuries. Moreover, the teleperception comparison results demonstrate an improvement in task-accomplishment performance when the operator is using the immersive teleperception.

  • Conference paper
    Wang K, Shah A, Kormushev P, 2018,

    SLIDER: a novel bipedal walking robot without knees

    , Towards Autonomous Robotic Systems (TAROS) 2018, Publisher: Springer International Publishing AG, part of Springer Nature, Pages: 471-472, ISSN: 0302-9743
  • Conference paper
    Fischer T, Demiris Y, 2018,

    A computational model for embodied visual perspective taking: from physical movements to mental simulation

    , Vision Meets Cognition Workshop at CVPR 2018

    To understand people and their intentions, humans have developed the ability to imagine their surroundings from another visual point of view. This cognitive ability is called perspective taking and has been shown to be essential in child development and social interactions. However, the precise cognitive mechanisms underlying perspective taking remain to be fully understood. Here we present a computational model that implements perspective taking as a mental simulation of the physical movements required to step into the other point of view. The visual percept after each mental simulation step is estimated using a set of forward models. Based on our experimental results, we propose that a visual attention mechanism explains the response times reported in human visual perspective taking experiments. The model is also able to generate several testable predictions to be explored in further neurophysiological studies.
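
    The model's central prediction, that response time grows with the amount of simulated physical movement, can be caricatured in a few lines. This is only an illustrative sketch with assumed step sizes and costs; the published model additionally uses learned forward models and a visual attention mechanism:

        import numpy as np

        def simulated_steps(own_deg, other_deg, step_deg=10.0):
            """Mental rotation steps needed to 'walk into' the other view."""
            disparity = abs((other_deg - own_deg + 180) % 360 - 180)
            return int(np.ceil(disparity / step_deg))

        def predicted_rt(own_deg, other_deg, base=0.6, per_step=0.05):
            """Baseline cost plus a per-step simulation cost (seconds),
            giving the linear RT-vs-angle trend seen in human experiments."""
            return base + per_step * simulated_steps(own_deg, other_deg)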

  • Conference paper
    Bodin B, Nardi L, Wagstaff H, Kelly PHJ, O'Boyle M et al., 2018,

    Algorithmic Performance-Accuracy Trade-off in 3D Vision Applications

    , Pages: 123-124

    Simultaneous Localisation And Mapping (SLAM) is a key component of robotics and augmented reality (AR) systems. While a large number of SLAM algorithms have been presented, there has been little effort to unify the interface of such algorithms, or to perform a holistic comparison of their capabilities. This is particularly true when it comes to evaluating the potential trade-offs between computation speed, accuracy, and power consumption. SLAMBench is a benchmarking framework to evaluate existing and future SLAM systems, both open and closed source, over an extensible list of datasets, while using a comparable and clearly specified list of performance metrics. SLAMBench is a publicly-available software framework which represents a starting point for quantitative, comparable and validatable experimental research to investigate trade-offs in performance, accuracy and energy consumption across SLAM systems. In this poster we give an overview of SLAMBench and in particular we show how this framework can be used within Design Space Exploration and large-scale performance evaluation on mobile phones.

  • Conference paper
    Elsdon J, Demiris Y, 2018,

    Augmented reality for feedback in a shared control spraying task

    , IEEE International Conference on Robotics and Automation (ICRA), Publisher: Institute of Electrical and Electronics Engineers (IEEE), Pages: 1939-1946, ISSN: 1050-4729

    Using industrial robots to spray structures has been investigated extensively; however, interesting challenges emerge when using handheld spraying robots. In previous work we have demonstrated the use of shared control of a handheld spraying robot to assist a user in a 3D spraying task. In this paper we demonstrate the use of Augmented Reality interfaces to increase the user's progress and task awareness. We describe our solutions to challenging calibration issues between the Microsoft Hololens system and a motion capture system without the need for well-defined markers or careful alignment on the part of the user. Error relative to the motion capture system was shown to be 10 mm after only a 4-second calibration routine. Secondly, we outline a logical approach for visualising liquid density in an augmented reality spraying task; this system allows the user to clearly see target regions to complete, areas that are complete, and areas that have been overdosed. Finally, we conducted a user study to investigate the level of assistance that a handheld robot utilising shared control methods should provide during a spraying task. Using a handheld spraying robot with a moving spray head did not aid the user much over simply actuating the spray nozzle for them. Compared to manual control, the automatic modes significantly reduced the task load experienced by the user and significantly increased the quality of the result of the spraying task, reducing the error by 33-45%.
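
    Calibrating the HoloLens frame against the motion capture frame reduces to estimating a rigid transform from corresponding point pairs, for which the standard Kabsch/SVD solution applies. The sketch below shows that general solution, not necessarily the paper's exact procedure; collecting the correspondences during the 4-second routine is assumed:

        import numpy as np

        def kabsch(P, Q):
            """Rigid (R, t) minimising ||R p + t - q|| over N x 3 pairs."""
            cP, cQ = P.mean(axis=0), Q.mean(axis=0)
            H = (P - cP).T @ (Q - cQ)                # 3x3 cross-covariance
            U, _, Vt = np.linalg.svd(H)
            d = np.sign(np.linalg.det(Vt.T @ U.T))   # avoid reflections
            R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
            return R, cQ - R @ cP

        # Usage: P holds points in the HoloLens frame, Q the same points in
        # the mocap frame; afterwards q ~ R @ p + t maps between the two.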

  • Conference paper
    Bodin B, Wagstaff H, Saeedi S, Nardi L, Vespa E, Mawer J, Nisbet A, Lujan M, Furber S, Davison AJ, Kelly PHJ, O'Boyle MFP et al., 2018,

    SLAMBench2: multi-objective head-to-head benchmarking for visual SLAM

    , IEEE International Conference on Robotics and Automation (ICRA), Publisher: IEEE, Pages: 3637-3644, ISSN: 1050-4729

    SLAM is becoming a key component of robotics and augmented reality (AR) systems. While a large number of SLAM algorithms have been presented, there has been little effort to unify the interface of such algorithms, or to perform a holistic comparison of their capabilities. This is a problem since different SLAM applications can have different functional and non-functional requirements. For example, a mobile phone-based AR application has a tight energy budget, while a UAV navigation system usually requires high accuracy. SLAMBench2 is a benchmarking framework to evaluate existing and future SLAM systems, both open and closed source, over an extensible list of datasets, while using a comparable and clearly specified list of performance metrics. A wide variety of existing SLAM algorithms and datasets is supported, e.g. ElasticFusion, InfiniTAM, ORB-SLAM2, OKVIS, and integrating new ones is straightforward and clearly specified by the framework. SLAMBench2 is a publicly-available software framework which represents a starting point for quantitative, comparable and validatable experimental research to investigate trade-offs across SLAM systems.
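
    The head-to-head idea is that every algorithm runs behind one interface over the same frames and is scored with the same metrics. A hedged sketch of such a harness; the method names on the algorithm objects are illustrative, not SLAMBench2's real API:

        import time

        def benchmark(algorithms, frames, ground_truth, ate_fn):
            """Run each SLAM system on the same sequence; collect metrics."""
            results = {}
            for name, algo in algorithms.items():
                algo.init()                            # hypothetical interface
                t0 = time.perf_counter()
                trajectory = [algo.process(f) for f in frames]
                elapsed = time.perf_counter() - t0
                results[name] = {
                    "fps": len(frames) / elapsed,             # speed
                    "ate": ate_fn(trajectory, ground_truth),  # accuracy
                }
            return results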

  • Journal article
    Miyashita K, Oude Vrielink T, Mylonas G, 2018,

    A cable-driven parallel manipulator with force sensing capabilities for high-accuracy tissue endomicroscopy

    , International Journal of Computer Assisted Radiology and Surgery, Vol: 13, Pages: 659-669, ISSN: 1861-6429

    PURPOSE: Endomicroscopy (EM) provides high resolution, non-invasive histological tissue information and can be used for scanning of large areas of tissue to assess cancerous and pre-cancerous lesions and their margins. However, current robotic solutions do not provide the accuracy and force sensitivity required to perform safe and accurate tissue scanning. METHODS: A new surgical instrument has been developed that uses a cable-driven parallel mechanism (CDPM) to manipulate an EM probe. End-effector forces are determined by measuring the tensions in each cable. As a result, the instrument allows a contact force to be applied accurately to the tissue, while at the same time offering high resolution and highly repeatable probe movement. RESULTS: Force sensitivities of 0.2 and 0.6 N were found for the 1 and 2 DoF image acquisition methods, respectively. A back-stepping technique can be used when a higher force sensitivity is required for the acquisition of high quality tissue images. This method was successful in acquiring images on ex vivo liver tissue. CONCLUSION: The proposed approach offers high force sensitivity and precise control, which is essential for robotic EM. The technical benefits of the current system can also be used for other surgical robotic applications, including safe autonomous control, haptic feedback and palpation.
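
    The force-sensing principle can be written compactly: each cable pulls the end-effector toward its anchor along a unit direction, so the resultant force is the matrix of cable directions times the measured tension vector. A minimal sketch of that general CDPM statics, under assumed geometry rather than the paper's exact formulation:

        import numpy as np

        def end_effector_force(anchors, effector_pos, tensions):
            """Net force from measured cable tensions.

            anchors: M x 3 cable exit points, effector_pos: 3-vector,
            tensions: length-M array of measured cable tensions (N).
            """
            dirs = anchors - effector_pos    # cables pull toward anchors
            dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
            return dirs.T @ tensions         # resultant 3-vector force (N)

        # The tissue contact force is then the residual between this
        # resultant and the expected free-space value at the same pose.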

  • Journal article
    Matheson E, Secoli R, Burrows C, Leibinger A, Rodriguez y Baena F et al., 2018,

    Cyclic motion control for programmable bevel-tip needles to reduce tissue deformation

    , Journal of Medical Robotics Research, Vol: 4, ISSN: 2424-905X

    Robotic-assisted steered needles aim to control the deflection of the flexible needle’s tip to achieve accurate path following. In doing so, they can decrease trauma to the patient, by avoiding sensitive regions while increasing placement accuracy. This class of needle presents more complicated kinematics compared to straight needles, which can be exploited to produce specific motion profiles via careful controller design and tuning. Motion profiles can be optimized to minimize certain conditions such as maximum tissue deformation and target migration, which was the goal of the formalized cyclic, low-level controller for a Programmable Bevel-tip Needle (PBN) presented in this work. PBNs are composed of a number of interlocked segments that are able to slide with respect to one another. Producing a controlled, desired offset of the tip geometry leads to the corresponding desired curvature of the PBN, and hence desired path trajectory of the system. Here, we propose a cyclical actuation strategy, where the tip configuration is achieved over a number of reciprocal motion cycles, which we hypothesize will reduce tissue deformation during the insertion process. A series of in vitro, planar needle insertion experiments are performed in order to compare the cyclic controller performance with the previously used direct push controller, in terms of targeting accuracy and tissue deformation. It is found that there is no significant difference between the target tracking performance of the controllers, but a significant decrease in axial tissue deformation when using the cyclic controller.
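
    The contrast with a direct push can be sketched as follows: rather than commanding the full segment offsets at once, the cyclic controller reaches them through several reciprocal cycles, advancing one segment at a time by a small stroke. The increments below are an invented illustration, not the authors' controller:

        def cyclic_offsets(target_offsets, n_cycles=5):
            """Yield per-cycle segment commands summing to the target.

            target_offsets: desired relative offset per PBN segment (mm).
            """
            step = [off / n_cycles for off in target_offsets]
            position = [0.0] * len(target_offsets)
            for _ in range(n_cycles):
                for i in range(len(position)):
                    position[i] += step[i]   # advance one segment per stroke
                    yield list(position)     # others held stationary

        for command in cyclic_offsets([4.0, -4.0, 0.0, 0.0], n_cycles=2):
            print(command)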

  • Conference paper
    Avila Rencoret FB, Mylonas G, Elson D, 2018,

    Robotic wide-field optical biopsy endoscopy

    , OSA Biophotonics Congress 2018, Publisher: OSA Publishing

    This paper describes a novel robotic framework for wide-field optical biopsy endoscopy, characterizes its spatial and spectral resolution and real-time hyperspectral tissue classification in vitro, and demonstrates its feasibility on fresh porcine cadaveric colon.

  • Journal article
    Vespa E, Nikolov N, Grimm M, Nardi L, Kelly PH, Leutenegger S et al., 2018,

    Efficient octree-based volumetric SLAM supporting signed-distance and occupancy mapping

    , IEEE Robotics and Automation Letters, Vol: 3, Pages: 1144-1151, ISSN: 2377-3766

    We present a dense volumetric simultaneous localisation and mapping (SLAM) framework that uses an octree representation for efficient fusion and rendering of either a truncated signed distance field (TSDF) or an occupancy map. The primary aim of this letter is to use one single representation of the environment that can be used not only for robot pose tracking and high-resolution mapping, but seamlessly for planning. We show that our highly efficient octree representation of space fits SLAM and planning purposes in a real-time control loop. In a comprehensive evaluation, we demonstrate dense SLAM accuracy and runtime performance on par with flat hashing approaches when using TSDF-based maps, and considerable speed-ups when using occupancy mapping compared to standard occupancy mapping frameworks. Our SLAM system can run at 10-40 Hz on a modern quad-core CPU, without the need for massive parallelization on a GPU. We furthermore demonstrate probabilistic occupancy mapping as an alternative to TSDF mapping in dense SLAM and show its direct applicability to online motion planning, using the example of informed rapidly-exploring random trees (RRT*).
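
    At the voxel level, TSDF fusion is a weighted running average of truncated distance observations, which is what the octree stores so compactly. A minimal sketch of that standard update rule, independent of the octree layout:

        import numpy as np

        def fuse_tsdf(tsdf, weight, sdf_obs, trunc=0.05, max_weight=100.0):
            """Fuse one signed-distance observation into a voxel."""
            d = np.clip(sdf_obs / trunc, -1.0, 1.0)   # truncate, normalise
            new_tsdf = (tsdf * weight + d) / (weight + 1.0)
            return new_tsdf, min(weight + 1.0, max_weight)

        # The occupancy alternative keeps per-voxel log-odds instead:
        # l_new = l_old + log(p / (1 - p)) for an inverse sensor model p.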

  • Journal article
    Fischer T, Puigbo J-Y, Camilleri D, Nguyen PDH, Moulin-Frier C, Lallee S, Metta G, Prescott TJ, Demiris Y, Verschure P et al., 2018,

    iCub-HRI: A software framework for complex human-robot interaction scenarios on the iCub humanoid robot

    , Frontiers in Robotics and AI, Vol: 5, Pages: 1-9, ISSN: 2296-9144

    Generating complex, human-like behaviour in a humanoid robot like the iCub requires the integration of a wide range of open source components and a scalable cognitive architecture. Hence, we present the iCub-HRI library which provides convenience wrappers for components related to perception (object recognition, agent tracking, speech recognition, touch detection), object manipulation (basic and complex motor actions) and social interaction (speech synthesis, joint attention) exposed as a C++ library with bindings for Java (allowing iCub-HRI to be used within Matlab) and Python. In addition to previously integrated components, the library allows for simple extension to new components and rapid prototyping by adapting to changes in interfaces between components. We also provide a set of modules which make use of the library, such as a high-level knowledge acquisition module and an action recognition module. The proposed architecture has been successfully employed for a complex human-robot interaction scenario involving the acquisition of language capabilities, execution of goal-oriented behaviour and expression of a verbal narrative of the robot's experience in the world. Accompanying this paper is a tutorial which allows a subset of this interaction to be reproduced. The architecture is aimed at researchers familiarising themselves with the iCub ecosystem, as well as expert users, and we expect the library to be widely used in the iCub community.

  • Conference paper
    Saputra RP, Kormushev P, 2018,

    ResQbot: A mobile rescue robot for casualty extraction

    , 2018 ACM/IEEE International Conference on Human-Robot Interaction (HRI 2018), Publisher: Association for Computing Machinery, Pages: 239-240

    Performing search and rescue missions in disaster-struck environments is challenging. Despite the advances in the robotic search phase of rescue missions, few works have focused on the physical casualty extraction phase. In this work, we propose a mobile rescue robot that is capable of performing a safe casualty extraction routine. To perform this routine, this robot adopts a loco-manipulation approach. We have designed and built a mobile rescue robot platform called ResQbot as a proof of concept of the proposed system. We have conducted preliminary experiments using a sensorised human-sized dummy as a victim, to confirm that the platform is capable of performing a safe casualty extraction procedure.

This data is extracted from the Web of Science and reproduced under a licence from Thomson Reuters. You may not copy or re-distribute this data in whole or in part without the written consent of the Science business of Thomson Reuters.
