Overall Program CRAS 2020

Time            Monday 28.09         Tuesday 29.09          Wednesday 30.09
09.45 - 10.00   Opening Session
10.00 - 11.00   Oral 1               Oral 3                 Interactive 3
11.00 - 12.00   Keynote - C. Huet    Interactive 1          Interactive 4
12.00 - 13.00
13.00 - 14.00
14.00 - 15.00   Keynote - D. Rus     Keynote - A. Okamura
15.00 - 16.00   Oral 2               Interactive 2
16.00 - 17.00                                               Oral 4
17.00 - 18.00                                               Keynote - B. Hannaford
18.00 - 18.30                                               Awards & Closure
Oral 1 - Monday 28.09 - Novel Medical Devices and New Robotic Systems
10.00 - 10.15 Nabeel Kamal, Armando Cama and Leonardo S. Mattos. Pediatric Neurosurgical Robot: From Requirements to Development
10.15 - 10.30 Lukas Lindenroth, Agostino Stilli, Saurabh Singh, Steve Bandula and Danail Stoyanov. A fluid-driven mechanism for percutaneous needle insertion and fine-manipulation under image guidance
10.30 - 10.45 Andre Geraldes, Paolo Fiorini and Leonardo Mattos. Towards a Compact Auto-Focusing System for Endoscopic Laser Surgery
10.45 - 11.00 Zhuoqi Cheng, Veronica Penza, Jesús Ortiz, Darwin Caldwell and Leonardo Mattos. Vision-guided Autonomous Robotic Electrical Bio-Impedance Scanning System for Abnormal Tissue Detection and Tracking


Oral 2 - Monday 28.09 - Robot Control and Autonomy
15.00 - 15.15 Beatriz Farola Barata, Gianni Borghesan, Diego Dall'Alba, Jos Vander Sloten and Emmanuel Vander Poorten. Constraint-Based Control of a Distally Actuated Continuum Robot
15.15 - 15.30 Claudia Pecorella, Bruno Siciliano and Fanny Ficuciello. Manipulability Analysis using a Coordinate Invariant Index with a Remote Center of Motion constraint in Surgical Robotics
15.30 - 15.45 Narcís Sayols Baixeras, Alessio Sozzi, Nicola Piccinelli, Albert Hernansanz, Alícia Casals, Marcello Bonfè and Riccardo Muradore. A hFSM based cognitive control architecture for assistive task in R-MIS
15.45 - 16.00 Fabio Falezza, Nicola Piccinelli, Andrea Roberti, Francesco Setti, Riccardo Muradore and Paolo Fiorini. A supervisory controller for semi-autonomous surgical interventions


Oral 3 - Tuesday 29.09 - Novel Sensing and Vision Based Modeling
10.00 - 10.15 Marco Piccinelli, Eleonora Tagliabue, Diego Dall'Alba and Paolo Fiorini. Vision-based estimation of deformation properties for autonomous soft tissues manipulation
10.15 - 10.30 Guiqiu Liao, Oscar Caravaca Mora, Benoit Rosa, Diego Dall'Alba, Paolo Fiorini, Michel Mathelin, Florent Nageotte and Michalina Gora. Endoscopic Optical Coherence Tomography Volumetric Scanning Method with Deep Frame Stream Stabilization
10.30 - 10.45 Keshav Iyengar and Danail Stoyanov. Deep Reinforcement Learning for Concentric Tube Robot Control with Goal Based Curriculum Reward
10.45 - 11.00 Emanuele Colleoni and Danail Stoyanov. A cycle-GAN based transfer learning approach for surgical image synthesis


Oral 4 - Wednesday 30.09 - Development and Validation of Surgical Training Systems
16.00 - 16.15 Leonardo S. Mattos, Alperen Acemoglu and Darwin G. Caldwell. 5G Telesurgery – Feasibility Experiment and First Public Demo
16.15 - 16.30 Sabina Maglio, Arianna Menciassi and Selene Tognarelli. Validation of a high-fidelity neonatal pneumothorax simulator with expert neonatologists
16.30 - 16.45 Vladimir Poliakov, Dzmitry Tsetserukou, Kenan Niu and Emmanuel Vander Poorten. VR Simulation System for In-Office Hysteroscopy: A Pilot Study
16.45 - 17.00 Carlotta Luchini, Selene Tognarelli and Arianna Menciassi. High-Fidelity simulator of cervix changes during labour


Interactive 1 - Tuesday 29.09 - Development of training systems, novel medical devices and robotic systems
11.00 - 11.08 Arie Adriaensen. Systems-Theoretic Process Analysis applied as a socio-technical safety analysis to support future fetoscope design
11.09 - 11.16 Josep Amat, Alicia Casals and Manel Frigola. Bitrack: a friendly four arms robot for laparoscopic surgery
11.17 - 11.24 Chun-Feng Lai, Fabian Trauzettel, Paul Breedveld, Elena De Momi, Giancarlo Ferrigno and Jenny Dankelman. Modularization of medical robotic manipulator for adapting soft robotic arms with varying numbers of DOFs
11.25 - 11.32 Fabio Tatti, Hisham Iqbal, Branislav Jaramaz and Ferdinando Rodriguez Y Baena. Personalised Computer-Assisted Treatment of Knee Osteochondral Lesions Enhanced with Augmented Reality. A Proof of Concept
11.33 - 11.40 Zhuoqi Cheng, Andrea Luigi Camillo Carobbio, Lara Soggiu, Marco Migliorini, Luca Guastini, Francesco Mora, Marco Fragale, Alessandro Ascoli, Stefano Africano, Darwin Caldwell, Frank Rikki Mauritz Canevari, Giampiero Parrinello, Giorgio Peretti and Leonardo Mattos. SmartProbe: an EBI sensing system for head and neck cancer tissue detection
11.41 - 11.48 Vanni Consumi, Lukas Lindenroth, Danail Stoyanov and Agostino Stilli. SoftSCREEN – Soft Shape-shifting Capsule Robot for Endoscopy based on Eversion Navigation
11.49 - 11.56 Yarden Sharon, Daniel Naftalovich, Lidor Bahar, Hanna Kossowsky, Yael Refaely and Ilana Nisky. Preliminary Analysis of Learning a Robot-Assisted Surgical Pattern-Cutting


Interactive 2 - Tuesday 29.09 - Surgical Robot Control and Autonomous Robotics Surgery
15.00 - 15.08 Jef De Smet, Emmanuel Vander Poorten and Jan Deprest. Force-based Endoscope Positioning for Laparoscopic Sacrocolpopexy
15.09 - 15.16 Tomàs Pieras, Albert Hernansanz, Narcís Sayols, Johanna Parra, Elisenda Eixarc, Eduard Gratacós and Alícia Casals. Multi-task control strategy exploiting redundancy in RMIS
15.17 - 15.24 Florian De Clercq, Gianni Borghesan, Emmanuel Vander Poorten and Wouter Oosterlinck. Unscented Kalman filtering (UKF) with electrocardiogram (ECG) sensor fusion for heart state estimation and motion-compensation
15.25 - 15.32 Marco Bombieri, Diego Dall'Alba, Sanat Ramesh, Giovanni Menegozzo and Paolo Fiorini. Angular metrics and an effort based metric used as features for an automatic classifying algorithm of surgical gestures
15.33 - 15.40 Claudia D'Ettorre, Neri Niccolo Dei, Siva Krishnan, Silvia Zirino, Georgia Chalvatzaki, Agostino Stilli and Danail Stoyanov. Flexible Framework for Reinforcement Learning in Surgical Robotics
15.41 - 15.48 Emmanouil Dimitrakakis, George Dwyer, Lukas Lindenroth, Holly Aylmore, Neil Dorward, Hani Marcus and Danail Stoyanov. Towards the Development and Evaluation of a Handle Prototype for a Handheld Robotic Neurosurgical Instrument


Interactive 3 - Wednesday 30.09 - Vision-based Modeling and Reconstruction
10.00 - 10.08 Kenan Niu, Julie Legrand, Laura Van Gerven and Emmanuel Vander Poorten. Statistical Shape Modelling of the Human Nasal Cavity and Maxillary Sinus for Minimally Invasive Surgery
10.09 - 10.16 Cristina Iacono, Rocco Moccia, Bruno Siciliano and Fanny Ficuciello. Vision-Based Dynamic Virtual Fixtures for Tools Collision Avoidance in MIRS
10.17 - 10.24 Jorge Lazo, Sara Moccia and Aldo Marzullo. A Lumen Segmentation Method in Ureteroscopy Images Based on Deep Learning
10.25 - 10.32 Ruixuan Li, Kenan Niu, Di Wu and Emmanuel Vander Poorten. A Framework of Real-time Freehand Ultrasound Reconstruction based on Deep Learning for Spine Surgery
10.33 - 10.40 Ameya Pore, Eleonora Tagliabue, Diego Dall'Alba and Paolo Fiorini. Framework for soft tissue manipulation and control using Deep Reinforcement Learning
10.41 - 10.48 Francesco Cursi and Petar Kormushev. Adaptive Neural Network for Modelling and Control of a Surgical Robotic Tool attached to a Serial-Link Manipulator
10.49 - 10.56 Di Wu, Mouloud Ourak, Mirza Awais Ahmad, Kenan Niu, Gianni Borghesan, Jenny Dankelman and Emmanuel Vander Poorten. Feasibility of using a Long Short-Term Memory Network for Robotic Catheter Control


Interactive 4 - Wednesday 30.09 - Novel Sensing and Navigating
11.00 - 11.08 Alessandro Casella, Sara Moccia, Filippo Piccinotti, Dario Paladini, Emanuele Frontoni, Elena De Momi and Leonardo S. Mattos. A Virtual Fetal Environment for TTTS Applications
11.09 - 11.16 Zhen Li, Andrea Pagliari, Nicolò Pasini, Chiara Quartana, Vittorio Zaccaria Pasolini, Jenny Dankelman and Elena De Momi. Continuous-curvature path planning for endovascular catheterization
11.17 - 11.24 Nikte Van Landeghem, Meere Malfait, Mouloud Ourak, Viktor Vörös and Emmanuel Vander Poorten. Towards Real-Time Eye Geometry Estimation using Eye Tracker sensor
11.25 - 11.32 Jilmen Quintiens, Jef De Smet and Emmanuel Vander Poorten. A Vision-Based Trocar Site Force Sensing Technique
11.33 - 11.40 Abu Bakar Dawood, Hareesh Godaba, Ahmad Ataka and Kaspar Althoefer. ML Based Soft Capacitive E-skin Sensor for Soft Surgical Robots
11.41 - 11.48 Sujit Kumar Sahu, Izadyar Tamadon, Benoit Rosa, Pierre Renaud and Arianna Menciassi. Development of a resistive-based sensor for real time shape detection of a spring based flexible manipulator


Keynote speakers


Cécile Huet

European Commission support to smart robots for healthcare: achievements and future perspective

The plenary lecture will cover a number of activities supported in the past, in particular through the Horizon 2020 funding programme, and will give some insights into the preparation of the upcoming funding programmes, namely Horizon Europe (research and innovation) and the Digital Europe Programme (capacity building and deployment). It will also offer a broader perspective on relevant AI policy developments.

Cécile Huet is Deputy Head of the Unit "Robotics and Artificial Intelligence" at the European Commission. This unit funds and assists beneficial robotics and AI developments within Europe. The unit is in charge of one of the world's largest civilian programmes in robotics, with a budget of €700 million in EU funding from Horizon 2020, supplemented by €2.1 billion from the European robotics industry in the context of the Robotics Public-Private Partnership. The unit is coordinating the preparation of the AI activities in the next funding programmes, Horizon Europe and Digital Europe. Currently the unit is working with stakeholders on the next Partnership on AI, Data and Robotics, defining the strategic research, development, innovation and deployment agenda for Europe in these fields. Moreover, this unit is at the heart of the Communication on Artificial Intelligence for Europe, the Coordinated Plan on Artificial Intelligence and the Communication on Building Trust in Human-Centric Artificial Intelligence.

Cécile has been with the unit since its creation in 2004. Previously, she worked in industry on signal processing, following a post-doc at the University of California, Santa Barbara and a PhD at the University of Nice Sophia Antipolis.

In 2015, she was selected as one of the "25 women in robotics you need to know about".



Daniela Rus

Robots, AI, Pandemics, and Surgery

The COVID-19 pandemic has highlighted the role of AI and robotics in healthcare. There are tremendous opportunities to support the medical profession with intelligent tools to better diagnose, monitor, treat, and prevent disease. I will describe our recent work on robotic emergency ventilation systems for patients and UVC disinfecting robots, and discuss future opportunities for incision-free surgeries enabled by miniaturized robotic pills.

Daniela Rus is the Andrew (1956) and Erna Viterbi Professor of Electrical Engineering and Computer Science; Director of the Computer Science and Artificial Intelligence Laboratory (CSAIL); and Deputy Dean of Research for the Schwarzman College of Computing at MIT. Prof. Rus brings deep expertise in robotics, artificial intelligence, data science, and computation. She is a member of the National Academy of Engineering, a member of the American Academy of Arts and Sciences, and a Fellow of the Association for the Advancement of Artificial Intelligence, the Institute of Electrical and Electronics Engineers, and the Association for Computing Machinery. She is also a recipient of a MacArthur Fellowship, a National Science Foundation CAREER award, and an Alfred P. Sloan Foundation fellowship. Rus earned her PhD in computer science from Cornell University.



Allison Okamura

Advancing Human Sensorimotor Control for Surgery

Robot-assisted surgery has a significant learning curve, because the human operators must adapt to the kinematic mappings between human and robot, the dynamics of the manipulator and instruments, and limited or altered sensory feedback, all while learning the task itself. While performance metrics exist to certify users, it is generally unknown how to optimize training to speed up learning and maximize the amount learned in skilled tasks. In this talk, I will demonstrate learning in novice and expert robot-assisted surgeons, discuss the mechanisms and roles of visual and haptic training, and propose methods for maximizing learning of complex instruments and tasks. Teleoperated robots enable visual and haptic perturbations, as well as detailed behavioral recordings, that facilitate the study of learning for a number of real-world skilled tasks not usually addressed in the neuroscience literature, ranging from surgery to driving a car.

Allison M. Okamura received the BS degree from the University of California at Berkeley and the MS and PhD degrees from Stanford University, all in mechanical engineering. She is currently Professor in the mechanical engineering department at Stanford University, with a courtesy appointment in computer science. She is an IEEE Fellow and Editor-in-Chief of the journal IEEE Robotics and Automation Letters. Her awards include the 2020 IEEE Engineering in Medicine and Biology Society Technical Achievement Award, the 2019 IEEE Robotics and Automation Society Distinguished Service Award, and the 2016 Duca Family University Fellow in Undergraduate Education. Her academic interests include haptics, teleoperation, virtual environments and simulators, medical robotics, neuromechanics and rehabilitation, and soft robotics. She is passionate about engineering education and diversifying STEM. Outside academia, she enjoys spending time with her husband and two children, running, and playing ice hockey. For more information about her research, please see the Collaborative Haptics and Robotics in Medicine (CHARM) Laboratory website.




Blake Hannaford

Introducing Automation to Surgical Robotics: Challenges and Frontiers

With the great success of teleoperated surgery in the medical profession, the research field is now working to demonstrate the utility of selectively introducing automated assistance into surgical robotics. Concepts from AI that readily translate into surgical robotics (though not necessarily into medical use) include computer vision, 3D shape recovery, augmented reality registration, and advances in control systems and networked teleoperation. Other concepts, such as classical planners, may not fit as well when a highly skilled human is still in the loop. The highly elongated, narrow, and lightweight structures of robotic surgical instruments are fundamentally limited in accuracy and precision - limitations that can surprise the engineer, since human controllers adapt so readily. This talk will review some results from the lab on the determinants of precise positioning of robotic surgical instruments, as well as some ideas on possible AI frameworks needed to advance patient outcomes through selective automation.

Blake Hannaford, Ph.D., is Professor of Electrical Engineering, Adjunct Professor of Bioengineering, Mechanical Engineering, and Surgery at the University of Washington, in Seattle. 

Blake Hannaford received the B.S. degree in Engineering and Applied Science from Yale University in 1977, and the M.S. and Ph.D. degrees in Electrical Engineering from the University of California, Berkeley. From 1986 to 1989 he worked on the remote control of robot manipulators in the Man-Machine Systems Group in the Automated Systems Section of the NASA Jet Propulsion Laboratory, Caltech, and supervised that group from 1988 to 1989. Since September 1989, he has been at the University of Washington in Seattle, where he is Professor of Electrical and Computer Engineering. He was awarded the National Science Foundation's Presidential Young Investigator Award and the Early Career Achievement Award from the IEEE Engineering in Medicine and Biology Society, and was named IEEE Fellow in 2005. He was at Google-X / Google Life Sciences / Verily from April 2014 to December 2015. His currently active interests include surgical robotics, surgical skill modeling, and haptic interfaces.

