Tutorials



Tutorial Speaker 1

  Prof. Ugur Tumerdem

  Marmara University, Johns Hopkins University


Title of the talk

Recovering the Sense of Touch in Robotic Surgery


Biography

Ugur Tumerdem received his B.Sc. degree in Mechatronics from Sabanci University, Istanbul, in 2005, and his M.Sc. and Ph.D. degrees in Integrated Design Engineering from Keio University, Tokyo, in 2007 and 2010, respectively. Following his Ph.D., he was a Postdoctoral Fellow at IBM Research - Tokyo in 2011. Since 2012, he has been a faculty member in the Department of Mechanical Engineering at Marmara University, Istanbul. He was a Fulbright Scholar and a Visiting Assistant Professor in the Department of Mechanical Engineering at Johns Hopkins University, MD, USA, from 2022 to 2023, and is affiliated with the Laboratory for Computational Sensing and Robotics (LCSR) as an Adjunct Associate Professor. His current research interests include haptics in robotic surgery systems, with a focus on force estimation algorithms, haptic teleoperation architectures, and machine learning for autonomous surgery.


Abstract

Robotic surgery has transformed minimally invasive procedures over the past 25 years. However, one of its most significant limitations remains the absence of haptic feedback, which deprives surgeons of their sense of touch. While teleoperating robotic instruments, surgeons rely solely on visual feedback, which can lead to unintended complications. In this tutorial, I will discuss our recent and ongoing efforts to restore haptic feedback in robotic surgery through novel motion control laws, haptic teleoperation, and machine learning algorithms. One of the main challenges in achieving reliable haptic feedback is the difficulty of obtaining accurate force measurements or estimates from robotic laparoscopic instruments. Additionally, existing teleoperation architectures often lack the transparency required to ensure both stability and a natural feel for the operator. I will present our approaches to addressing these challenges, including our work on developing the first full-degree-of-freedom haptic feedback system for the da Vinci Research Kit (dVRK), the open-source research version of the commercial da Vinci Surgical System.
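
For readers unfamiliar with sensorless force estimation, the sketch below illustrates one generic approach in Python: a reaction-force-observer-style estimate of the external torque on a single joint. It is only an illustration of the general idea, not the algorithms presented in this tutorial; the inertia J, observer gain g, and friction model are placeholder assumptions.

    import numpy as np

    def estimate_external_torque(tau_motor, omega, J, g, dt, b=0.0):
        # Reaction-force-observer-style estimate of external torque on one joint.
        # Illustrative only: J (inertia), g (observer cutoff, rad/s), and b
        # (viscous friction coefficient) are placeholder model parameters.
        alpha = g * dt / (1.0 + g * dt)        # discrete first-order low-pass gain
        lp = 0.0                               # low-pass filter state
        tau_ext = np.zeros(len(tau_motor))
        for k in range(len(tau_motor)):
            u = tau_motor[k] + g * J * omega[k]
            lp += alpha * (u - lp)             # low-pass filter of (tau_m + g*J*omega)
            d_hat = lp - g * J * omega[k]      # estimated lumped disturbance torque
            tau_ext[k] = d_hat - b * omega[k]  # subtract modeled friction
        return tau_ext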


Tutorial Speaker 2

  Dr. Cihan Acar

  Institute for Infocomm Research (I2R), A*STAR, Singapore


Title of the talk

Knowledge Distillation for Robotics: Enhancing Performance and Generalization


Biography

Cihan Acar received his B.A. in Engineering from Bilkent University, Turkey, in 2006 and his Master’s and Ph.D. in Integrated System Design Engineering from Keio University, Japan, in 2008 and 2011, respectively. He then pursued a postdoctoral research fellowship at The University of Auckland, New Zealand. Currently, he is a Senior Scientist at the Institute for Infocomm Research (I2R), A*STAR, Singapore. His research focuses on robotics, reinforcement learning, generative models, knowledge distillation, and motion planning, with an emphasis on developing learning-based solutions for autonomous systems and robotic manipulation.


Abstract

Knowledge Distillation (KD) offers a powerful solution for developing robust and generalizable robotic skills by transferring knowledge from a large, complex "teacher" model to a smaller, more efficient "student" model. KD has recently become a crucial technique for advancing learning-based solutions in robotics, particularly in areas such as autonomous driving, legged locomotion across diverse terrains, in-hand object manipulation, view-invariant visual policy learning, and the development of generalist policies. This approach enhances generalization capabilities, improves computational efficiency, and enables effective utilization of limited data, facilitating seamless transfer from simulation to real-world deployment. This tutorial provides a comprehensive overview of KD, its recent advancements, and its transformative impact on addressing key challenges in robotic learning and adaptation. It is designed for researchers, practitioners, and students interested in leveraging KD to improve robotic performance and expand the capabilities of autonomous systems.
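
As a concrete reference for the teacher-student idea, the snippet below sketches the standard (Hinton-style) distillation loss in Python with PyTorch: the student is trained on a mix of ordinary cross-entropy against ground-truth labels and a KL term that matches its temperature-softened outputs to the teacher's. The temperature T and mixing weight alpha here are illustrative choices, not values from the tutorial.

    import torch.nn.functional as F

    def distillation_loss(student_logits, teacher_logits, labels, T=2.0, alpha=0.5):
        # Hard-label term: ordinary cross-entropy against the ground truth.
        hard = F.cross_entropy(student_logits, labels)
        # Soft-label term: KL divergence to the teacher's softened distribution.
        soft = F.kl_div(
            F.log_softmax(student_logits / T, dim=-1),
            F.softmax(teacher_logits / T, dim=-1),
            reduction="batchmean",
        ) * (T * T)  # rescale so gradients stay comparable across temperatures
        return alpha * hard + (1.0 - alpha) * soft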