The workshop will focus on a new class of sensors in which a visual sensor (camera, depth camera, etc.) is used to perceive the contact state between a robot and its environment (i.e., to "see touch"). Given the high-resolution, pixel-based tactile information these sensors provide, we wish to explore how they can enable robust manipulation through both model-based and learning-based approaches, consider what kinds of physical measurements can be performed or inferred from visuotactile information, and discuss what improvements should be made to current sensors and the techniques associated with them.

We invite you to contribute and to participate in this workshop.

The workshop's topics include, but are not limited to:

Confirmed Speakers:

Workshop Schedule
Ted Adelson

Ted Adelson is well known for contributions to multiscale image representation (such as the Laplacian pyramid) and basic concepts in early vision such as motion energy and steerable filters (honored by the IEEE Computer Society's Helmholtz Prize, 2013). His work on the neural mechanisms of motion perception was honored with the Rank Prize in Optoelectronics (1992). His work on layered representations for motion won the IEEE Computer Society's Longuet-Higgins Award (2005). He introduced the plenoptic function and built the first plenoptic camera. He has done pioneering work on the problems of material perception in human and machine vision, and has produced some well-known illusions such as the Checker-Shadow Illusion. Prof. Adelson has recently developed a novel technology for artificial touch sensing, called GelSight, which converts touch to images and enables robots to have tactile sensitivity exceeding that of human skin.

Christopher G. Atkeson

Christopher Atkeson is a Professor in the Robotics Institute and the Human-Computer Interaction Institute at Carnegie Mellon University. His life goal is to fulfill the science-fiction vision of machines that achieve human levels of competence in perceiving, thinking, and acting; a narrower technical goal is to understand how to get machines to generate and perceive human behavior. He uses two complementary approaches, exploring humanoid robotics and human-aware environments. Building humanoid robots tests our understanding of how to generate human-like behavior and exposes the gaps and failures in current approaches.
Tapomayukh Bhattacharjee
Roberto Calandra

Roberto Calandra is a Research Scientist at Facebook AI Research. Previously, he was a Postdoctoral Scholar at the University of California, Berkeley (US) in the Berkeley Artificial Intelligence Research Laboratory (BAIR), working with Sergey Levine. His education includes a Ph.D. from TU Darmstadt (Germany) under the supervision of Jan Peters and Marc Deisenroth, an M.Sc. in Machine Learning and Data Mining from Aalto University (Finland), and a B.Sc. in Computer Science from the Università degli Studi di Palermo (Italy).

Robert Haschke

Robert Haschke received the diploma and PhD in Computer Science from the University of Bielefeld, Germany, in 1999 and 2004, working on the theoretical analysis of oscillating recurrent neural networks. Since then, his work has focused more on robotics, still employing neural methods wherever possible. Robert currently heads the Robotics Group within the Neuroinformatics Group, striving to enrich the dexterous manipulation skills of the group's two bimanual robot setups through interactive learning. His fields of research include neural networks, cognitive bimanual robotics, grasping and manipulation with multi-fingered dexterous hands, tactile sensing, and software integration.

Alberto Rodriguez

Alberto Rodriguez is an Associate Professor (without tenure) in the Mechanical Engineering Department at MIT. Alberto graduated in Mathematics ('05) and Telecommunication Engineering ('06) from the Universitat Politecnica de Catalunya, and earned his PhD ('13) from the Robotics Institute at Carnegie Mellon University. He leads the Manipulation and Mechanisms Lab at MIT (MCube), researching autonomous dexterous manipulation, robot automation, and end-effector design. Alberto has received Best Paper Awards at RSS'11, ICRA'13, RSS'18, IROS'18, and RSS'19, and has been a finalist for Best Paper Awards at IROS'16 and IROS'18. He led Team MIT-Princeton in the Amazon Robotics Challenge from 2015 to 2017, and has received the Amazon Research Award in 2018 and 2019, as well as the 2018 Best Manipulation System Paper Award from Amazon.

Kazuhiro Shimonomura

Kazuhiro Shimonomura is a Professor in the Department of Robotics, Ritsumeikan University, Shiga, Japan. His current research interests include aerial robotics, tactile image sensing, and ultra-high-speed imaging.
Alex Alspach

Alex designs and builds soft systems for sensing and manipulation at the Toyota Research Institute (TRI). He earned his master's degree at Drexel University, with time spent in the Drexel Autonomous Systems Lab (DASL) and KAIST's HuboLab. After graduating, Alex spent two years at SimLab in Korea developing and marketing tools for manipulation research. While there, he also worked with a production company to develop artists' tools for animating complex, synchronized industrial robot motions. Prior to joining TRI, Alex developed soft huggable robots and various other systems at Disney Research with Joohyung and Katsu.

Naveen Kuppuswamy

Naveen Kuppuswamy is a senior research scientist at the Toyota Research Institute (TRI). His current research interests are in tactile perception and control for manipulation, and in soft robotics.

Avinash Uttamchandani

Avinash Uttamchandani is an electrical engineer working on manipulation research at the Toyota Research Institute, focusing on tactile sensing, embedded electronics, and real-time signal processing and control.

Filipe Veiga

Filipe Veiga is a Postdoctoral Associate at the Computer Science & Artificial Intelligence Lab at the Massachusetts Institute of Technology. His research focuses on exploring how the sense of touch can be used to improve the dexterous manipulation skills of robots.

Wenzhen Yuan

Wenzhen Yuan is an Assistant Professor at the Robotics Institute, Carnegie Mellon University. Her research focuses on developing high-resolution tactile sensors and applying them to robot manipulation and perception.

Related Links