Plenary Talks
There are two plenary talks, described below. For the schedule, see the conference program.
Plenary Talk 1: Event Cameras
Davide Scaramuzza
Davide Scaramuzza is a Professor of Robotics and Perception at the University of Zurich, where his research lies at the intersection of robotics, computer vision, and machine learning and aims to enable autonomous, agile navigation of micro drones using both standard and neuromorphic event-based cameras. He pioneered autonomous, vision-based navigation of drones, which inspired the NASA Mars helicopter, and has served as a consultant for the United Nations' International Atomic Energy Agency (IAEA) Fukushima Action Plan on Nuclear Safety. For his research contributions, he has won prestigious awards, such as a European Research Council (ERC) Consolidator Grant, the IEEE Robotics and Automation Society Early Career Award, and a Google Research Award. In 2015, he co-founded Zurich-Eye, today Facebook Zurich, which developed the world-leading virtual-reality headset Oculus Quest. Many aspects of his research have been prominently featured in the wider media, such as The New York Times, BBC News, Forbes, and the Discovery Channel.
Abstract
Event cameras are bio-inspired vision sensors with microsecond latency and temporal resolution, a much larger dynamic range, and a hundred times lower power consumption than standard cameras. This talk will present current trends, opportunities, and challenges of event cameras.
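For readers unfamiliar with the sensor model, the sketch below illustrates the kind of output an event camera produces: an asynchronous stream of per-pixel brightness-change events rather than full frames. The array layout, sensor resolution, and the `accumulate_events` helper are illustrative assumptions, not material from the talk.

```python
import numpy as np

def accumulate_events(events, height, width, t_start, t_end):
    """Accumulate asynchronous events into a 2-D brightness-change frame.

    `events` is assumed to be an (N, 4) array with columns
    [timestamp, x, y, polarity(+1/-1)]; this layout is illustrative only.
    """
    frame = np.zeros((height, width), dtype=np.int32)
    # Keep only events inside the requested time window.
    mask = (events[:, 0] >= t_start) & (events[:, 0] < t_end)
    for t, x, y, p in events[mask]:
        frame[int(y), int(x)] += int(p)  # signed count of brightness changes
    return frame

# Example: 1000 synthetic events over a 1 ms window on a 240x180 sensor.
rng = np.random.default_rng(0)
events = np.column_stack([
    rng.uniform(0.0, 1e-3, 1000),   # timestamps in seconds
    rng.integers(0, 240, 1000),     # x coordinates
    rng.integers(0, 180, 1000),     # y coordinates
    rng.choice([-1, 1], 1000),      # polarity of the brightness change
])
frame = accumulate_events(events, height=180, width=240, t_start=0.0, t_end=1e-3)
```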
Plenary Talk 2: Challenges in Coupling Sensor Data to Robot Control, Using the Example of Camera-Based Sense-and-Avoid Systems with Machine Learning and Classical Processing
Darius Burschka
Darius Burschka received his PhD degree in Electrical and Computer Engineering in 1998 from the Technische Universität München in the field of vision-based navigation and map generation with binocular stereo systems. Later, he was a Postdoctoral Associate at Yale University, Connecticut, where he worked on laser-based map generation and landmark selection from video images for vision-based navigation systems. From 1999 to 2003, he was an Associate Research Scientist at the Johns Hopkins University, Baltimore, Maryland. From 2003 to 2005, he was an Assistant Research Professor in Computer Science at the Johns Hopkins University. Currently, he is a Professor in Computer Science at the Technische Universität München, Germany, where he heads the Machine Vision and Perception group. He was an area coordinator in the DFG Cluster of Excellence "Cognition in Technical Systems", and he is currently a co-chair of the IEEE RAS Technical Committee on Computer and Robot Vision and a Science Board Member of the Munich School of Robotics and Machine Intelligence.
His areas of research are sensor systems for mobile and medical robots and human-computer interfaces. His research focuses on vision-based navigation and three-dimensional reconstruction from sensor data. He is a Senior Member of the IEEE.
Abstract
Many sensor data processing systems exchange information with control units using three-dimensional representations, which are not native to camera-based systems and therefore require fusing information from multiple camera images together with calibration parameters. This processing step requires additional information about the camera parameters and the relative pose of the contributing images, and it introduces errors that may be critical in low-level protection systems when the calibration parameters change due to vibrations or collisions. To parametrize robot control, the sensor fusion system needs to address three problems: it must estimate not only the value but also the current quality of that estimate, it must converge over long time spans to compensate for sensor drop-outs caused by dynamic light changes, and it must sample the environment with low latency and a high frame rate to capture the details of the observed motion.
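As a concrete illustration of the first two requirements (reporting the quality of an estimate and surviving sensor drop-outs), here is a minimal sketch of a one-dimensional Kalman-style estimator. It is not the speaker's system; the noise parameters and the drop-out handling are assumptions chosen purely for illustration.

```python
class ScalarKalmanFilter:
    """Minimal 1-D constant-value Kalman filter.

    The estimator reports both its value and its quality (variance),
    and it keeps running through sensor drop-outs by letting the
    variance grow. All noise parameters are illustrative assumptions.
    """

    def __init__(self, x0=0.0, p0=1.0, process_var=1e-3, meas_var=1e-2):
        self.x = x0           # state estimate
        self.p = p0           # estimate variance (quality of the estimate)
        self.q = process_var  # process noise added per step
        self.r = meas_var     # measurement noise

    def predict(self):
        # Without a new measurement the uncertainty grows; this is what
        # happens during a drop-out (e.g. a sudden light change).
        self.p += self.q
        return self.x, self.p

    def update(self, z):
        self.predict()
        k = self.p / (self.p + self.r)  # Kalman gain
        self.x += k * (z - self.x)
        self.p *= (1.0 - k)
        return self.x, self.p

# Usage: measurements arrive at a fixed rate, with a drop-out in the middle.
kf = ScalarKalmanFilter()
for z in [1.02, 0.98, None, None, None, 1.01]:  # None = dropped frame
    x, p = kf.predict() if z is None else kf.update(z)
    print(f"estimate={x:.3f}  variance={p:.4f}")
```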
I will present different approaches to address these requirements and discuss the problems of machine learning in control applications. I will present ways in which sensor data can be used directly for control, without the need for metric reconstruction, and motivate a better use of the temporal information in images for image segmentation and for recovering actions and intentions in the surrounding environment.
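One classical way to use image measurements directly for control, without a metric 3-D reconstruction, is image-based visual servoing. The sketch below computes a camera velocity command from image-plane feature errors using only a rough depth guess; the feature values, gain, and depth are illustrative assumptions and not necessarily the approach presented in the talk.

```python
import numpy as np

def ibvs_velocity(features, desired, depth_guess, gain=0.5):
    """Image-based visual servoing step: compute a 6-DoF camera velocity
    command directly from image-plane feature errors. Only a rough depth
    guess is needed; no metric 3-D model is reconstructed.

    `features` and `desired` are (N, 2) arrays of normalized image
    coordinates; the values below are illustrative assumptions.
    """
    L = []  # stacked interaction (image Jacobian) matrix
    for x, y in features:
        Z = depth_guess
        L.append([-1 / Z, 0, x / Z, x * y, -(1 + x ** 2), y])
        L.append([0, -1 / Z, y / Z, 1 + y ** 2, -x * y, -x])
    L = np.array(L)
    error = (features - desired).reshape(-1)  # image-plane error vector
    # Classical IBVS control law: v = -gain * pinv(L) * error
    return -gain * np.linalg.pinv(L) @ error

# Example with four tracked points and their desired image positions.
features = np.array([[0.10, 0.12], [-0.11, 0.09], [-0.10, -0.10], [0.12, -0.11]])
desired  = np.array([[0.10, 0.10], [-0.10, 0.10], [-0.10, -0.10], [0.10, -0.10]])
v = ibvs_velocity(features, desired, depth_guess=1.0)
print("camera velocity command (vx, vy, vz, wx, wy, wz):", v)
```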