September 25 – Tutorials Day
Room A (see below)
13:00 – 15:30
16:00 – 18:30
Live online: RGB-D Odometry and SLAM
Please note that Tutorial 1 runs at the same time as Tutorial 2, and Tutorial 3 at the same time as Tutorial 4. Do not book two tutorials that run in parallel. Tutorials 5 and 6 are prerecorded and can therefore be combined with any other tutorial.
- Tutorial 1: An Introduction to Non-linear State Estimation with Discrete Filters (Felix Govaers)
- Tutorial 2: From Closed Solutions of Inverse Dynamics to Actual Experimentation with a Trunk-Type Robot (Renaldas Urniezius)
- Tutorial 3: Robust Kalman Filtering (Florian Pfaff and Benjamin Noack)
- Tutorial 4: RGB-D Odometry and SLAM (Javier Civera)
- Tutorial 5: Emergent Universal Turing Machines in Developmental Networks: Vision, Audition, Natural Languages, APFGP and Imitation (Juyang Weng)
- Tutorial 6: Analytic Combinatorics for Multi-Object Tracking (Roy L. Streit, R. Blair Angle and Murat Efe)
An Introduction to Non-linear State Estimation with Discrete Filters (Felix Govaers)
Intended audience and pre-requisites
The intended audience comprises engineers, PhD students, and others working in the field of sensor data fusion. The algorithmic and theoretical background of discrete state spaces and tensor decompositions should be of interest to the audience. Problems, questions and specific interests are welcome for open discussion.
Participants should have some background knowledge on basic operations in stochastic theory and linear algebra.
The increasing trend towards connected sensors (“Internet of Things” and ubiquitous computing) drives a demand for powerful non-linear estimation methodologies. Conventionally, algorithmic solutions in the field of Bayesian data fusion and target tracking are based on either a Gaussian (mixture) or a particle representation of the prior and posterior probability density functions (pdfs). Discrete filters instead reduce the state space to a fixed grid and represent the pdf as an array of function values in high to extraordinarily high dimensions. Due to the “curse of dimensionality”, data compression techniques such as tensor decompositions have to be applied. In this tutorial, the basic methods for a Bayes formalism in discrete state spaces are explained. Possible solutions to the tensor decomposition (and composition) process are presented, and algorithms will be provided for each solution. The list of topics includes: a short introduction to target tracking and non-linear state estimation, discrete pdfs, the Bayes recursion on them, the PARAFAC/CANDECOMP decomposition (CPD), and the Tucker and hierarchical Tucker decompositions.
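To make the grid-based recursion concrete, here is a minimal sketch of one predict/update cycle of a discrete Bayes filter on a small 1-D grid. The grid size, transition model and likelihood below are illustrative choices, not material from the tutorial.

```python
import numpy as np

def discrete_bayes_step(prior, transition, likelihood):
    """One predict/update cycle of a discrete (grid-based) Bayes filter.

    prior      : (n,) probability mass over the grid cells
    transition : (n, n) matrix, transition[i, j] = P(x_k = cell i | x_{k-1} = cell j)
    likelihood : (n,) measurement likelihood evaluated at each grid cell
    """
    predicted = transition @ prior           # Chapman-Kolmogorov prediction
    posterior = predicted * likelihood       # Bayes update (unnormalized)
    return posterior / posterior.sum()       # renormalize to a valid pmf

# Toy example: 5-cell grid, random-walk transition, measurement near cell 3
n = 5
prior = np.full(n, 1.0 / n)                  # uniform prior
transition = 0.6 * np.eye(n) + 0.2 * (np.eye(n, k=1) + np.eye(n, k=-1))
transition /= transition.sum(axis=0)         # columns must sum to 1
likelihood = np.exp(-0.5 * (np.arange(n) - 3.0) ** 2)
posterior = discrete_bayes_step(prior, transition, likelihood)
print(posterior)
```

In higher dimensions the array of function values grows exponentially, which is exactly where the tensor decompositions covered in the tutorial come in.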
Felix Govaers received his Diploma in Mathematics and his PhD in Computer Science, with the thesis title “Advanced data fusion in distributed sensor applications”, both from the University of Bonn, Germany. Since 2009 he has worked at Fraunhofer FKIE in the department for Sensor Data Fusion and Information Processing, where he led the research group “Distributed Systems” for three years. Since 2017 he has been the deputy head of the department “Sensordata and Information Fusion”. His research is focused on data fusion for state estimation in sensor networks and non-linear filtering. This includes tensor decomposition based filtering, track extraction, and the processing of delayed measurements, as well as the distributed Kalman filter and track-to-track fusion.
From Closed Solutions of Inverse Dynamics to Actual Experimentation with a Trunk-Type Robot (Renaldas Urniezius)
Intended audience and pre-requisites
The intended audience is anyone interested in collaborative robots and their control principles.
Prerequisites are any previous experience or competence with cobots or their electric drives and sensors. However, those with interests in potential applications, existing open-source platforms, design, and regulation are also welcome.
In this course, we will concentrate on the inverse dynamics approach, using our grey-box model, linked to quaternions, to control trunk-type robots. The scenarios will include time and spatial boundary conditions as well as nonholonomic robot constraints. We will discuss obstacle avoidance strategies when planning inverse dynamics. The presenter will share his team’s experience with actual instrumentation and explain how synchronized motion profiles behave in practice, where motor-related constraints bring a higher level of complexity. Practical aspects of controllers and direct drives for such actuators will also be discussed.
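Since the grey-box model is linked to quaternions, a minimal sketch of quaternion-based rotation may help readers unfamiliar with the representation. The function names and the test rotation are illustrative, not taken from the tutorial material.

```python
import math

def quat_mul(q, r):
    """Hamilton product of two quaternions given as (w, x, y, z) tuples."""
    w1, x1, y1, z1 = q
    w2, x2, y2, z2 = r
    return (w1*w2 - x1*x2 - y1*y2 - z1*z2,
            w1*x2 + x1*w2 + y1*z2 - z1*y2,
            w1*y2 - x1*z2 + y1*w2 + z1*x2,
            w1*z2 + x1*y2 - y1*x2 + z1*w2)

def rotate(v, q):
    """Rotate vector v by unit quaternion q via q * v * q^-1."""
    q_conj = (q[0], -q[1], -q[2], -q[3])            # inverse of a unit quaternion
    w, x, y, z = quat_mul(quat_mul(q, (0.0, *v)), q_conj)
    return (x, y, z)

# A 90-degree rotation about the z-axis maps the x-axis onto the y-axis
half = math.radians(90) / 2
q_z90 = (math.cos(half), 0.0, 0.0, math.sin(half))
print(rotate((1.0, 0.0, 0.0), q_z90))   # approximately (0, 1, 0)
```

Quaternions avoid the gimbal-lock singularities of Euler angles, which is one reason they are popular for representing robot link orientations.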
Renaldas Urniezius received a Ph.D. in Electronics Engineering from Kaunas University of Technology (KTU). He serves as a Professor in the Department of Automation, KTU, and is a Senior Member of the IEEE Robotics and Signal Processing societies. His research includes direct drives, robotics, and bio-engineering applications. His university lectures deal with proactive optimal control to infer information faster. His scientific interests include the Pontryagin principle, sensor fusion, vision analysis applications, synthesis and research in the foundations of inference and machine learning methods, variational programming, and grey-box variance-free approaches.
Robust Kalman Filtering (Florian Pfaff and Benjamin Noack)
Intended audience and pre-requisites
The intended audience comprises users and researchers of stochastic filtering who deal with uncertainties that are not purely stochastic, such as discretization uncertainty and set-membership constraints, or who deal with negative information. The presented approaches will not only help them model their systems in a more general way but can also help them reduce the non-linearity of their system and measurement models. Attendees must be familiar with the Kalman filter to take full advantage of this tutorial.
The optimality of the Kalman filter depends not only on an accurate, linear model but also on perfectly known parameters of the prior and noise distributions. This requirement is not specific to the Kalman filter but is an inherent problem deeply rooted in Bayesian filtering and, in part, also in frequentist statistics. Attendees will learn how this problem can be overcome by using hybrid approaches that rely on a combination of stochastic and set-membership methods. The approach is thoroughly explained, along with solutions to the new challenges that arise. Furthermore, using the example of event-based estimation, attendees will learn how these versatile approaches not only improve the modeling of the true uncertainty but also help to make use of the absence of information.
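As a rough illustration of combining the two uncertainty types, the simplified scalar sketch below (not the specific algorithms of the tutorial) propagates a stochastic variance and an unknown-but-bounded error bound side by side through the same Kalman gain; all names and numbers are illustrative.

```python
def hybrid_update(x, P, b, z, R, e):
    """One scalar measurement update that tracks both uncertainty types.

    x, P : stochastic part - mean and variance of the state estimate
    b    : set-membership part - bound on an unknown-but-bounded state error, |err| <= b
    z, R : measurement and its noise variance
    e    : bound on the unknown-but-bounded measurement error, |v| <= e
    """
    K = P / (P + R)                       # standard Kalman gain (scalar, H = 1)
    x_new = x + K * (z - x)               # stochastic mean update
    P_new = (1.0 - K) * P                 # stochastic variance update
    b_new = (1.0 - K) * b + K * e         # worst-case bound propagates linearly
    return x_new, P_new, b_new

x, P, b = 0.0, 4.0, 1.0                   # prior: variance 4, bounded error <= 1
x, P, b = hybrid_update(x, P, b, z=2.0, R=1.0, e=0.5)
print(x, P, b)                            # -> 1.6 0.8 0.6
```

The estimate is then characterized by a Gaussian part plus a guaranteed interval around it, so both stochastic noise and non-stochastic model error are represented explicitly.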
Florian Pfaff is a postdoctoral researcher at the Intelligent Sensor-Actuator-Systems Laboratory at the Karlsruhe Institute of Technology. He obtained his diploma in 2013 and his Ph.D. in 2018, both with the highest distinction. His research interests include a variety of estimation problems such as filtering on nonlinear manifolds, multitarget tracking, and estimation in the presence of both stochastic and non-stochastic uncertainties.
Benjamin Noack is a professor at the Otto von Guericke University Magdeburg and head of the Autonomous Multisensor Systems group. His research interests are in the areas of multi-sensor data fusion, distributed and decentralized Kalman filtering, combined stochastic and set-membership approaches to state estimation, and event-based systems.
RGB-D Odometry and SLAM (Javier Civera)
Intended audience and pre-requisites
The tutorial targets newcomers and professionals with an interest in RGB-D odometry and SLAM. Specific knowledge of computer vision or image processing is not required; the tutorial will cover the necessary background. Basics of algebra, calculus and probability at bachelor level are the only pre-requisites.
The emergence of modern RGB-D sensors, which combine photometric and depth information, has had a significant impact in many application fields, including robotics, augmented reality (AR) and 3D scanning. They are low-cost, low-power and small-sized alternatives to traditional range sensors such as LiDAR. Unlike RGB cameras, RGB-D sensors provide direct depth information, removing the need for frame-by-frame triangulation for 3D scene reconstruction. These merits have made them very popular in mobile robotics and AR, where it is of great interest to estimate egomotion (odometry) and 3D scene structure. Such spatial understanding can enable robots to navigate autonomously and allow AR users to insert virtual entities consistent with the image stream.

In this tutorial, we will review common formulations of odometry and Simultaneous Localization and Mapping (SLAM) using RGB-D as input. The two topics are closely related: the former aims to track the incremental camera motion with respect to a local map of the scene, and the latter to jointly estimate the camera trajectory and the global map with consistency. In both cases, the standard approaches minimize a cost function using nonlinear optimization techniques.

We will cover three main aspects. In the first part, we will introduce the basic concepts of odometry and SLAM and motivate the use of RGB-D sensors; we will also give mathematical preliminaries relevant to most odometry and SLAM algorithms. In the second part, we will detail the three main components of SLAM systems: camera pose tracking, scene mapping and loop closing. For each component, we will describe the most relevant approaches in the literature. In the final part, we will provide a brief discussion of the expected performance and limitations of current algorithms, and will review advanced research topics with references to the state of the art.
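As the description notes, RGB-D sensors remove the need for triangulation because each pixel carries a depth measurement. A minimal sketch of per-pixel back-projection under the standard pinhole model illustrates this; the intrinsic parameter values below are illustrative (typical of a VGA-resolution depth camera), not from the tutorial.

```python
def backproject(u, v, depth, fx, fy, cx, cy):
    """Back-project pixel (u, v) with measured depth into a 3-D camera-frame point.

    (fx, fy) are the focal lengths in pixels and (cx, cy) the principal point,
    following the standard pinhole camera model.
    """
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return (x, y, depth)

# A pixel at the principal point maps onto the optical axis at the measured depth
point = backproject(320.0, 240.0, 2.0, fx=525.0, fy=525.0, cx=320.0, cy=240.0)
print(point)   # -> (0.0, 0.0, 2.0)
```

Applying this to every pixel of a depth image yields the local point cloud that pose tracking aligns against the map, which is why no multi-frame triangulation step is needed.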
Javier Civera was born in Barcelona, Spain, in 1980. He received his Industrial Engineering degree in 2004 and his Ph.D. degree in 2009, both from the University of Zaragoza in Spain. He is currently an Associate Professor at the University of Zaragoza, where he teaches courses in computer vision, machine learning and control engineering. He has been an invited speaker at several workshops at IEEE ICRA and IEEE/RSJ IROS, has given tutorials at MFI 2020, ITSNT 2017 and the Robotic Vision Summer School 2017, and has been an invited lecturer at the Universidad de Buenos Aires (Argentina) and the Université Picardie Jules Verne (France). He has participated in and led several EU-funded, national and technology transfer projects related to computer vision and robotics, and has been funded for research visits to Imperial College London and ETH Zürich. He has co-authored more than 50 publications in top conferences and journals, receiving more than 5,100 citations (Google Scholar). He has served as an Associate Editor for IEEE T-ASE, IEEE RA-L, IEEE ICRA and IEEE/RSJ IROS. His current research interests are in the use of multi-view geometry and machine learning to produce robust and real-time visual SLAM technologies for robotics, wearables and AR applications.
Emergent Universal Turing Machines in Developmental Networks:
Vision, Audition, Natural Languages, APFGP and Imitation (Juyang Weng)
Intended audience and pre-requisites
Professors, industrial researchers, practitioners, postdoctoral researchers, graduate students, AI writers, news reporters, government AI policy makers, AI philosophers, and AI fans.
Multisensor fusion and integration for robots needs a general-purpose theory, and experimental studies also require such a theory. Finite automata (a.k.a. finite-state machines) are taught in almost all electrical engineering programs. However, Turing machines, especially universal Turing machines (UTMs), have not been taught in many electrical engineering programs and have been dropped as a required course in many computer science and engineering programs. This has resulted in a major knowledge weakness among many people working on neural networks for intelligent robots. Without knowing UTMs, researchers have considered neural networks merely as general-purpose function approximators instead of general-purpose computers. This tutorial first briefly explains what a Turing machine is, what a UTM is, why a UTM is a general-purpose computer, and why traditional Turing machines and UTMs are all symbolic and handcrafted for a specific task. In contrast, strong AI by an intelligent robot must program itself throughout its lifetime instead of being programmed for a specific task. The Developmental Network (DN) by Weng et al. is a new kind of neural network that avoids the controversial post-selection of deep-learning networks after they have been trained. A DN learns to become a general-purpose computer by learning an emergent UTM directly from the physical world, as a human child does. Because of this fundamental capability, a UTM inside a DN emerges autonomously on the fly, realizing APFGP (Autonomous Programming For General Purposes), conscious learning and autonomous imitation (from observing demonstrations). The three well-known bottleneck problems in AI (vision, audition, and natural language understanding) are treated experimentally as early examples of APFGP and will be presented in the tutorial.
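To ground the vocabulary, here is a minimal hand-coded Turing machine simulator, i.e. exactly the symbolic, task-specific kind the tutorial contrasts with emergent UTMs. The rule table and tape encoding are illustrative choices, not material from the tutorial.

```python
def run_turing_machine(tape, rules, state="s", halt="halt"):
    """Simulate a one-tape Turing machine.

    tape  : dict position -> symbol ('_' denotes blank)
    rules : dict (state, symbol) -> (next_state, write_symbol, move),
            with move in {-1, +1} (left/right)
    """
    head = 0
    while state != halt:
        symbol = tape.get(head, "_")
        state, write, move = rules[(state, symbol)]
        tape[head] = write
        head += move
    return tape

# A machine that appends one '1' to a unary number (here: 3 -> 4)
rules = {
    ("s", "1"): ("s", "1", +1),       # skip over the existing 1s
    ("s", "_"): ("halt", "1", +1),    # write a 1 on the first blank and halt
}
tape = run_turing_machine({0: "1", 1: "1", 2: "1"}, rules)
result = "".join(tape[i] for i in sorted(tape))
print(result)   # -> 1111
```

Every part of this machine (states, alphabet, rule table) is handcrafted by the programmer for one task, which is precisely the property that a DN's emergent UTM is claimed to avoid.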
Juyang Weng is a Professor at the Department of Computer Science and Engineering, the Cognitive Science Program, and the Neuroscience Program, Michigan State University, East Lansing, Michigan, USA. He is also a visiting professor at Fudan University, Shanghai, China. He received his BS degree from Fudan University in 1982, and his MS and PhD degrees from the University of Illinois at Urbana-Champaign in 1985 and 1989, respectively, all in Computer Science. From August 2006 to May 2007, he was a visiting professor at the Department of Brain and Cognitive Sciences at MIT. His research interests include computational biology, computational neuroscience, computational developmental psychology, biologically inspired systems, computer vision, audition, touch, behaviors, and intelligent robots. He is the author or coauthor of over 250 research articles. He is an editor-in-chief of the International Journal of Humanoid Robotics and an associate editor of the IEEE Transactions on Autonomous Mental Development. He has chaired and co-chaired several conferences, including the NSF/DARPA-funded Workshop on Development and Learning 2000 (1st ICDL), 2nd ICDL (2002), 7th ICDL (2008), 8th ICDL (2009), and INNS NNN 2008. He was the Chairman of the Governing Board of the International Conferences on Development and Learning (ICDLs) (2005-2007), chairman of the Autonomous Mental Development Technical Committee of the IEEE Computational Intelligence Society (2004-2005), an associate editor of the IEEE Transactions on Pattern Analysis and Machine Intelligence, and an associate editor of the IEEE Transactions on Image Processing. He was the General Chair of the AIML Contest 2016 and taught BMI 831, BMI 861 and BMI 871, which prepared the contestants for the AIML Contest session at IJCNN 2017 in Alaska. AIML Contests have run annually since 2016. He has presented 40 conference tutorials, including one during ICDL-EpiRob 2020. He is a Fellow of the IEEE. Web: http://www.cse.msu.edu/~weng/
Analytic Combinatorics for Multi-Object Tracking (Roy L. Streit, R. Blair Angle and Murat Efe)
Intended audience and pre-requisites
The intended audience comprises engineers, Ph.D. students, and anyone with an interest in multi-object tracking and data fusion. The development should be of special interest to those working in what is often called random finite sets (or finite point processes), and to those working on large problems requiring principled approximations. Open discussion of problems and specific interests is welcome.
A first course in probability and/or signal processing.
Exact solutions of many problems in tracking have high computational complexity and are impractical for all but the smallest of problems. Practical implementations entail approximation. There is a bewildering variety of established trackers available, and practicing engineers and researchers often study them almost in isolation from each other, without fully understanding what these trackers are about and how they are inter-related. One reason for this is that these filters face different combinatorial problems, which are traditionally approached by explicitly enumerating the feasible solutions. The enumeration is usually a highly detailed, hard-to-understand accounting scheme specific to the filter; the details cloud understanding of the filter and make it hard to compare different filters. The analytic combinatorics approach presented in this tutorial, on the other hand, avoids the heavy accounting burden and provides a solid tool to work with. This tool is the derivative of multivariate calculus, which all engineers readily understand.
This tutorial is designed to facilitate understanding of the classical theory of Analytic Combinatorics (AC) and how to apply it to problems in multi-object tracking. AC is an economical technique for encoding combinatorial problems—without information loss—into the derivatives of a generating function (GF). Exact Bayesian filters derived from the GF avoid the heavy accounting burden required by traditional enumeration methods. Although AC is an established mathematical field, it is not widely known in either the academic engineering community or the practicing data fusion/tracking community. This tutorial lays the groundwork for understanding the methods of AC, starting with the GF for the classical Bayes-Markov filter. From this cornerstone, we derive many established filters (e.g., PDA, JPDA, JIPDA, PHD, CPHD, MultiBernoulli, MHT) with simplicity, economy, and insight. We also show how to use the saddle point method (method of stationary phase) to find low complexity approximations of probability distributions and summary statistics.
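As a toy illustration of how a generating function encodes a combinatorial problem without information loss (an illustrative sketch, not material from the tutorial), consider independent detectors with detection probabilities p_i. The GF of the number of detections is G(x) = prod_i (1 - p_i + p_i * x), and the coefficient of x^k, equal to the k-th derivative G^(k)(0)/k!, is the probability of exactly k detections.

```python
def detection_count_gf(probs):
    """Coefficients of G(x) = prod_i (1 - p_i + p_i * x).

    coeff[k] is the probability that exactly k detectors fire, i.e. the
    k-th derivative of G evaluated at x = 0, divided by k!.
    """
    coeff = [1.0]
    for p in probs:
        new = [0.0] * (len(coeff) + 1)
        for k, c in enumerate(coeff):
            new[k] += (1.0 - p) * c      # detector misses: power of x unchanged
            new[k + 1] += p * c          # detector fires: multiply by p * x
        coeff = new
    return coeff

probs = [0.9, 0.8, 0.5]                  # illustrative detection probabilities
coeff = detection_count_gf(probs)
print(coeff)   # -> [0.01, 0.14, 0.49, 0.36] up to rounding
```

The derivatives of the GF thus recover every probability in the combinatorial problem without any explicit enumeration of detection patterns, which is the economy the tutorial exploits at much larger scale.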
Roy Streit is a Senior Scientist at Metron, Reston, Virginia, and Professor (Adjunct) of Electrical and Computer Engineering, University of Massachusetts–Dartmouth. IEEE Fellow. IEEE AESS Board of Governors, 2016-18. President, ISIF, 2012. His research interests include multi-target tracking, multi-sensor data fusion, medical imaging, signal processing, pharmacovigilance, and business analytics. Author of Poisson Point Processes, Springer, 2010 (Chinese translation, Science Press, 2013). Co-author of Bayesian Multiple Target Tracking, 2nd Edition, Artech, 2014. He holds seven US patents. He is a co-author of the book Analytic Combinatorics for Multiple Object Tracking, Springer, scheduled to be published in December 2020.
Blair Angle is a senior research scientist at Metron, Inc. Since joining Metron in 2008, he has worked as the technical lead on a variety of projects involving mathematical and statistical modeling, machine learning, tracking, simulation, signal processing, and software development. During his tenure at Metron, he has written or co-written several proposals for DARPA, ONR, etc. which have led to new Metron funding and research. His current research involves multiple-object tracking, with a focus on applying analytic combinatorial (AC) methods to data association problems. Along with Dr. Roy Streit, he recently developed and implemented a working version of the Multisensor JiFi (JPDA intensity Filter), a multisensor, multiobject tracking filter for extended objects. He is the co-author of the book entitled Analytic Combinatorics for Multiple Object Tracking, Springer, scheduled to be published in December 2020.
Murat Efe is an IEEE Senior Member and Professor and Head of the Electrical and Electronics Engineering Department at Ankara University. He has published numerous papers in refereed journals, conferences, and seminars on target tracking and data fusion. He is an Associate Editor for IEEE Transactions on Aerospace and Electronic Systems and was one of the lecturers for the NATO-CSO Lecture Series “Radar and SAR Systems for Airborne and Space-based Surveillance and Reconnaissance” between 2013 and 2017, during which a total of 13 countries (Italy, UK, France, Spain, Germany, Romania, US, Canada, Portugal, Lithuania, Bulgaria, Poland and Australia) were visited for these lectures. Dr. Efe is a technical consultant to a number of defense companies on tracking- and fusion-related projects. He also served on the executive board of the Electrical, Electronics and Informatics Research Group of the Scientific and Technological Research Council of Turkey. Dr. Efe has been a member of the Board of Directors of ISIF for the terms 2014-2016, 2017-2019 and 2020-2023. He is a co-author of the book Analytic Combinatorics for Multiple Object Tracking, Springer, scheduled to be published in December 2020.