Mehmet Dogar University of Leeds (School of Computing)
Farbod Farshidian ETH Zurich (Robotic Systems Lab)
Vassilios Tsounis ETH Zurich (Robotic Systems Lab)
Matias Mattamala University of Oxford (Oxford Robotics Institute)
Prof. Yannis Smaragdakis University of Athens (Dept. of Informatics)
Prof. Efstratios Gavves University of Amsterdam (QUVA Deep Vision Lab)
Prof. Kostas Alexis Norwegian University of Science and Technology (Department of Engineering Cybernetics)
Prof. Cristina Piazza Technical University of Munich (TUM)
Dr. Alperen Acemoglu Italian Institute of Technology (Advanced Robotics Lab)
Dr. Michael Everett MIT (Aerospace Controls Lab)
Dr. Justin Yim Carnegie Mellon University (Robomechanics Lab)
Dr. Joao Bimbo Yale University (The GRAB lab)
Abstract: I will give an overview of our work on robotic object manipulation. First, I will talk about physics-based planning, i.e., robot motion planners that use predictions about the motion of contacted objects. We have been particularly interested in developing such planners for cluttered scenes, where multiple objects might move simultaneously as a result of robot contact. Second, as time permits, I will talk about a more conventional grasping-based problem that we have recently been working on, where a robot must manipulate an object while external forceful operations are applied to it. Imagine a robot holding and moving a wooden board for you while you drill holes into the board and cut parts of it. I will describe our efforts in developing a planner that addresses the geometric, force-stability, and human-comfort constraints of such a system.
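To make the physics-based planning idea concrete, here is a minimal editorial sketch (not the speaker's actual planner): candidate pushes are rolled out through a crude quasi-static 2D contact model, and a push is accepted only if the target gets closer to its goal while every predicted object motion stays inside the workspace. The contact model, object names, and all parameters below are illustrative assumptions.

```python
# Sketch of physics-based push planning in clutter (illustrative only).
import numpy as np

def predict_push(objects, start, direction, length, radius=0.05):
    """Crude physics model: any object within `radius` of the swept line
    is translated along the push direction (quasi-static assumption)."""
    direction = direction / np.linalg.norm(direction)
    moved = {}
    for name, pos in objects.items():
        rel = pos - start
        along = np.clip(rel @ direction, 0.0, length)
        closest = start + along * direction
        if np.linalg.norm(pos - closest) < radius:
            moved[name] = pos + (length - along) * direction  # swept along
        else:
            moved[name] = pos
    return moved

def plan_push(objects, target, goal, workspace=1.0, samples=100, rng=None):
    """Sample straight-line pushes; return the first whose predicted outcome
    moves `target` toward `goal` without ejecting clutter from the workspace."""
    if rng is None:
        rng = np.random.default_rng(0)
    for _ in range(samples):
        angle = rng.uniform(0, 2 * np.pi)
        direction = np.array([np.cos(angle), np.sin(angle)])
        start = objects[target] - 0.1 * direction   # approach from behind
        outcome = predict_push(objects, start, direction, length=0.2)
        in_bounds = all(np.all(np.abs(p) < workspace) for p in outcome.values())
        closer = (np.linalg.norm(outcome[target] - goal)
                  < np.linalg.norm(objects[target] - goal))
        if in_bounds and closer:
            return start, direction, outcome
    return None

objects = {"mug": np.array([0.3, 0.2]), "box": np.array([0.35, 0.25])}
print(plan_push(objects, "mug", goal=np.array([0.6, 0.2])))
```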
Mehmet Dogar Object manipulation with physics-based models, University of Leeds (School of Computing)
I am an Associate Professor at the School of Computing, University of Leeds, UK. My research focuses on robotic object manipulation, and I lead a group of researchers at Leeds on this topic. I am a Fellow of the EPSRC and the Alan Turing Institute. I am an Area Chair for the RSS conference and an Associate Editor for IEEE Robotics and Automation Letters (RA-L). Previously, I was a postdoctoral researcher at CSAIL, MIT. I received my PhD in 2013 from the Robotics Institute at CMU.
Abstract: Reinforcement learning can produce robust feedback policies for the motion control of robots in challenging environments. However, it can take significant tuning and many shaping rewards to create desirable behavior. Furthermore, it can be challenging for learning algorithms to discover exact movements over long time horizons via random exploration. On the other hand, reward shaping is less of an issue for model-based planning methods such as Model Predictive Control (MPC), since constraints can easily be added and tuned relatively quickly. However, these methods still rely heavily on the formulation's differentiability and have difficulty dealing with sparse and discontinuous reward/cost signals. In this talk, I will present some of our recent work on combining model-based and learning-based approaches in order to exploit knowledge of the system dynamics while effectively exploring the environment. I will show how combining ideas from these two frameworks results in more general algorithms that can be applied to a broader set of problems. In particular, I will show how learning a cost function for an MPC and distilling its output into a neural network allows us to use MPC on problems with more general costs [2] and to reduce the deployment cost [3]. I will further show how adding constraints to a sampling-based method can significantly boost its sample efficiency, enabling deployment in online applications such as locomotion control [4]. Finally, I will present a pipeline that combines a learning-based locomotion policy with model-based manipulation control for legged mobile manipulators, which can adapt to various manipulator configurations and tasks. Further Material: [1] DeepGait: https://arxiv.org/abs/1909.08399 [2] Deep Value MPC: https://arxiv.org/abs/1910.03358 [3] MPC-Net: https://arxiv.org/abs/1909.05197 [4] PISOC: https://journals.sagepub.com/doi/pdf/10.1177/02783649211047890 [5] DeepGait: Planning and Control of Quadrupedal Gaits using Deep Reinforcement Learning (Presentation) [https://youtu.be/jIKhnWzcdbg] [6] Combining Learning-based Locomotion with Model-based Manipulation for Legged Mobile Manipulators [https://youtu.be/pIAP1Cx3Nu0] [7] OCS2: An open source library for Optimal Control of Switched Systems [https://github.com/leggedrobotics/ocs2]
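As a rough illustration of the distillation idea behind [3] (not the actual MPC-Net/OCS2 implementation), the sketch below trains a small network to imitate an MPC oracle. The `mpc_solve` stand-in is a hand-tuned linear feedback law for a double integrator, an assumption made purely to keep the example self-contained.

```python
# Sketch: distill an MPC "teacher" into a cheap neural-network policy.
import torch
import torch.nn as nn

def mpc_solve(x):
    """Placeholder MPC oracle: LQR-like feedback for a double integrator.
    A real setup would call an online solver (e.g. one built with OCS2)."""
    K = torch.tensor([[1.0, 1.7]])   # hand-tuned gain, illustration only
    return -(K @ x.T).T              # control for each state in the batch

policy = nn.Sequential(nn.Linear(2, 32), nn.Tanh(), nn.Linear(32, 1))
optim = torch.optim.Adam(policy.parameters(), lr=1e-3)

for step in range(2000):
    x = 4.0 * torch.rand(64, 2) - 2.0       # sample states in a box
    with torch.no_grad():
        u_star = mpc_solve(x)               # query the expensive teacher
    loss = nn.functional.mse_loss(policy(x), u_star)
    optim.zero_grad()
    loss.backward()
    optim.step()

# At deployment, policy(x) replaces the costly online MPC solve.
```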
Farbod Farshidian Combining Model-based and Learning-based Frameworks for Motion Planning and Control of Robotic Systems, ETH Zurich (Robotic Systems Lab)
Dr. Farbod Farshidian (he/him) is a senior scientist at the Robotic Systems Lab, ETH Zurich. He received his M.Sc. in electrical engineering from the University of Tehran, Iran, in 2012 and his Ph.D. from ETH Zurich, Switzerland, in 2017, on motion planning and control of legged systems. His research focuses on the motion planning and control of mobile robots, with the aim of developing algorithms and techniques that enable these robotic platforms to operate autonomously in real-world applications. His expertise covers optimization-based control, reinforcement learning, motion planning of legged systems, and mobile manipulators.
Abstract: I will present our work on perceptive motion planning and control for legged locomotion on non-flat terrain, which combines deep Reinforcement Learning (RL), Trajectory Optimization (TO), and nonlinear Model-Predictive Control (MPC). We aptly refer to this work as DeepGait [1]; it is centered on training neural-network policies for terrain-aware foothold selection and CoM motion generation using RL. However, instead of crafting the MDP's transition dynamics using physical simulation, as is typical in this setting, we employ a TO solver to evaluate the feasibility of individual transitions. Doing so allows us to train kinodynamic policies that account for both the terrain geometry and the dynamic capabilities of the robot, without incurring the high sample complexity of temporally dense and computationally costly physics simulations. Moreover, this formulation allows us to use both fixed-gait and gait-free variants. Once a gait planner (GP) has been trained, it can either be used together with RL to train lower-level motion-control policies or provide motion references to a nonlinear MPC / whole-body tracking controller. Further Material: [1] DeepGait: https://arxiv.org/abs/1909.08399 [2] Deep Value MPC: https://arxiv.org/abs/1910.03358 [3] MPC-Net: https://arxiv.org/abs/1909.05197 [4] PISOC: https://journals.sagepub.com/doi/pdf/10.1177/02783649211047890 [5] DeepGait: Planning and Control of Quadrupedal Gaits using Deep Reinforcement Learning (Presentation) [https://youtu.be/jIKhnWzcdbg] [6] Combining Learning-based Locomotion with Model-based Manipulation for Legged Mobile Manipulators [https://youtu.be/pIAP1Cx3Nu0] [7] OCS2: An open source library for Optimal Control of Switched Systems [https://github.com/leggedrobotics/ocs2]
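The toy sketch below illustrates the transition-evaluation idea (it is not the DeepGait implementation): the MDP step queries a feasibility check in place of a physics simulator. The hypothetical `to_feasible` stand-in uses hand-picked reach and step-height limits where a real system would solve a trajectory-optimization problem.

```python
# Sketch: an RL transition whose dynamics are a TO feasibility query.
import numpy as np

def to_feasible(state, footholds):
    """Stand-in feasibility test: accept footholds within kinematic reach
    and with bounded step height (a real system would run a TO solver)."""
    reach_ok = all(np.linalg.norm(f[:2] - state[:2]) < 0.45 for f in footholds)
    height_ok = all(abs(f[2] - state[2]) < 0.15 for f in footholds)
    return reach_ok and height_ok

def step(state, action_footholds):
    """One MDP transition: a feasible action advances the base pose toward
    the foothold centroid; an infeasible one terminates with a penalty."""
    if not to_feasible(state, action_footholds):
        return state, -1.0, True            # infeasible -> episode ends
    next_state = np.mean(action_footholds, axis=0)
    return next_state, 0.1, False           # small progress reward

state = np.zeros(3)
footholds = [np.array([0.3, 0.1, 0.05]), np.array([0.3, -0.1, 0.05])]
print(step(state, footholds))
```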
Vassilios Tsounis DeepGait: Planning and Control of Quadrupedal Gaits using Deep Reinforcement Learning, ETH Zurich (Robotic Systems Lab)
Vassilios Tsounis (he/him) received a Diploma degree in electrical and computer engineering from the National Technical University of Athens, Greece, in 2014. After a brief internship at the European Space Agency and a period working in industry over 2014-15, he joined the Robotic Systems Lab at ETH Zürich, Switzerland, as a Research Assistant and later a Doctoral Researcher. His research interests include machine learning and control as applied to the perception and locomotion of quadrupedal robots.
Abstract: When modelling estimation or control problems in robotics, we usually deal with different representations and conventions. Physical representations describe how the robot moves; these are supported by geometric insights, which we then use to define a mathematical problem through algebraic expressions derived from probabilistic formulations. However, it is quite common to treat each step as an independent design process, which hides important relationships between the steps. In this talk we will review how all these ideas are connected, using some simple state-estimation examples. By understanding the manifold nature of the variables we work with in robotics, we can derive principled techniques for manipulating deterministic and probabilistic quantities, as well as general algorithms for estimation and control. Further Material: GTSAM blog posts (https://gtsam.org/2021/02/23/uncertainties-part1.html)
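As a small illustration of the manifold-aware manipulation the talk motivates (an editorial sketch, not material from the talk), here is how 2D rotations on SO(2) can be updated and compared through exp/log (box-plus/box-minus) maps rather than plain vector arithmetic:

```python
# Sketch: principled updates of rotations on the SO(2) manifold.
import numpy as np

def exp_so2(theta):
    """Exponential map: tangent scalar -> rotation matrix."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s], [s, c]])

def log_so2(R):
    """Logarithm map: rotation matrix -> tangent scalar."""
    return np.arctan2(R[1, 0], R[0, 0])

def boxplus(R, delta):      # R [+] delta: perturb on the manifold
    return R @ exp_so2(delta)

def boxminus(R1, R2):       # R1 [-] R2: difference in the tangent space
    return log_so2(R2.T @ R1)

# Naively averaging the angles 179 deg and -179 deg gives 0 deg; the
# manifold update stays near +-180 deg, the geometrically correct answer.
Ra, Rb = exp_so2(np.deg2rad(179)), exp_so2(np.deg2rad(-179))
mean = boxplus(Ra, 0.5 * boxminus(Rb, Ra))
print(np.rad2deg(log_so2(mean)))   # ~180, not 0
```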
Matias Mattamala On physical, algebraic, geometric and probabilistic descriptions in robotics, University of Oxford (Oxford Robotics Institute)
Matias Mattamala completed a B.Sc., Ing. Civil, and M.Sc. in Electrical Engineering at the Universidad de Chile. He developed a visual-proprioceptive SLAM system for humanoids for his master's degree and was a member of the UChile RoboCup SPL team. Currently he is a PhD student in the Dynamic Robot Systems (DRS) group at the Oxford Robotics Institute (ORI), supervised by Prof. Maurice Fallon. He is working toward the development of long-term visual navigation systems for dynamic quadruped robots. His research interests include computer vision and state estimation, as well as mathematical methods and geometry in robotics.
A recorded talk from SPLASH 2019.
Prof. Yannis Smaragdakis Why Do a PhD and How to Pick an Area, University of Athens (Dept. of Informatics)
Yannis Smaragdakis (born 1972 in Athens, Greece) is a Greek-American computer scientist and Professor at the National and Kapodistrian University of Athens, as well as an Adjunct Professor at the University of Massachusetts Amherst. He is known for his work in software engineering and programming languages. In software engineering, he is noted for inventing the concept of Mixin Layers in his PhD thesis and for formulating Yannis's Law of programmer productivity which, by analogy to Moore's Law, posits that programmer productivity doubles every 6 years. In programming languages, he is noted for his work on pointer analysis and for serving as Program Chair (2016) and Conference Chair (2019) of Object-Oriented Programming, Systems, Languages & Applications (OOPSLA).
Visual artificial intelligence automatically interprets what happens in visual data like videos. Today's research grapples with queries like: "Is this person playing basketball?"; "Find the location of the brain stroke"; or "Track the glacier fractures in satellite footage". All these queries concern visual observations that have already taken place: today's algorithms focus on explaining past visual observations. Naturally, not all queries are about the past: "Will this person draw something in or out of their pocket?"; "Where will the tumour be in 5 seconds, given breathing patterns and moving organs?"; or "How will the glacier fracture, given the current motion and melting patterns?". For these queries and others like them, the next generation of visual algorithms must anticipate what happens next given past visual observations. Visual artificial intelligence must also be able to prevent events before the fact, rather than only explain them after it. In this talk, I will present my vision of what these algorithms should look like and investigate possible synergies with other fields of science, such as biomedical research and astronomy. Furthermore, I will present some recent works and applications in this direction from my lab and spinoff.
Prof. Efstratios Gavves The Machine Learning of Time: Past and Future, University of Amsterdam (QUVA Deep Vision Lab)
Dr. Efstratios Gavves is an Associate Professor at the University of Amsterdam in the Netherlands, Scientific Director of the QUVA Deep Vision Lab, and an ELLIS Scholar. He is a recipient of an ERC Starting Grant 2020 and an NWO VIDI grant 2020 for research on the computational learning of temporality for spatiotemporal sequences. He is also a co-founder of Ellogon.AI, a university spinoff created in collaboration with the Dutch Cancer Institute (NKI), with the mission of using AI for pathology and genomics. Efstratios has authored several papers in the top Computer Vision and Machine Learning conferences and journals, and he is also the author of several patents. His research focus is on Temporal Machine Learning and Dynamics, Efficient Computer Vision, and Machine Learning for Oncology.
This talk will present our contributions in the domain of field-hardened, resilient robotic autonomy, specifically on multi-modal, sensing-degraded, GPS-denied localization and mapping, informative path planning, and robust control to facilitate reliable access, exploration, mapping, and search of challenging environments such as subterranean settings. The presented work will, among other topics, emphasize fundamental developments taking place in the framework of the DARPA Subterranean Challenge and the research of the CERBERUS (https://www.subt-cerberus.org/) team, alongside work on nuclear site characterization and infrastructure inspection. Relevant field results from both active and abandoned underground mines, as well as tunnels in the U.S. and in Switzerland, will be presented. In addition, a selected set of prior works on long-term autonomy, including the world record in unmanned aircraft endurance, will be briefly overviewed. The talk will conclude with directions for future research to enable advanced autonomy and resilience, alongside the necessary connection to education and the potential for major broader impacts to the benefit of our economy and society.
Prof. Kostas Alexis Field-hardened Resilient Robotic Autonomy, Norwegian University of Science and Technology (Department of Engineering Cybernetics)
Kostas Alexis obtained his Ph.D. in the field of aerial robotics control and collaboration from the University of Patras, Greece, in 2011. His Ph.D. research was supported by the Greek National-European Commission Excellence scholarship. After successfully defending his Ph.D. thesis, he was awarded a Swiss Government fellowship and moved to ETH Zurich in Switzerland. From 2012 to June 2015 he held the position of Senior Researcher at the Autonomous Systems Lab of ETH Zurich, leading the lab's efforts in the fields of control and path planning for advanced navigational and operational autonomy. In summer 2015 he moved to the Computer Science & Engineering Department of the University of Nevada, Reno, where he received tenure in 2020. Since Fall 2020 he has been a Full Professor in the Department of Engineering Cybernetics at the Norwegian University of Science and Technology. He is the founder and director of the Autonomous Robots Lab (https://www.autonomousrobotslab.com/), which involves more than 15 researchers and conducts research in the domain of autonomy, perception, planning, and control. Dr. Alexis' research has received multiple awards, includes the world record in unmanned aircraft endurance, and has been funded by a variety of sources, including DARPA, NSF, DOE, USDA, NASA, the European Commission, the Norwegian Research Council, the private sector, and other sources.
Since the 16th century, science and engineering have endeavored to match the richness and complexity of the human hand's sensory-motor system. In the last decade, novel theories and technologies, e.g. soft robotics and the simplification of mechanical design, have suggested a promising new direction towards the next generation of high-tech bionic aids. This talk aims to exploit the potential of these emerging trends and proposes new strategies to optimise the performance of an artificial hand, achieving a useful trade-off between grasping performance and mechanical design/control complexity.
Prof. Cristina Piazza New Perspectives in the Design and Control of Bionic Limbs, Technical University of Munich (TUM)
Cristina Piazza is currently Professor for Healthcare and Rehabilitation Robotics at the Technical University of Munich (TUM). She received her PhD in Robotics at the University of Pisa, Italy, before moving to Chicago (USA), where she worked as a Postdoctoral Researcher at the Department of Physical Medicine and Rehabilitation, Northwestern University, and at the Regenstein Foundation Center for Bionic Medicine, Shirley Ryan AbilityLab (formerly the Rehabilitation Institute of Chicago). Her main research interests include the design and control of soft artificial limbs for robotic and prosthetic applications. She also has experience in designing and conducting clinical trials with amputee subjects.
Surgical robots and their control consoles have been developed to perform telesurgery for more than 30 years. The first telesurgery involving a human patient was performed in 2001 and is known as the Lindbergh operation. However, reproducing this feat and popularizing telesurgery was impossible due to the limited availability of surgical robots and the lack of fast, reliable network connections. Now, with the introduction of 5G mobile networks and new surgical robots, the concept is becoming practical. Recently, we performed a remote 5G robotic laser microsurgery experiment in Milan, Italy. In our experiment, surgeons successfully performed complex transoral laser microsurgeries on the vocal cords of an adult human cadaver located 15 km away from them. Our results demonstrate that surgical expertise can be exploited and shared efficiently using the new 5G telecommunication standard. During the talk, we will discuss this 5G telesurgery experience in detail: the telesurgery setup, the integrated robotic systems, the results of the experiments, and current limitations and future directions.
Dr. Alperen Acemoglu 5G Telesurgery, Italian Institute of Technology (Advanced Robotics Lab)
Alperen Acemoglu received his B.Sc. degree in Mechanical Engineering from Istanbul Technical University, Turkey, in 2012 and his M.Sc. degree in Mechatronics Engineering from Sabanci University, Turkey, in 2014. During his master's studies, he worked on bio-inspired microswimmers intended for biomedical applications such as targeted drug delivery and the opening of clogged arteries. He received his Ph.D. degree in Bioengineering and Robotics from Istituto Italiano di Tecnologia (IIT) and the Università degli Studi di Genova, Italy, in 2018. During his Ph.D. studies, he worked on developing a compact magnetic laser scanner to enable high-speed laser scanning and non-contact laser tissue ablation in hard-to-reach surgical sites. Currently, he is a postdoctoral researcher in the Biomedical Robotics Laboratory, Department of Advanced Robotics, IIT. His research interests include medical robotics, 5G telesurgery, laser microsurgery, magnetically actuated micromanipulators, and microswimmers.
Autonomous robots have the potential to transform our everyday lives, yet most of today's robots struggle in the real world. My research addresses this issue by building foundations for robust deep learning and autonomy. In this talk, I will describe some of our recent work on analyzing the robustness properties of deep neural networks, robustifying learned deep RL policies, and estimating the reachable sets of neural feedback loops.
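For a flavor of neural-network reachability analysis (an illustrative baseline, not the speaker's method, which uses tighter relaxations), the sketch below propagates an interval input set through a small ReLU network using interval bound propagation, yielding a crude over-approximation of the reachable output set:

```python
# Sketch: interval bound propagation (IBP) through a ReLU network.
import numpy as np

def ibp_layer(lo, hi, W, b):
    """Interval arithmetic for an affine layer: split W by sign so that
    each output bound uses the worst-case corner of the input box."""
    W_pos, W_neg = np.maximum(W, 0), np.minimum(W, 0)
    new_lo = W_pos @ lo + W_neg @ hi + b
    new_hi = W_pos @ hi + W_neg @ lo + b
    return new_lo, new_hi

def ibp_network(lo, hi, layers):
    for i, (W, b) in enumerate(layers):
        lo, hi = ibp_layer(lo, hi, W, b)
        if i < len(layers) - 1:             # ReLU on hidden layers only
            lo, hi = np.maximum(lo, 0), np.maximum(hi, 0)
    return lo, hi

rng = np.random.default_rng(0)
layers = [(rng.standard_normal((8, 2)), np.zeros(8)),
          (rng.standard_normal((1, 8)), np.zeros(1))]
lo, hi = ibp_network(np.array([-0.1, -0.1]), np.array([0.1, 0.1]), layers)
print(f"output reachable within [{lo[0]:.3f}, {hi[0]:.3f}]")
```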
Dr. Michael Everett Reliability in Robot Learning: Developing resilient autonomous systems for society, MIT (Aerospace Controls Lab)
Michael Everett is a Postdoctoral Associate at the MIT Department of Aeronautics and Astronautics and conducts research in the Aerospace Controls Laboratory. He received the PhD (2020), SM (2017), and SB (2015) degrees from MIT in Mechanical Engineering. His research addresses fundamental gaps at the intersection of machine learning, robotics, and control theory, with recent emphasis on developing the theory of safe and robust neural feedback loops. His work has been recognized with numerous awards and covered by major media outlets.
High-power jumping robots can rapidly traverse large obstacles, but the resulting fast and forceful motion is challenging for control and estimation. In this talk, I will present my work developing a small monopedal jumping robot, Salto-1P. Salto-1P achieved superlative jumping performance and demonstrated precise control, onboard estimation, and dynamic transitions between jumping and balancing.
Dr. Justin Yim Saltatorial Locomotion on Terrain Obstacles, Carnegie Mellon University (Robomechanics Lab)
Justin Yim is a Computing Innovation Fellow at Carnegie Mellon University working with Aaron Johnson. He received his Ph.D. in Electrical Engineering at the University of California, Berkeley advised by Ronald Fearing in 2020. He received a B.S.E. and M.S.E. from the University of Pennsylvania.
In this talk, I will present how tactile and force information can be used to obtain richer information relevant to the grasping and manipulation of objects. I will discuss how tactile perception enables a system to go from raw tactile and force signals to higher-level contact information, such as slippage, object pose, material identification, and collision detection.
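As one concrete example of extracting higher-level contact information from raw signals (an editorial sketch with illustrative thresholds and filter choices, not the speaker's method), the following detects incipient slip as a burst of high-frequency energy in a simulated normal-force signal:

```python
# Sketch: slip detection from high-frequency tactile/force vibrations.
import numpy as np
from scipy.signal import butter, sosfilt, sosfilt_zi

def slip_score(force, fs=1000.0, band=(50.0, 400.0)):
    """Band-pass the force signal and return its sample-wise energy;
    stick-slip vibrations show up as bursts in this band."""
    sos = butter(4, band, btype="bandpass", fs=fs, output="sos")
    zi = sosfilt_zi(sos) * force[0]        # suppress start-up transient
    filtered, _ = sosfilt(sos, force, zi=zi)
    return filtered ** 2

fs = 1000.0
t = np.arange(0, 1.0, 1 / fs)
rng = np.random.default_rng(1)
force = 2.0 + 0.01 * rng.standard_normal(t.size)        # stable grasp
force[600:] += 0.3 * np.sin(2 * np.pi * 120 * t[600:])  # slip vibration
score = slip_score(force, fs)
print("slip detected" if score[650:].mean() > 10 * score[100:500].mean()
      else "stable")
```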
Dr. Joao Bimbo Contact Sensing for Robot Grasping, Yale University (The GRAB lab)
Joao Bimbo is currently a Postdoctoral Associate at the GRAB Lab at Yale University. He obtained his PhD in Robotics at King's College London before moving to the Italian Institute of Technology in Italy. His main research interests are in robot grasping and tactile sensing.