A Brief Introduction
For e-mail inquiries, see the note at the bottom of this page.
Jacopo is a postdoctoral fellow at the University of Toronto (Toronto, ON, Canada) Institute for Aerospace Studies (UTIAS), working with Prof. Angela Schoellig in the Dynamic Systems Lab. His research interests include reinforcement learning, multi-robot systems, probabilistic graphical models, software engineering, and embedded computing. Jacopo holds a Ph.D. degree in computer engineering from Polytechnique Montréal (Montréal, QC, Canada). He received the Laurea Triennale degree in computer engineering from Politecnico di Milano (Milan, Italy) in 2009, the Laurea Specialistica degree in computer engineering, also from Politecnico di Milano, in 2011, and the M.Sc. degree in computer science from the University of Illinois at Chicago (Chicago, IL) in 2012. In 2015, Jacopo was a visiting researcher at the National Institute of Informatics (Tokyo, Japan) and attended the International Space University (ISU)’s Space Studies Program in Athens, OH. In 2017, he also served as a teaching associate for ISU in Cork, Ireland. In 2019, he was a visiting postdoctoral fellow at the European Astronaut Centre (Köln, Germany) and worked as a research associate at the Department of Computer Science and Technology of the University of Cambridge (Cambridge, UK).
- Safe Learning Control (website). As data- and learning-based methods gain traction, researchers must also understand how to leverage them in real-world robotic systems, where implementing and guaranteeing safety is imperative—to avoid costly hardware failures and allow deployment in the proximity of human operators. The last half-decade has seen a steep rise in the number of contributions to this area from both the control and reinforcement learning communities. To demystify and unify the language and frameworks used in control theory and reinforcement learning research—as well as to facilitate fair comparisons between these fields—we propose a set of physics-based benchmarks with intuitive APIs to support the implementation of safe and robust learning control.
safe-control-gym—physics-based reinforcement learning environments with symbolic dynamics and constraints (video)
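To illustrate the kind of interface such benchmark environments expose, here is a minimal, self-contained sketch of a constrained control environment with the usual reset/step loop. The `ToyConstrainedEnv` class, its dynamics, and the `constraint_violation` info key are hypothetical stand-ins for illustration only, not the actual safe-control-gym API.

```python
import random

class ToyConstrainedEnv:
    """Hypothetical stand-in for a constrained control environment.

    1-D point mass: the state is (position, velocity) and the action is a
    bounded force. The safety constraint is |position| <= 1.0.
    """
    DT = 0.1
    POS_LIMIT = 1.0

    def reset(self):
        self.pos, self.vel = 0.0, 0.0
        self.t = 0
        return (self.pos, self.vel)

    def step(self, action):
        force = max(-1.0, min(1.0, action))        # clip to actuator limits
        self.vel += force * self.DT                # integrate dynamics
        self.pos += self.vel * self.DT
        self.t += 1
        reward = -abs(self.pos)                    # objective: stay near origin
        violated = abs(self.pos) > self.POS_LIMIT  # evaluate the constraint
        done = self.t >= 100
        # Constraint status travels in `info`, next to the usual Gym tuple.
        return (self.pos, self.vel), reward, done, {"constraint_violation": violated}

def rollout(env, seed=0):
    """Run one episode with a random policy and count constraint violations."""
    rng = random.Random(seed)
    env.reset()
    violations, done = 0, False
    while not done:
        _, _, done, info = env.step(rng.uniform(-1.0, 1.0))
        violations += info["constraint_violation"]
    return violations

violations = rollout(ToyConstrainedEnv())
```

Reporting constraint violations separately from the reward is what lets safe-RL and robust-control baselines be compared on equal footing: a controller can then be scored on both task performance and safety.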
- Mitacs Elevate postdoctoral fellowship: Multi-agent reinforcement learning for decentralized UAV/UGV cooperative exploration. Over the last decade, artificial intelligence has flourished. From a research niche, it has developed into a versatile tool, seemingly en route to bringing automation into every aspect of human life. At the same time, robotics technology has also advanced significantly, and inexpensive multi-robot systems promise to accomplish the tasks that require both physical parallelism and inherent fault tolerance—such as surveillance and extreme-environment exploration. GDLS-C and the University of Toronto are investigating how to effectively use multi-agent reinforcement learning in field robotics. GDLS-C’s goal is to improve the situational awareness of ground vehicles by using heterogeneous teams of Unmanned Ground Vehicles (UGVs) and swarms of Unmanned Aerial Vehicles (UAVs). Learning decentralized cooperation strategies will improve the resilience of these multi-robot systems—potentially faced with adversarial environments—and, ultimately, the safety of their human operators. Answering our research questions will also enable large collections of robots to learn how to interact with one another—beyond what human designers can attain by hand.
gym-pybullet-drones—a physics-based quadcopter control simulation for multi-agent reinforcement learning (video)
gym-marl-reconnaissance—environments for cooperative multi-agent reinforcement learning in heterogeneous robot teams
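A common convention in multi-agent RL environments is to key observations, actions, and rewards by agent identifier. The toy two-drone rendezvous below is a hypothetical sketch of that per-agent dict convention; the class name, dynamics, and reward are invented for illustration and are not the actual gym-pybullet-drones or gym-marl-reconnaissance interface.

```python
class ToyMultiAgentEnv:
    """Hypothetical two-drone 1-D rendezvous task illustrating the
    per-agent dict convention common in multi-agent RL APIs."""

    N_AGENTS = 2
    DT = 0.1

    def reset(self):
        # Each drone starts at a distinct 1-D position.
        self.pos = {i: float(i) for i in range(self.N_AGENTS)}
        self.t = 0
        return dict(self.pos)  # one observation per agent

    def step(self, actions):
        # `actions` maps agent id -> bounded 1-D velocity command.
        for i, a in actions.items():
            self.pos[i] += max(-1.0, min(1.0, a)) * self.DT
        self.t += 1
        gap = abs(self.pos[0] - self.pos[1])
        rewards = {i: -gap for i in range(self.N_AGENTS)}  # shared objective
        done = self.t >= 50 or gap < 0.05
        return dict(self.pos), rewards, done, {}

env = ToyMultiAgentEnv()
obs = env.reset()
done = False
while not done:
    # Naive decentralized policy: each drone moves toward the other,
    # using only its own observation of the team state.
    acts = {i: obs[1 - i] - obs[i] for i in range(env.N_AGENTS)}
    obs, rewards, done, _ = env.step(acts)
```

With this interface, swapping the hand-coded policy for a learned, per-agent policy requires no change to the environment: each agent still receives its own observation and emits its own action.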
- Mitacs Globalink research at the University of Cambridge’s Prorok Laboratory on adversarial private flocking. Privacy is an important facet of defence against adversaries. In this project, we introduced the problem of private flocking. We considered a team of mobile robots flocking in the presence of an adversary who is interested in identifying their leader. We developed a method to generate private flocking controllers that hide the identity of the leader robot. Our concurrent approach towards privacy improvement leverages machine learning and game theory. To evaluate the performance of our co-optimization scheme, we investigated different classes of reference trajectories for the robots. Although it is reasonable to assume that there is an inherent trade-off between flocking performance and privacy, our results demonstrate that we are able to achieve high flocking performance and simultaneously reduce the risk of revealing the leader.
- ESA’s Networking/Partnering Initiative postdoctoral research: A Symbiotic Human and Multi-Robot Planetary Exploration System. The availability of next-generation heavy launchers—such as NASA’s SLS and SpaceX’s Falcon Heavy—will enable new planetary exploration missions. Most space agencies are now targeting the Moon as the next step in exploration beyond LEO, and many already have plans for precursor robotic and human exploration. For example, ESA has championed the “Moon Village” concept since 2016, and NASA’s 2018 budget includes a Lunar Exploration Campaign. In this context, natural caves are appealing solutions to shelter humans and equipment for long-duration Lunar missions. In 2017, data from JAXA’s Kaguya probe revealed a 50 km-long lava tube. For safety reasons, the preliminary robotic exploration of these tubes is imperative. Multi-robot systems carry the potential for greater efficiency and higher fault tolerance because of their ability to cooperate and their inherent redundancy. Including humans in these systems is desirable to mitigate overall system complexity, but challenging at the interface level.
Jacopo’s doctoral research focused on adaptive computing systems for aerospace. Today’s computer systems are growing more and more complex at an unmanageable pace, and space, in particular, represents a challenging environment for them. Self-adaptive computing carries great promise for the creation of a new generation of smarter, more reliable computers. Drawing from the fields of artificial intelligence, autonomic computing, and reconfigurable systems, we aim to develop self-adaptive computer systems for aerospace. Our goal is to improve the efficiency, fault tolerance, and computational capabilities of aerospace computer systems, allowing them to perform their tasks for longer periods of time and fostering simpler, cheaper space exploration.
From 2012 to 2017, Jacopo acted as the Command & Data-Handling team leader of the student society Société Technique PolyOrbite, developing a 3U nanosatellite for the Canadian Satellite Design Challenge (CSDC), a Canada-wide competition for teams of university students. PolyOrbite placed 3rd overall in the 2012–14 and 2014–16 editions of the CSDC and won the CSDC’s Educational Outreach prize in 2016 (Website, Facebook).
- At the University of Toronto: lab co-development of “AER1216, Fundamentals of Unmanned Aerial Vehicles” and “AER1517, Control for Robotics”.
- At Polytechnique Montréal: laboratory assistant and co-developer of “INF1600, Architecture des Micro-ordinateurs” (Microcomputer Architecture) and “AER8300, Informatique des Systèmes Spatiaux” (Space Systems Computing).
- For the International Space University: teaching associate of team project “Entrepreneurial And Innovation Ecosystem For Space” at SSP17 in Cork, Ireland; visiting lecturer of “GPS-denied Navigation with Miniaturized Quadcopters”.
Publications
- Z. Yuan, A. W. Hall, S. Zhou, L. Brunke, M. Greeff, J. Panerati, and A. P. Schoellig (2022) safe-control-gym: a Unified Benchmark Suite for Safe Learning-based Control and Reinforcement Learning in Robotics - IEEE Robotics and Automation Letters
- L. Brunke, M. Greeff, A. W. Hall, Z. Yuan, S. Zhou, J. Panerati, and A. P. Schoellig (2021) Safe Learning in Robotics: From Learning-Based Control to Safe Reinforcement Learning - Annual Review of Control, Robotics, and Autonomous Systems
- J. Panerati, H. Zheng, S. Zhou, J. Xu, A. Prorok, and A. P. Schoellig (2021) Learning to Fly—a Gym Environment with PyBullet Physics for Reinforcement Learning of Multi-agent Quadcopter Control - IEEE/RSJ International Conference on Intelligent Robots and Systems
- W. Zhao, J. Panerati, and A. P. Schoellig (2021) Learning-based Bias Correction for Time Difference of Arrival Ultra-wideband Localization of Resource-constrained Mobile Robots - IEEE Robotics and Automation Letters
- J. Panerati, B. Ramtoula, D. St-Onge, Y. Cao, M. Kaufmann, A. Cowley, L. Sabattini, and G. Beltrame (2021) On the Communication Requirements of Decentralized Connectivity Control - A Field Experiment - 2021 International Symposium on Distributed Autonomous Robotic Systems
- R. Mitchell, J. Fletcher, J. Panerati, and A. Prorok (2020) Multi-Vehicle Mixed-Reality Reinforcement Learning for Autonomous Multi-Lane Driving - International Conference on Autonomous Agents and Multiagent Systems
- H. Zheng, J. Panerati, G. Beltrame, and A. Prorok (2020) An Adversarial Approach to Private Flocking in Mobile Robot Teams - IEEE Robotics and Automation Letters, Vol.5 no.2
- D. St-Onge, M. Kaufmann, J. Panerati, B. Ramtoula, Y. Cao, E. B. J. Coffey, and G. Beltrame (2019) Planetary exploration with robot teams - IEEE Robotics & Automation Magazine
- J. Panerati, M. A. Schnellmann, C. Patience, G. Beltrame, and G. S. Patience (2019) Experimental Methods in Chemical Engineering: Artificial Neural Networks—ANNs - The Canadian Journal of Chemical Engineering
- J. Panerati, M. Minelli, C. Ghedini, L. Meyer, M. Kaufmann, L. Sabattini, and G. Beltrame (2019) Robust Connectivity Maintenance for Fallible Robots - Autonomous Robots
- J. Panerati, N. Schwind, S. Zeltner, K. Inoue, and G. Beltrame (2018) Assessing the Resilience of Stochastic Dynamic Systems under Partial Observability - PLOS ONE
- M. Minelli, M. Kaufmann, J. Panerati, C. Ghedini, G. Beltrame, and L. Sabattini (2018) Stop, Think, and Roll: Online Gain Optimization for Resilient Multi-robot Topologies - 2018 International Symposium on Distributed Autonomous Robotic Systems
- G. Beltrame, E. Merlo, J. Panerati, and C. Pinciroli (2018) Engineering Safety in Swarm Robotics - 2018 IEEE/ACM International Workshop on Robotics Software Engineering
- M. Kaufmann, J. Panerati, and G. Beltrame (2018) Towards a Symbiotic Human and Multi-Robot Planetary Exploration System: Resilient Topologies for Space Exploration - Robotics: Science and Systems Workshop: Bridging the Gap in Space Robotics 2018
- J. Panerati, L. Gianoli, C. Pinciroli, A. Shabah, G. Nicolescu, and G. Beltrame (2018) From Swarms to Stars: Task Coverage in Robot Swarms with Connectivity Constraints - 2018 International Conference on Robotics and Automation
- J. Panerati, D. Sciuto, and G. Beltrame (2017) Optimization Strategies in Design Space Exploration - Handbook of Hardware/Software Codesign, Springer
- C. Fodé, J. Panerati, P. Desroches, M. Valdatta, and G. Beltrame (2015) Monitoring Glaciers from Space Using a Cubesat - IEEE Communications Magazine
- J. Panerati, M. Maggio, M. Carminati, F. Sironi, M. Triverio, and M. D. Santambrogio (2014) Coordination of Independent Loops in Self-Adaptive Systems - ACM Transactions on Reconfigurable Technology and Systems, Vol.7 no.2
- J. Panerati, and G. Beltrame (2014) A Comparative Evaluation of Multi-Objective Exploration Algorithms for High-Level Design - ACM Transactions on Design Automation of Electronic Systems, Vol.19 no.2
- M. Maggio, H. Hoffmann, A. V. Papadopoulos, J. Panerati, M. D. Santambrogio, A. Agarwal, and A. Leva (2012) Comparison of Decision Making Strategies in Autonomic Computing - ACM Transactions on Autonomous and Adaptive Systems, Vol.7 no.4
- J. Panerati and G. Beltrame (2015) Trading Off Power and Fault-tolerance in Real-time Embedded Systems - 2015 NASA/ESA Conference on Adaptive Hardware and Systems
- J. Panerati, S. Abdi, and G. Beltrame (2014) Balancing System Availability and Lifetime with Dynamic Hidden Markov Models - 2014 NASA/ESA Conference on Adaptive Hardware and Systems
- J. Panerati, F. Sironi, M. Carminati, M. Maggio, G. Beltrame, P. J. Gmytrasiewicz, D. Sciuto, and M. D. Santambrogio (2013) On Self-Adaptive Resource Allocation through Reinforcement Learning - 2013 NASA/ESA Conference on Adaptive Hardware and Systems
I read ALL my e-mails, spam folders included. If I do not reply within a couple of days, I might be having a busy week: feel free to send me a reminder. If I do not reply to the reminder, please assume that I just do not have a good answer, and take this as a broad-spectrum apology.