Tutorials
Tutorial 1
Title: Multi-Agent Systems for Emergency Response – 9:30 AM to 12:00 PM US Central
Abstract:
Emergency response to incidents such as accidents, crimes, and wildfires is a major problem faced by communities. Emergency response management (ERM) comprises several stages and sub-problems, including forecasting, detection, allocation, and dispatch. Principled approaches to each of these problems are necessary to create efficient ERM pipelines. This tutorial will go through the design of principled decision-theoretic and data-driven approaches to tackling emergency incidents. It will discuss data collection, cleansing, and aggregation, as well as some of the models and methods we used to solve an imbalanced classification problem. Further, we will explain how large multi-agent systems can be used to tackle emergency scenarios in dynamic environments under communication and state uncertainty. We will go through fundamental modeling paradigms such as Markov decision processes, semi-Markov decision processes, and partially observable Markov decision processes, and show how promising actions can be found for stochastic control problems. As case studies, we will look specifically at wildfires and road accidents. We will also go through two open-source datasets on traffic accidents and wildfires that we have created for the research community.
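The decision-process paradigms named above share one core computation: picking actions that maximize expected long-run reward. As a self-contained illustration (a hypothetical two-state dispatch toy with made-up numbers, not taken from the tutorial materials), value iteration on a tiny Markov decision process might look like this:

```python
# Toy MDP: a responder is either "idle" or "busy"; numbers are illustrative.
# P[s][a] is a list of (probability, next_state, reward) triples.
P = {
    "idle": {
        "dispatch": [(0.9, "busy", 10.0), (0.1, "idle", -1.0)],
        "wait":     [(1.0, "idle",  0.0)],
    },
    "busy": {
        "wait":     [(0.7, "idle",  0.0), (0.3, "busy", -2.0)],
    },
}

def value_iteration(P, gamma=0.95, tol=1e-8):
    """Iterate the Bellman optimality backup until values stop changing."""
    V = {s: 0.0 for s in P}
    while True:
        delta = 0.0
        for s in P:
            # Q-value of each available action, then greedy maximization.
            q = [sum(p * (r + gamma * V[s2]) for p, s2, r in outcomes)
                 for outcomes in P[s].values()]
            best = max(q)
            delta = max(delta, abs(best - V[s]))
            V[s] = best
        if delta < tol:
            return V

V = value_iteration(P)
```

Semi-Markov and partially observable variants covered in the tutorial generalize this same backup, with random sojourn times and belief states over hidden states, respectively.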
Organizers:
Dr. Ayan Mukhopadhyay is a research scientist at Vanderbilt University, USA. His research interests are multi-agent systems, robust machine learning, and decision-making under uncertainty. Prior to this, he was a post-doctoral research fellow at the Stanford Intelligent Systems Lab at Stanford University, USA, where he was awarded the 2019 post-doctoral fellowship by the Center for Automotive Research at Stanford (CARS). He finished his doctorate in Computer Science at Vanderbilt University’s Computational Economics Research Lab, and his doctoral thesis on robust decision-making for emergency response was nominated for the Victor Lesser Distinguished Dissertation Award 2020. He works on multi-agent systems to tackle societal problems and was recently awarded the Google AI Impact Scholar Award 2021.
Dr. Sayyed Mohsen Vazirizade is a post-doctoral research fellow at Vanderbilt University, Department of Electrical Engineering and Computer Science. As a member of SCOPE (Smart and resilient Computing for Physical Environment), he works on multiple projects, including developing artificial intelligence agents for integrated data-driven technologies for smart cities. He earned his Ph.D. in Civil Engineering from the University of Arizona in 2020. His research focuses on risk and reliability engineering, statistical modeling and prediction, and machine learning.
Contact: Ayan Mukhopadhyay ([email protected]), Sayyed Mohsen Vazirizade ([email protected])
More Information
Audience expectation and prerequisites:
- Basic knowledge of machine learning.
- Some knowledge of decision-making under uncertainty (Markov decision processes).
- We will provide introductory material on both. While some prerequisite knowledge will certainly be helpful, we welcome participants with no prior knowledge in these domains.
Presenters:
- Ayan Mukhopadhyay (Vanderbilt)
- Sayyed Mohsen Vazirizade (Vanderbilt)
- Hemant Purohit (GMU)
- Tina Diao (Stanford) ~ Tentative
Tutorial 2
Title: 3D sensing for autonomous robots and smart infrastructure – 12:30 PM to 2:30 PM US Central
Abstract:
This tutorial aims to give an overview of 3D sensing techniques, methodologies, algorithms, and systems for a variety of applications in autonomous vehicles and smart cities. The tutorial starts by introducing various 3D sensing mechanisms and devices, including lidar, stereo cameras, and radar, as well as basic sensor fusion. Next, the tutorial discusses current 3D sensing and perception problems and tasks using these sensors, ranging from object detection and segmentation to 3D shape manipulation, motion prediction, mapping, odometry and geometry, and robot navigation and control. For these tasks, the tutorial will then cover state-of-the-art approaches using different 3D sensing modalities (for example, widely used point-cloud processing backbones), as well as task-specific methods such as semantic and instance segmentation, simultaneous localization and mapping (SLAM), and path planning. Leveraging these 3D sensing capabilities, the tutorial will also discuss example end-to-end systems from recent literature that explore collaborative sensing and control. The tutorial includes an invited talk from Waymo about the latest lidar perception technology being deployed in real-world autonomous vehicles. Finally, the tutorial concludes with entry points for the audience to get their hands dirty: an overview of available simulation tools, evaluation methods, and benchmarks, followed by a deep-dive demo using one example photorealistic simulator, Carla, to demonstrate how to use 3D sensors, experiment with custom scenarios, visualize and process data, and drive vehicles autonomously.
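To give a flavor of the point-cloud processing the tutorial covers, here is a minimal, dependency-free sketch of one common preprocessing step, voxel-grid downsampling (the point coordinates are made up for illustration; production pipelines would typically use libraries such as Open3D or PCL):

```python
def voxel_downsample(points, voxel_size):
    """Reduce a point cloud by averaging all points that fall in the same voxel.

    points: iterable of (x, y, z) tuples; voxel_size: cell edge length in meters.
    """
    cells = {}
    for x, y, z in points:
        # Integer voxel index for each coordinate.
        key = (int(x // voxel_size), int(y // voxel_size), int(z // voxel_size))
        cells.setdefault(key, []).append((x, y, z))
    # One representative point per occupied voxel: the centroid of its members.
    return [tuple(sum(coord) / len(pts) for coord in zip(*pts))
            for pts in cells.values()]

# Three raw lidar returns: the first two share a voxel, the third stands alone.
cloud = [(0.01, 0.02, 0.0), (0.03, 0.01, 0.0), (1.5, 0.0, 0.2)]
down = voxel_downsample(cloud, voxel_size=0.5)  # two points remain
```

Downsampling like this keeps the spatial structure of a scan while bounding the number of points fed to detection or SLAM backbones.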
Duration: half day
Presenters:
Hang Qiu is a postdoctoral scholar in the Platform Lab at Stanford University. Previously, he received his Ph.D. from the University of Southern California and his B.S. from Shanghai Jiao Tong University. His research focuses on networked systems problems at the intersection of mobile sensing, edge computing, networking, and machine learning. His recent work enables cooperative perception among networked vehicles and infrastructure sensors, which can substantially augment perception and driving capabilities.
Dian Chen is a third-year Ph.D. student at UT Austin. His research interests lie in robotics, computer vision, machine learning, and autonomous driving. He previously graduated from UC Berkeley, majoring in applied mathematics and computer science.
Weiyue Wang is a software engineer on Waymo's Perception team. Before that, she was a Ph.D. student at the University of Southern California, working in the Computer Graphics and Immersive Technologies Lab under Prof. Ulrich Neumann. Her research focuses on computer vision, particularly 3D scene understanding and reconstruction.
Contact: Hang Qiu ([email protected]), Dian Chen ([email protected])
More information
Audience expectation and prerequisites:
- Students entering the 3D sensing field
- Basic knowledge of sensing systems and deep learning.