The first USC Symposium on the Futures of Robotics will be held on the USC campus (TCC 227) on December 7. The academic community is cordially invited. An agenda is below.
The symposium is a day-long set of talks by young- to mid-career roboticists breaking new ground in emerging areas of robotics and related fields. The symposium keynote will be given by Professor Henrik Christensen, a key driving force behind the creation of the newly announced National Robotics Initiative.
Talk outlines and biographies
Ashis Banerjee - Multi-Particle Path Planning using Optical Tweezers
Optical tweezers have emerged as one of the most promising non-contact manipulation techniques at small scales; they can successfully trap and transport objects in fluid media. In effect, they can be viewed as miniature robots made of focused light beams. Autonomous operation requires path planning, which is challenging due to the stochastic Brownian motion of the objects, noise in imaging-based measurements, and the need for fast control update rates.
I will discuss an approximate partially observable Markov decision process algorithm to compute near-optimal trap locations and velocities that minimize the expected transport time of individual dielectric particles by including collision avoidance and recovery steps. This algorithm is incorporated within a decoupled and prioritized framework to move multiple particles simultaneously, and an iterative bipartite graph matching algorithm is employed to optimally assign goal locations to target particles. Effective planner performance is demonstrated using both simulation and physical experiments with 2-micrometer-diameter silica beads in a holographic tweezers setup. Successful runs show that the planner is customizable and can transport specific particles efficiently by either circumventing or trapping other freely diffusing particles.
I will conclude by briefly presenting a regression algorithm for online estimation of near-optimal solutions of higher-dimensional planning problems involving concurrent transport of tens to hundreds of particles. This kind of path planning will play a significant role in automating biological cell culture studies by trapping the cells indirectly, using particles arranged in gripper-like configurations to avoid damage from direct laser exposure.
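The goal-assignment step in the abstract above — optimally matching target particles to goal locations via bipartite graph matching — can be sketched with an off-the-shelf Hungarian-algorithm solver. The function name and the Euclidean-distance cost below are illustrative assumptions, not the speaker's actual implementation:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def assign_goals(particles, goals):
    """Assign each particle to a distinct goal, minimizing total
    Euclidean travel distance (a stand-in for transport time)."""
    # cost[i, j] = distance from particle i to goal j
    cost = np.linalg.norm(particles[:, None, :] - goals[None, :, :], axis=2)
    rows, cols = linear_sum_assignment(cost)  # Hungarian algorithm
    return dict(zip(rows, cols)), cost[rows, cols].sum()

# Two particles, two goals: the crossed assignment costs zero.
mapping, total = assign_goals(np.array([[0.0, 0.0], [5.0, 5.0]]),
                              np.array([[5.0, 5.0], [0.0, 0.0]]))
```

In the actual planner this matching is applied iteratively as particles diffuse, rather than once as here.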
Ashis Gopal Banerjee is a Postdoctoral Associate in the Computer Science and Artificial Intelligence Laboratory at the Massachusetts Institute of Technology. He completed his Ph.D. in Mechanical Engineering at the University of Maryland, College Park, in 2009. Prior to that, he obtained his Master's degree at the University of Maryland in 2006 and his Bachelor's degree at the Indian Institute of Technology, Kharagpur, in 2004. He received the 2009 Best Dissertation Award from the Department of Mechanical Engineering and the 2009 George Harhalakis Outstanding Systems Engineering Graduate Student Award from the Institute for Systems Research at the University of Maryland. His research interests span a broad area covering planning and control under uncertainty, micro- and nano-manipulation, machine learning, biophysics, multi-scale physics-based modeling and simulation, and computational geometry.
Spring Berman - A Scalable Approach to Designing Robot Control Policies for Macroscopic Swarm Behaviors
In recent years, there has been a growing interest in developing swarm robotic systems, which would consist of hundreds or thousands of autonomous, resource-constrained robots, for applications including environmental monitoring, search-and-rescue, exploration, battlefield and disaster area communication, and construction. A key challenge in the development of robotic swarms is the design of robot controllers that reliably produce a target macroscopic outcome. In this talk, I present a top-down, scalable methodology for synthesizing decentralized robot control policies with probabilistic guarantees on performance. It applies most generally to arbitrarily spatially distributed robots that rely on local information, follow a velocity field, and exhibit a combination of inherent and intentional stochasticity in their motion and decisions to switch between tasks. The methodology is based on an abstraction of the swarm to an advection-diffusion-reaction continuous model, whose parameters can be optimized and mapped onto individual robot behaviors. I demonstrate the application of this approach to problems of swarm task allocation without inter-robot communication, product assembly in a swarm robotic manufacturing system, and commercial crop pollination by micro-robotic bees. I also present a study of the mechanics of group food retrieval in ants and a model of this behavior as a strategy for multi-robot collective transport.
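The macroscopic abstraction described above can be illustrated with a toy mean-field model: fractions of the swarm occupying two tasks evolve under constant switching rates. The rates and the simple Euler integration below are my own illustrative choices; the actual methodology uses a richer advection-diffusion-reaction model whose parameters are optimized and mapped back onto individual robots:

```python
import numpy as np

def simulate_task_switching(k12, k21, x0, dt=0.01, steps=10000):
    """Mean-field model of a swarm split between two tasks:
    dx1/dt = -k12*x1 + k21*x2,  dx2/dt = k12*x1 - k21*x2.
    Integrated with forward Euler; total population is conserved."""
    x = np.array(x0, dtype=float)
    K = np.array([[-k12, k21],
                  [k12, -k21]])   # rate (reaction) matrix
    for _ in range(steps):
        x = x + dt * K @ x
    return x

# Equal switching rates drive the swarm to a 50/50 task split.
x = simulate_task_switching(1.0, 1.0, [1.0, 0.0])
```

The equilibrium ratio x1/x2 = k21/k12 shows how choosing the rates sets the target macroscopic allocation, without any inter-robot communication.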
Spring Berman is a Computer Science postdoctoral researcher in Prof. Radhika Nagpal's group at Harvard University. Her research interests include controller synthesis for robotic swarms, analysis of collective behaviors in biology, and optimization of nanosystems for medical applications. She received a Ph.D. in Mechanical Engineering and Applied Mechanics in 2010 from the University of Pennsylvania, where her advisor was Prof. Vijay Kumar, and a B.S.E. in Mechanical and Aerospace Engineering in 2005 from Princeton University.
Bilge Mutlu - Helping Us Help Ourselves: Designing Effective Social Robots
Robots are unique in their ability to afford interaction using the full range of human communicative mechanisms. Research on human communication suggests that these mechanisms, when used effectively, elicit significant social, cognitive, and task outcomes such as improved learning, rapport, motivation, persuasion, and collaborative task performance. But how could we design robots that take full advantage of these mechanisms to improve our lives? In this talk, I will describe a research program aimed at building a computational understanding of these human mechanisms and using this understanding to design behavioral mechanisms for robots, with the goal of achieving these outcomes in human-robot interaction.
Bilge Mutlu is an assistant professor of computer science, psychology, and industrial engineering at the University of Wisconsin–Madison. He directs a research program on designing behavioral mechanisms for robots toward achieving social, cognitive, and task outcomes in key application domains such as education, collaborative work, health, and wellbeing. Dr. Mutlu is the recipient of multiple Best Paper Awards and an NSF CAREER Award. He received his PhD from Carnegie Mellon University's Human-Computer Interaction Institute.
Nathan Michael - Autonomous Navigation and Exploration with an Aerial Robot in Complex 3D Environments
In this talk we will discuss recent progress on autonomous navigation and exploration with a micro-aerial vehicle (quadrotor) in complex 3D indoor environments. Building on recent technological and algorithmic advancements, we will describe our methodology for autonomous localization, mapping, planning, and control for an aerial vehicle in 3D environments. To demonstrate the application of these methods, we will discuss results from recent field experiments in Sendai, Japan, where a team of ground and aerial vehicles collaboratively mapped three stories of an earthquake-damaged building. Given the ability to autonomously navigate in complex environments, we will discuss our approach to autonomous exploration with an aerial robot in multi-floor indoor environments. Toward this end, we develop a method for identifying regions of interest for further exploration that proves more suitable than existing exploration approaches, given the sensing capabilities of the vehicle in 3D environments. We will detail the key challenges in leveraging existing exploration approaches (e.g., frontier-based navigation) in 3D environments to motivate the development of our approach, review the methodology, and present experimental results supporting this claim.
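As background for the discussion of frontier-based navigation, a minimal 2D sketch of frontier detection is shown below. This is the classic baseline that the talk argues needs rethinking in 3D, not the speaker's method; the grid encoding is an illustrative assumption:

```python
import numpy as np

UNKNOWN, FREE, OCCUPIED = -1, 0, 1

def find_frontiers(grid):
    """Frontier cells: free cells 4-adjacent to unknown space.
    Exploration proceeds by repeatedly navigating to a frontier."""
    frontiers = []
    rows, cols = grid.shape
    for r in range(rows):
        for c in range(cols):
            if grid[r, c] != FREE:
                continue
            for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                nr, nc = r + dr, c + dc
                if 0 <= nr < rows and 0 <= nc < cols and grid[nr, nc] == UNKNOWN:
                    frontiers.append((r, c))
                    break
    return frontiers

# Free cells bordering the unknown right column are frontiers.
grid = np.array([[0, 0, -1],
                 [0, 1, -1],
                 [0, 0,  0]])
```

In 3D, a naive extension of this test produces frontiers a vehicle with limited sensing cannot usefully visit, which motivates the region-of-interest approach in the talk.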
Nathan Michael is a Research Assistant Professor in the Department of Mechanical Engineering at the University of Pennsylvania. He received a Ph.D. from the Department of Mechanical Engineering at the University of Pennsylvania in 2008 and transitioned to a position in the Research Faculty in 2010. His research interests include the topics of estimation and control for ground and aerial robots with extensions to multi-robot systems.
Henrik Christensen - Robots as a mechanism to empower people
Over the last five decades, robots have entered our daily lives, primarily on the factory floor, but more recently also in our homes for menial tasks such as vacuuming. Early applications of robots have been for dirty, dull, and dangerous tasks. Moving forward, however, it is clear that robots offer a unique technology for enabling people to perform tasks they could not perform before. This empowerment takes many forms, such as increased strength, higher efficiency, and operation at a distance. Robot technology will be used to empower our workforce so that American companies remain competitive, to allow people to choose where they spend a third of their lives, and to remove first responders and soldiers from immediate danger. The National Robotics Initiative was designed to address these kinds of problems.
In this presentation we will discuss the general Co-X program that is the focus of the National Robotics Initiative. We will discuss current research on learning by demonstration that provides factory workers with efficient ways to program new tasks. Flexible, sensor-based robot systems combined with human-robot interaction are essential to building next-generation industrial systems. In addition, we will discuss how design, interaction, and behavior are key to the deployment of next-generation home robot systems, and provide examples of how these methods come together in the design of different types of home robots.
Finally, we will discuss a vision for how robots will become an enabling technology that parallels the Internet in its impact on our daily lives. Achieving this will require borrowing from and integrating technologies from a broad range of other fields, such as mechanical design, control engineering, computer science, and human interaction, to deliver next-generation systems.
Henrik I. Christensen is the KUKA Chair of Robotics and a Distinguished Professor of Computing at Georgia Tech, where he is also the Director of Robotics. Dr. Christensen received his initial training in Mechanical Engineering. He was awarded M.Sc. and Ph.D. degrees in Electrical Engineering from Aalborg University in 1987 and 1990, respectively. He has since held positions at Aalborg University, the University of Pennsylvania, and the Royal Institute of Technology, prior to joining Georgia Tech. The main focus of his research is a systems approach to robots and computational perception. He has published more than 250 contributions across AI, robotics, and computer vision. Dr. Christensen was the founder and coordinator of the European Network of Excellence in Robotics, 1999-2006. He is the founder and coordinator of the US Virtual Organization for Robotics. He also serves as a consultant and advisor to companies and agencies across three continents.
Chad Jenkins - Pose, Gesture, and Person Recognition for Human-Robot Teams
A central aim of autonomous robotics is to improve the physical productivity of human users through their collaboration with robotic partners. Communication in such human-robot teams is a critical capability for giving instructions to robots, responding to a user's intentions, and coordinating group behavior. In this talk, I will present our work in recognizing non-verbal features from humans, such as pose and gesture, using depth-based cameras on mobile robotic platforms. Using predictive models of human motion and depth sensing, robots are able to recognize, follow, and respond to commands from moving human users in real time, across indoor and outdoor environments. I will also discuss our complementary effort to increase the physical plausibility of our recognition methods using dynamical humanoid simulation for prediction.
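A minimal sketch of the kind of predictive motion model the abstract mentions is given below. The constant-velocity assumption and the function name are mine for illustration; the speaker's actual models are richer than this:

```python
import numpy as np

def predict_constant_velocity(positions, dt, horizon):
    """Extrapolate a tracked person's position, assuming constant
    velocity estimated from the last two observations. Such a
    prediction lets a follower robot anticipate rather than trail."""
    v = (positions[-1] - positions[-2]) / dt   # finite-difference velocity
    return positions[-1] + v * horizon

# Person moved 1 m in 1 s along x: predict 2 s ahead.
p = predict_constant_velocity(np.array([[0.0, 0.0], [1.0, 0.0]]),
                              dt=1.0, horizon=2.0)
```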
Stefanie Tellex - Understanding Language of Movement and Manipulation
Natural language is a compelling modality for controlling complex systems such as robots, with its promise of powerful, intuitive interaction. However, robustly understanding language from untrained users is a challenging problem. In this talk I describe a probabilistic approach to understanding natural language commands given to robots. The framework, called Generalized Grounding Graphs, defines a probabilistic graphical model that maps between constituents in the language and objects and actions in the external world. The framework learns models for the meanings of complex verbs such as "put" and "take," as well as spatial relations such as "on" and "to." The model allows efficient inference and learning by using the compositional structure of a natural language command to factor the distribution over interpretations. This factorization enables it to compose learned word meanings and understand novel commands that it has never encountered before. The system is trained and evaluated using parallel corpora of language paired with robot actions, collected using crowdsourcing. Grounding graphs are a first step towards robots that can robustly interact with a human partner using natural language.
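The compositional factorization idea can be illustrated with a toy scoring function: the score of a candidate interpretation is a product of per-constituent factor scores, one per node of the command's parse. The constituent names, grounding labels, and factor values below are invented for illustration and are not from the actual system:

```python
import math

# Hypothetical learned factor scores phi(constituent, grounding).
FACTORS = {
    ("put", "place_action"): 0.9,
    ("the ball", "ball_1"): 0.8,
    ("on the table", "on(ball_1, table_1)"): 0.7,
}

def score_grounding(pairs):
    """Factored scoring: the interpretation's score is the product of
    independent factor scores, mirroring how the graphical model
    decomposes over the command's compositional structure."""
    return math.prod(FACTORS.get(p, 1e-6) for p in pairs)

s = score_grounding([("put", "place_action"),
                     ("the ball", "ball_1"),
                     ("on the table", "on(ball_1, table_1)")])
```

Because each factor is local to one constituent, word meanings learned in one command can be reused to score novel combinations, which is the point of the factorization.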
Andrea L. Thomaz - Designing Learning Interactions for Robots
In this talk I present recent work from the Socially Intelligent Machines Lab at Georgia Tech. One focus of our lab is Socially Guided Machine Learning: building robot systems that can learn from everyday human teachers. We look at standard machine learning interactions and redesign interfaces and algorithms to support the collection of learning input from naive humans. This talk covers results on high-level task goal learning, low-level skill learning, and active learning interactions, using several humanoid robot platforms.
Andrea L. Thomaz is an Assistant Professor of Interactive Computing at the Georgia Institute of Technology. She directs the Socially Intelligent Machines lab, which is affiliated with the Robotics and Intelligent Machines (RIM) Center and with the Graphics Visualization and Usability (GVU) Center. She earned a B.S. in Electrical and Computer Engineering from the University of Texas at Austin in 1999, and Sc.M. and Ph.D. degrees from MIT in 2002 and 2006. Dr. Thomaz is published in the areas of Artificial Intelligence, Robotics, Human-Robot Interaction, and Human-Computer Interaction. She received an ONR Young Investigator Award in 2008, and an NSF CAREER award in 2010. Her work has been featured on the front page of the New York Times, and in 2009 she was named one of MIT Technology Review’s Top 35 under 35.
Luis Sentis - Algorithms to Generate Extreme Dynamic Locomotion Maneuvers for Bipedal Robots
Extreme locomotion involves quickly and efficiently negotiating very rough terrain by walking, running, or climbing, as well as leaping using vertical walls and inclined surfaces. Little is known about the nature of these skills, due to the lack of modeling tools and the complexity of their analysis. Yet the ability to maneuver in extreme terrain is critical to the advancement of autonomous legged robots and medical assistive devices. In this talk, I will present new mathematical tools to model the dynamics of extreme locomotion maneuvers, as well as numerical integration and computational techniques to automatically generate stable gait patterns in these challenging environments.
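As context for the numerical integration mentioned above, a common abstraction in bipedal gait generation is the linear inverted pendulum model. The sketch below is a generic forward-Euler step of that model, included as background rather than as the speaker's formulation; parameter values are illustrative:

```python
def lip_step(x, xdot, z0=0.8, g=9.81, dt=0.01):
    """One Euler integration step of the linear inverted pendulum
    (LIP) model: with the center of mass held at height z0, the
    horizontal dynamics reduce to xddot = (g / z0) * x."""
    omega2 = g / z0          # squared natural frequency
    xddot = omega2 * x       # CoM accelerates away from the foot
    return x + dt * xdot, xdot + dt * xddot

# CoM 0.1 m ahead of the stance foot, initially at rest.
x, xdot = lip_step(0.1, 0.0)
```

Gait generators integrate such a model forward to decide where and when to place the next footstep so the divergent motion stays bounded.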
Dr. Luis Sentis is an Assistant Professor at the University of Texas at Austin, where he directs the Human Centered Robotics Laboratory. His research focuses on foundations for the compliant control of humanoid robots, algorithms to generate extreme dynamic locomotion, and building robots for educating students in mechatronics. He holds Ph.D. and M.S. degrees in Electrical Engineering from Stanford University, where he developed leading work in theoretical and computational methods for the compliant control of humanoid robots. Prior to that, he worked in Silicon Valley in the area of clean-room automation.
Sanjeev Koppal - Toward Micro Vision Sensors
Achieving computer vision on micro-scale robots is a challenge. On these platforms, the power and mass constraints are severe enough for even the most common computations (matrix inversion, convolution, etc.) to be difficult. This work proposes and analyzes a class of miniature vision sensors that can help overcome these constraints. These sensors reduce power requirements through template-based optical convolution, and they enable a wide field-of-view within a small form through a novel optical design. We describe the trade-offs between the field of view, volume, and mass of these sensors and we provide analytic tools to navigate the design space. We also demonstrate milli-scale prototypes for computer vision tasks such as locating edges, tracking targets, and detecting faces.
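The template-based convolution these sensors perform optically can be emulated digitally, which makes the trade-off concrete: the optics compute the inner products below at essentially zero power. The direct 'valid'-region implementation and the edge template are my illustrative choices:

```python
import numpy as np

def convolve2d_valid(image, template):
    """Direct 2D convolution over the 'valid' region — the operation
    a template-based optical sensor performs in light rather than
    in arithmetic."""
    th, tw = template.shape
    ih, iw = image.shape
    out = np.zeros((ih - th + 1, iw - tw + 1))
    k = template[::-1, ::-1]  # flip kernel for true convolution
    for r in range(out.shape[0]):
        for c in range(out.shape[1]):
            out[r, c] = np.sum(image[r:r + th, c:c + tw] * k)
    return out

# A [-1, 1] template responds only at the vertical edge.
img = np.array([[0.0, 0.0, 1.0, 1.0],
                [0.0, 0.0, 1.0, 1.0]])
edges = convolve2d_valid(img, np.array([[-1.0, 1.0]]))
```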