Robotics Afternoon

About

The Robotics Afternoon is part of the summer course organized by the Center for Brains, Minds and Machines. The event will take place in Woods Hole, MA, on August 20th, starting at 1:30 PM.

The afternoon will feature a series of five talks covering different aspects of modern research in robotics.
After the talks we will host a panel discussion among the invited speakers.
The event will close with a session of live demos by the iCub robot, which will be in Woods Hole for the whole duration of the summer course!

Venue

The Robotics Afternoon will take place in the Lillie Auditorium.

Date & Time

The Robotics Afternoon will be on August 20th, starting at 1:30 PM.

Speakers

Russ Tedrake
MIT

John Leonard
MIT

Tony Prescott
University of Sheffield

Giorgio Metta
Istituto Italiano di Tecnologia

Stefanie Tellex
Brown University

Schedule

TIME   SPEAKER           TITLE
13:30  Russ Tedrake      MIT's Entry in the DARPA Robotics Challenge
14:10  John Leonard      Mapping, Localization, and Self-Driving Vehicles
14:50  Tony Prescott     Understanding sensorimotor co-ordination and brain architecture through mammal-like robots
15:30  Break
15:50  Stefanie Tellex   Human-Robot Collaboration
16:30  Giorgio Metta     iCub, an open source platform for research in robotics & AI
17:15  Panel Discussion  Moderated by Prof. Patrick H. Winston
18:00  iCub Demos        iCub interactive object learning

Abstracts

Russ Tedrake

MIT's Entry in the DARPA Robotics Challenge

On June 5-6 of this year, 25 of the most advanced robots in the world descended on Pomona, California to compete in the final DARPA Robotics Challenge competition (http://theroboticschallenge.org). Each of these robots was sent into a mock disaster response situation to perform complex locomotion and manipulation tasks with limited power and comms. Team MIT used the opportunity to showcase the power of our relatively formal approaches to perception, estimation, planning, and control.

In this talk, I’ll dig into a number of technical research nuggets that have come to fruition during this effort, including an optimization-based planning and control method for robust and agile online gait and manipulation planning, efficient mixed-integer optimization for negotiating rough terrain, convex relaxations for grasp optimization, powerful real-time perception systems, and essentially drift-free state estimation. I’ll discuss the formal and practical challenges of fielding these on a very complex (36+ degree-of-freedom) humanoid robot that absolutely had to work on game day. Relevant URLs: http://drc.mit.edu, http://youtube.com/mitdrc.

John Leonard

Mapping, Localization, and Self-Driving Vehicles

This talk will discuss the critical role of mapping and localization in the development of self-driving vehicles. After a discussion of some of the recent amazing progress and open technical challenges in the development of self-driving vehicles, we will discuss the past, present, and future of Simultaneous Localization and Mapping (SLAM) in robotics. We will review the history of SLAM research and discuss some of the major challenges in SLAM, including choosing a map representation, developing algorithms for efficient state estimation, and solving for data association and loop closure. We will also present recent results on real-time dense mapping using RGB-D cameras and object-based mapping in dynamic environments. Finally, we will discuss the potential connections between SLAM and models for grid cells and place cells in the entorhinal cortex (based on our collaboration with Mike Hasselmo's group at BU).
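
As a toy illustration of the loop-closure idea (a sketch for intuition, not material from the talk), the Python snippet below solves a one-dimensional pose-graph problem: noisy odometry constraints let the position estimate drift, and a single loop-closure constraint, stating that the robot returned to its starting point, pulls the whole trajectory back into consistency. All measurements and weights are invented for the example.

    # Toy 1-D pose-graph SLAM: estimate poses x0..x3 from noisy odometry
    # plus one loop-closure constraint (all numbers are made up).
    import numpy as np

    n = 4
    # (i, j, measured displacement x_j - x_i, information weight)
    constraints = [
        (0, 1, 1.1, 1.0),     # noisy odometry steps
        (1, 2, 0.9, 1.0),
        (2, 3, 1.2, 1.0),
        (0, 3, 0.0, 100.0),   # loop closure: the robot is back at the start
    ]

    # Linear least squares: minimize sum_k w_k * ((x_j - x_i) - z_k)^2,
    # assembled directly as the normal equations H x = b.
    H = np.zeros((n, n))
    b = np.zeros(n)
    for i, j, z, w in constraints:
        H[i, i] += w; H[j, j] += w
        H[i, j] -= w; H[j, i] -= w
        b[i] -= w * z
        b[j] += w * z

    H[0, 0] += 1e6            # gauge prior: pin x0 at the origin

    x = np.linalg.solve(H, b)
    print("optimized poses:", np.round(x, 3))
    # Odometry alone would give [0, 1.1, 2.0, 3.2]; the loop closure pulls
    # x3 back toward 0 and spreads the accumulated error over all steps.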

Tony Prescott

Understanding sensorimotor co-ordination and brain architecture through mammal-like robots

For the past decade, my lab in Sheffield has been investigating sensorimotor co-ordination in mammals through the lens of vibrissal (whiskered) touch in rodents, using the methodology of embodied computational neuroscience: biomimetic robots that synthesize and test models of mammalian brain architecture. Our work to date has focused on five major brain sub-systems and their likely roles in sensorimotor co-ordination for active sensing: the superior colliculus, basal ganglia, somatosensory cortex, cerebellum, and hippocampus. With respect to each, we have shown how embodied modelling can help elucidate its likely function in awake, behaving animals. Our research also demonstrates how the appropriate co-ordination of these sub-systems, within a model of brain architecture, can give rise to integrated behaviour in life-like whiskered robots. More recently we have extended this work to investigate human cognitive architecture in the iCub robot, particularly in relation to episodic memory and the sense of self, and have developed a commercial biomimetic robot (MiRo) that encapsulates many of the principles investigated in our research robot platforms. This talk will give a brief overview of these different strands of our neurorobotic research.

Stefanie Tellex

Human-Robot Collaboration

Creating robots that collaborate with humans requires the ability to carry out complex actions in real-world environments by actively coordinating with people. A critical capability is robustly performing actions in real-world environments; I will discuss our work toward enabling instance-based mapping and pick-and-place for objects using vision and active object exploration. Second, we need robots that can plan in very large state/action spaces, because people can and do command robots to carry out both very high-level actions (“clean up the table”) and very low-level actions (“move your hand up a bit”). Inferring plans in these spaces requires reasoning over multiple levels of abstraction. In my lab we are using Minecraft as a test domain for planning with multiple levels of abstraction in a very large state/action space, and for learning to plan in large state spaces from experience. Finally, robots need to actively detect and recover from failures to communicate by participating in social feedback loops with their human partner. Humans continuously monitor their partner for understanding and continuously communicate their own understanding of what their partner is saying and doing; as soon as a failure is detected, either partner initiates a recovery process, such as asking a question. I will discuss our approach to this problem: incrementally interpreting a person's language and gesture in real time, enabling a robot to produce a real-time response.
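
As a rough sketch of what planning over two levels of abstraction can look like (the rooms, layout, and names below are invented for illustration and are not the lab's actual Minecraft system), one can first plan over an abstract graph of rooms and only then refine each room-to-room step into primitive grid moves:

    # Two-level planning sketch: plan abstractly over rooms, then refine
    # each room transition into grid moves (hypothetical map and names).
    from collections import deque

    def bfs(start, goal, neighbors):
        # Breadth-first search returning a shortest path, or None.
        frontier = deque([[start]])
        visited = {start}
        while frontier:
            path = frontier.popleft()
            if path[-1] == goal:
                return path
            for nxt in neighbors(path[-1]):
                if nxt not in visited:
                    visited.add(nxt)
                    frontier.append(path + [nxt])
        return None

    # High level: an abstract graph of which rooms connect to which.
    room_graph = {"kitchen": ["hall"], "hall": ["kitchen", "lab"], "lab": ["hall"]}

    # Low level: the doorway cell used for each room transition on a 5x5 grid.
    doorways = {("kitchen", "hall"): (2, 0), ("hall", "lab"): (2, 4)}

    def grid_neighbors(cell):
        x, y = cell
        return [(x + dx, y + dy)
                for dx, dy in [(1, 0), (-1, 0), (0, 1), (0, -1)]
                if 0 <= x + dx < 5 and 0 <= y + dy < 5]

    # Plan abstractly first; the short room-level plan then scopes each
    # low-level search to a single doorway-to-doorway leg.
    rooms = bfs("kitchen", "lab", lambda r: room_graph[r])
    pos = (0, 0)
    for here, there in zip(rooms, rooms[1:]):
        door = doorways[(here, there)]
        leg = bfs(pos, door, grid_neighbors)
        print(f"{here} -> {there}: {leg}")
        pos = door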

Giorgio Metta

iCub, an open source platform for research in robotics & AI

I will present the iCub humanoid, a robotic platform designed for research in embodied cognition. At 104 cm tall, the iCub is the size of a three-and-a-half-year-old child. It can crawl on all fours, walk, and sit up to manipulate objects. Its hands have been designed to support sophisticated manipulation skills. The iCub is distributed as Open Source under the GPL/LGPL licenses and can now count on a worldwide community of enthusiastic developers. The entire design is available for download from the project homepage and repository (http://www.iCub.org). More than 25 robots have been built so far, and they are in use in laboratories in Europe, the US, Korea, and Japan. It is one of the few platforms in the world with sensitive full-body skin for handling physical interaction with the environment, including with people.

The iCub stance on artificial intelligence posits that manipulation plays a fundamental role in the development of cognitive capability. Since many of these basic skills are not ready-made at birth but are developed during ontogenesis, we aim to test and develop this paradigm through the creation of a child-like humanoid robot: the iCub. This “baby” robot is meant to act in cognitive scenarios, performing tasks useful for learning while interacting with the environment and humans. Its small, compact size (104 cm tall, approximately 25 kg, and fitting within the volume of a child) and high number (53) of degrees of freedom, combined with the Open Source approach, distinguish the iCub from other humanoid robotics projects developed worldwide.

iCub

The iCub is a humanoid robot developed at the Istituto Italiano di Tecnologia (IIT) as part of the EU project RobotCub and subsequently adopted by more than 20 laboratories worldwide.

It has 53 motors that move the head, arms & hands, waist, and legs. It can see and hear using cameras and microphones, and it has senses of proprioception (body configuration) and movement (via accelerometers and gyroscopes). The iCub is also equipped with artificial skin, which gives it a sense of touch and allows it to gauge how much force it exerts on the environment.
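
The iCub's software runs on the open-source YARP middleware, in which sensor and motor streams are published on named network ports. As a minimal sketch of what reading the robot's sensors looks like (assuming the YARP Python bindings are installed and an iCub simulator is running; the port name "/icubSim/head/state:o" is the simulator's head-encoder stream and may vary by setup):

    # Minimal YARP sketch: read one sample of head joint angles from an
    # iCub simulator (port name and setup are assumptions; adjust as needed).
    import yarp

    yarp.Network.init()

    # Open a local port and connect it to the robot's head encoder stream.
    reader = yarp.BufferedPortBottle()
    reader.open("/demo/head_reader")
    yarp.Network.connect("/icubSim/head/state:o", "/demo/head_reader")

    bottle = reader.read()        # blocking read of one encoder sample
    if bottle is not None:
        angles = [bottle.get(i).asFloat64() for i in range(bottle.size())]
        print("head joint angles (deg):", angles)

    reader.close()
    yarp.Network.fini()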

Organizers

Carlo Ciliberto

Giorgio Metta
Istituto Italiano di Tecnologia

Tomaso Poggio
MIT