AI Begins to Understand the 3-D World [MIT Technology Review]

December 9, 2016

"Research on artificial intelligence moves from 2-D to 3-D representations of the world—work that could lead to big advances in robotics and automated driving.

There’s been some stunning progress in artificial intelligence of late, but it’s been surprisingly flat.

Now AI researchers are moving beyond two-dimensional images and pixels. Instead they’re building systems capable of picturing the three-dimensional world and taking action. The work could have a big impact on robotics and self-driving cars, helping to make machines that can learn how to act more intelligently in the real world.

“An exciting and important trend is the move in learning-based vision systems from just doing things with images to doing things with three-dimensional objects,” says Josh Tenenbaum, a professor in MIT’s Department of Brain and Cognitive Sciences. “That includes seeing objects in depth and modeling whole solid objects—not just recognizing that this pattern of pixels is a dog or a chair or table.”

Tenenbaum and colleagues used a popular machine-learning technique known as generative adversarial modeling to have a computer learn about the properties of three-dimensional space from examples. It could then generate new objects that are realistic and physically accurate. The team presented the work this week at the Neural Information Processing Systems conference in Barcelona, Spain..."
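The adversarial setup the article describes pairs a generator, which turns random noise into candidate 3-D shapes, against a discriminator that tries to tell generated shapes from real ones; each network improves by competing with the other. A heavily simplified sketch of that objective is below, using single linear layers over a tiny 4×4×4 voxel grid. All sizes and models here are illustrative assumptions; the actual research system uses deep networks over much larger volumetric grids.

```python
import numpy as np

rng = np.random.default_rng(0)

VOX = 4    # toy 4x4x4 voxel grid (real systems use far finer grids)
Z_DIM = 8  # dimensionality of the random noise input

# Generator: noise vector -> voxel occupancy probabilities.
# A single linear layer + sigmoid stands in for a deep network.
W_g = rng.normal(0.0, 0.1, size=(Z_DIM, VOX**3))

def generator(z):
    return 1.0 / (1.0 + np.exp(-(z @ W_g)))  # shape (batch, 64)

# Discriminator: flattened voxel grid -> probability the shape is "real".
W_d = rng.normal(0.0, 0.1, size=(VOX**3, 1))

def discriminator(v):
    return 1.0 / (1.0 + np.exp(-(v @ W_d)))

# "Real" training shapes: solid blocks filling the lower half of the grid.
real = np.zeros((2, VOX, VOX, VOX))
real[:, :VOX // 2] = 1.0
real = real.reshape(2, -1)

# Generated (fake) shapes from random noise.
fake = generator(rng.normal(size=(2, Z_DIM)))

# The adversarial game: the discriminator minimizes d_loss
# (classify real as real, fake as fake), while the generator
# minimizes g_loss (fool the discriminator).
d_loss = -np.mean(np.log(discriminator(real)) + np.log(1.0 - discriminator(fake)))
g_loss = -np.mean(np.log(discriminator(fake)))
print(f"discriminator loss: {d_loss:.3f}, generator loss: {g_loss:.3f}")
```

Training would alternate gradient updates on the two sets of weights until the generated voxel grids become hard to distinguish from real shapes.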

Read the full article on MIT Technology Review's website through the link below.