Zoom Better to See Clearer: Human Part Segmentation with Auto Zoom Net

Title: Zoom Better to See Clearer: Human Part Segmentation with Auto Zoom Net
Publication Type: Conference Paper
Year of Publication: 2016
Authors: Xia, F., Wang, P., Chen, L.-C., Yuille, A.
Conference Name: ECCV
Abstract:

Parsing articulated objects, e.g. humans and animals, into semantic parts (e.g. body, head, arms, etc.) from natural images is a challenging and fundamental problem in computer vision. A big difficulty is the large variability of scale and location for objects and their corresponding parts. Even limited mistakes in estimating scale and location will degrade the parsing output and cause errors in boundary details. To tackle these difficulties, we propose a “Hierarchical Auto-Zoom Net” (HAZN) for object part parsing which adapts to the local scales of objects and parts. HAZN is a sequence of two “Auto-Zoom Nets” (AZNs), each employing fully convolutional networks that perform two tasks: (1) predict the locations and scales of object instances (the first AZN) or their parts (the second AZN); (2) estimate the part scores for predicted object instance or part regions. Our model can adaptively “zoom” (resize) predicted image regions into their proper scales to refine the parsing. We conduct extensive experiments over the PASCAL part datasets on humans, horses, and cows. For humans, our approach significantly outperforms the state of the art by 5% mIOU and is especially better at segmenting small instances and small parts. We obtain similar improvements for parsing cows and horses over alternative methods. In summary, our strategy of first zooming into objects and then zooming into parts is very effective. It also enables us to process different regions of the image at different scales adaptively so that, for example, we do not need to waste computational resources scaling the entire image.
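The abstract describes a two-stage control flow: the first Auto-Zoom Net predicts object locations and scales and zooms into them, and the second does the same for parts before scoring. The sketch below illustrates that flow in plain Python/NumPy. All function names (predict_regions, score_parts, resize) and the dummy region proposals are hypothetical stand-ins, not the authors' code; in the paper, each of the two tasks is performed by a fully convolutional network.

import numpy as np

def resize(img, out_h, out_w):
    # Nearest-neighbor resize; a stand-in for proper interpolation.
    h, w = img.shape[:2]
    ys = (np.arange(out_h) * h // out_h).clip(0, h - 1)
    xs = (np.arange(out_w) * w // out_w).clip(0, w - 1)
    return img[ys][:, xs]

def predict_regions(patch):
    # Hypothetical stand-in for an AZN's first task: predict locations and
    # scales of object instances (or parts). Returns one dummy proposal
    # covering the whole patch at 1.5x zoom; the paper uses an FCN here.
    h, w = patch.shape[:2]
    return [((0, 0, w, h), 1.5)]  # ((x0, y0, x1, y1), zoom factor)

def score_parts(patch, num_parts):
    # Hypothetical stand-in for an AZN's second task: per-pixel part
    # scores for a zoomed region (dummy zeros here; the paper uses an FCN).
    h, w = patch.shape[:2]
    return np.zeros((h, w, num_parts))

def hazn(image, num_parts=4):
    # Two chained Auto-Zoom stages: zoom into objects, then into parts.
    h, w = image.shape[:2]
    scores = np.zeros((h, w, num_parts))
    for (ox0, oy0, ox1, oy1), oscale in predict_regions(image):    # stage 1: objects
        obj = resize(image[oy0:oy1, ox0:ox1],
                     int((oy1 - oy0) * oscale), int((ox1 - ox0) * oscale))
        for (px0, py0, px1, py1), pscale in predict_regions(obj):  # stage 2: parts
            part = resize(obj[py0:py1, px0:px1],
                          int((py1 - py0) * pscale), int((px1 - px0) * pscale))
            s = score_parts(part, num_parts)
            # Map the part box back to image coordinates (undo the object
            # zoom) and paste the un-zoomed scores into the score map.
            ix0, iy0 = ox0 + int(px0 / oscale), oy0 + int(py0 / oscale)
            ix1, iy1 = ox0 + int(px1 / oscale), oy0 + int(py1 / oscale)
            scores[iy0:iy1, ix0:ix1] += resize(s, iy1 - iy0, ix1 - ix0)
    return scores.argmax(-1)  # per-pixel part label

labels = hazn(np.zeros((64, 64, 3)))  # toy input; returns a 64x64 label map

The point the sketch preserves is that each predicted region is processed at its own scale: small instances or parts are enlarged before scoring, while the rest of the image is never upsampled.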

CBMM Relationship: 

  • CBMM Funded