%0 Generic
%D 2015
%T Parsing Occluded People by Flexible Compositions
%A Xianjie Chen
%A Alan Yuille
%X This paper presents an approach to parsing humans when there is significant occlusion. We model humans using a graphical model with a tree structure, building on recent work [32, 6], and exploit the connectivity prior that, even in the presence of occlusion, the visible nodes form a connected subtree of the graphical model. We call each connected subtree a flexible composition of object parts. This involves a novel method for learning occlusion cues. During inference we need to search over a mixture of different flexible models. By exploiting part sharing, we show that this inference can be done extremely efficiently, requiring only twice as many computations as searching for the entire object (i.e., not modeling occlusion). We evaluate our model on the standard benchmark “We Are Family” Stickmen dataset and obtain significant performance improvements over the best alternative algorithms.
%B Computer Vision and Pattern Recognition (CVPR)
%8 06/1/2015
%G eng
%1 arXiv:1412.1526
%2 http://hdl.handle.net/1721.1/100199
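
A minimal sketch (not the authors' code) of the connected-subtree ("flexible composition") idea from the abstract above: a toy dynamic program on a tree-structured part model in which each child subtree may be dropped as occluded, so the visible parts always stay connected. The root part is fixed here for brevity, and all part names, scores, and the occlusion penalty are hypothetical; the paper's part-sharing inference additionally reuses these child computations across compositions anchored at any part.

```python
# Toy sketch of scoring "flexible compositions" on a tree-structured part model.
# Each child subtree may be dropped as occluded, so visible parts form a
# connected subtree. Not the authors' implementation; all values are made up.

from typing import Dict, List, Tuple

def best_flexible_score(
    node: int,
    children: Dict[int, List[int]],          # tree: part -> child parts
    unary: Dict[int, float],                  # appearance score of each visible part
    pairwise: Dict[Tuple[int, int], float],   # score for keeping edge (parent, child)
    occlusion_penalty: float = 0.0,           # cost of truncating a child subtree
) -> float:
    """Best score over flexible compositions rooted at `node` (node visible)."""
    score = unary[node]
    for child in children.get(node, []):
        # Per-child choice: keep the child (and recursively its best
        # composition) or drop its whole subtree as occluded.
        keep = pairwise[(node, child)] + best_flexible_score(
            child, children, unary, pairwise, occlusion_penalty
        )
        drop = -occlusion_penalty
        score += max(keep, drop)
    return score

if __name__ == "__main__":
    # Toy 4-part "person": 1=torso (root), 0=head, 2=left arm, 3=right arm
    children = {1: [0, 2, 3]}
    unary = {0: 1.2, 1: 2.0, 2: -0.5, 3: 0.8}            # arm 2 looks occluded
    pairwise = {(1, 0): 0.3, (1, 2): 0.1, (1, 3): 0.2}
    print(best_flexible_score(1, children, unary, pairwise, occlusion_penalty=0.1))
```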

%0 Generic
%D 2014
%T Detect What You Can: Detecting and Representing Objects using Holistic Models and Body Parts
%A Xianjie Chen
%A Roozbeh Mottaghi
%A Xiaobai Liu
%A Sanja Fidler
%A Raquel Urtasun
%A Alan Yuille
%X Detecting objects becomes difficult when we need to deal with large shape deformation, occlusion, and low resolution. We propose a novel approach to i) handle large deformations and partial occlusions in animals (as examples of highly deformable objects), ii) describe them in terms of body parts, and iii) detect them when their body parts are hard to detect (e.g., animals depicted at low resolution). We represent the holistic object and body parts separately and use a fully connected model to arrange templates for the holistic object and body parts. Our model automatically decouples the holistic object or body parts from the model when they are hard to detect. This enables us to represent a large number of holistic object and body part combinations to better deal with different “detectability” patterns caused by deformation, occlusion, and/or low resolution. We apply our method to the six animal categories in the PASCAL VOC dataset and show that our method significantly improves the state of the art (by 4.1% AP) and provides a richer representation for objects. During training we use annotations for body parts (e.g., head, torso, etc.), making use of a new dataset of fully annotated object parts for PASCAL VOC 2010, which provides a mask for each part.
%8 06/2014
%1 arXiv:1406.2031
%2 http://hdl.handle.net/1721.1/100179
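
A minimal sketch (not the authors' model) of the decoupling idea described in the abstract above: the holistic template and each body-part template either contribute their detection score or fall back to a fixed bias when their response is too weak, so hard-to-detect components do not force a bad match. The function name, component names, scores, and the bias value are all hypothetical.

```python
# Toy sketch of combining a holistic-object score with body-part scores, where
# any component whose template response is too weak is "decoupled" (replaced by
# a fixed fallback bias). Not the authors' implementation; all values are made up.

from typing import Dict

def combined_detection_score(
    holistic_score: float,
    part_scores: Dict[str, float],
    decouple_bias: float = -0.2,     # score assigned to a decoupled component
) -> float:
    """Sum of component scores, with each component free to decouple."""
    total = max(holistic_score, decouple_bias)        # holistic template may decouple
    for score in part_scores.values():
        total += max(score, decouple_bias)            # each body part may decouple
    return total

if __name__ == "__main__":
    # Toy example: the legs are hard to detect (e.g. occluded), so they decouple
    print(combined_detection_score(
        holistic_score=1.4,
        part_scores={"head": 0.6, "torso": 0.9, "legs": -1.3},
    ))
```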