Partially Occluded Hands: A challenging new dataset for single-image hand pose estimation

Title: Partially Occluded Hands: A challenging new dataset for single-image hand pose estimation
Publication Type: Conference Paper
Year of Publication: 2018
Authors: Myanganbayar, B., Mata, C., Dekel, G., Katz, B., Ben-Yosef, G., Barbu, A.
Conference Name: Asian Conference on Computer Vision (ACCV)
Date Published: 12/2018
Keywords: dataset, partial occlusion, RGB hand-pose reconstruction

Recognizing the pose of hands matters most when hands are interacting with other objects. To understand how well both machines and humans perform on single-image 2D hand-pose reconstruction from RGB images, we collected a challenging dataset of hands interacting with 148 objects. We used a novel methodology that captures the same hand in the same pose both with the object present and occluding the hand, and without the object occluding the hand. Additionally, we collected a wide range of grasps for each object, designing the data-collection methodology to ensure this diversity. Using this dataset, we measured the performance of two state-of-the-art hand-pose recognition methods, showing that both are extremely brittle when faced with even light occlusion from an object. This is not evident in previous datasets because they often avoid hand-object occlusions and because they are collected from videos where hands are often between objects and mostly unoccluded. We annotated a subset of the dataset and used it to show that humans are robust with respect to occlusion, and also to characterize human hand perception, the space of grasps that appear to be considered, and the accuracy of reconstructing occluded portions of hands. We expect that such data will be of interest both to the vision community for developing more robust hand-pose algorithms and to the robotic grasp-planning community for learning such grasps. The dataset is available at

Research Area: 

CBMM Relationship: 

  • CBMM Funded