A collaborative research group from Google, Stanford, and Johns Hopkins has proposed “Auto-DeepLab,” a new method that uses hierarchical Neural Architecture Search (NAS) for semantic image segmentation. The project team includes top AI researchers Fei-Fei Li, Director of the Stanford Vision Lab, and Alan Yuille, Director of the UCLA Center for Cognition, Vision, and Learning.
Semantic image segmentation is an important computer vision task that assigns a semantic label to every pixel in an image. Neural Architecture Search is a key AutoML process that has already been used successfully for image classification, and the team explored ways to extend NAS to dense image prediction problems. Existing methods usually search only the cell structure and hand-design the outer network structure. The researchers propose searching the network-level structure in addition to the cell-level structure, since many more architectural variations for dense image prediction can be found at the network level.
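The two-level idea can be pictured with a small sketch: a cell-level choice of candidate operations, plus a network-level path that decides how spatial resolution changes from layer to layer. This is an illustration only, not the authors' code; the operation names, resolution set, and function below are hypothetical stand-ins.

```python
import random

# Illustrative sketch (not the paper's implementation) of a two-level
# search space: cell-level operations plus a network-level resolution path.
CELL_OPS = ["3x3_sep_conv", "5x5_sep_conv", "3x3_atrous_conv",
            "skip_connect", "max_pool"]
RESOLUTIONS = [4, 8, 16, 32]  # downsampling factors relative to the input

def sample_architecture(num_layers=12, num_cell_edges=10, seed=0):
    """Randomly sample one (cell, network-path) pair from the search space."""
    rng = random.Random(seed)
    # Cell level: pick one operation per edge of the cell.
    cell = [rng.choice(CELL_OPS) for _ in range(num_cell_edges)]
    # Network level: a path over resolutions; each layer may stay at the
    # same resolution, or move one step coarser or finer.
    path, level = [], 0  # start at 1/4 of the input resolution
    for _ in range(num_layers):
        moves = [m for m in (-1, 0, 1) if 0 <= level + m < len(RESOLUTIONS)]
        level += rng.choice(moves)
        path.append(RESOLUTIONS[level])
    return cell, path

cell, path = sample_architecture()
print(cell)  # one operation per cell edge
print(path)  # one downsampling factor per layer, changing at most 2x per step
```

A random sample like this only illustrates the size and shape of the joint search space; the actual method searches it with a differentiable, gradient-based formulation rather than random sampling.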
Read the full article on Synced's website using the link below.