Object co-segmentation

Example video frames and their object co-segmentation annotations (ground truth) in the Noisy-ViDiSeg[1] dataset. Object segments are outlined in red.

In computer vision, object co-segmentation is a special case of image segmentation, defined as jointly segmenting semantically similar objects in multiple images or video frames.[2][3] The problem of separating a foreground object from the background across all frames of a video is known as video object segmentation. The goal is to label each pixel in every video frame according to whether or not it belongs to the (unknown) target object. The resulting segmentation is a spatio-temporal object tube delineating the boundaries of the object throughout the video. Such a capability is useful for a variety of computer vision tasks, such as object-centric video summarization, action analysis, video surveillance, and content-based video retrieval.
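
As a concrete illustration of this labeling, the following minimal sketch represents a spatio-temporal object tube as a stack of per-frame binary masks. The array layout and function name are illustrative assumptions, not taken from any of the cited works.

    import numpy as np

    def object_tube(masks):
        """Stack per-frame binary masks (each H x W) into a T x H x W tube."""
        return np.stack(masks, axis=0).astype(bool)

    # Toy example: 3 frames of 4 x 4 pixels with a 2 x 2 object moving right.
    masks = []
    for t in range(3):
        m = np.zeros((4, 4), dtype=bool)
        m[1:3, t:t + 2] = True  # the object shifts one pixel per frame
        masks.append(m)

    tube = object_tube(masks)
    print(tube.shape)             # (3, 4, 4): one binary mask per frame
    print(tube.sum(axis=(1, 2)))  # object area per frame: [4 4 4]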

Challenges

It is often challenging to extract segmentation masks of a target object from a noisy collection of images or video frames, since this involves object discovery coupled with segmentation. A noisy collection is one in which the target object appears only sporadically in the set of images, or disappears intermittently throughout the video of interest. Most methods perform only video object discovery, or perform video object segmentation under the unrealistically optimistic assumption that the target object is present in all (or most) video frames. Methods that are robust to a large number of noisy frames (i.e., irrelevant frames devoid of the target object) are therefore needed.

Moreover, most existing methods emphasize low-level features (e.g., color and motion) or contextual information shared among individual or consecutive frames to find the common regions, and employ only short-term motion (e.g., optical flow) between consecutive frames to smooth the spatio-temporal segmentation. They therefore often encounter difficulties when the objects exhibit large variations in appearance, motion, size, pose, and viewpoint. A sketch of this flow-based smoothing strategy follows.
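
The following minimal sketch shows the kind of short-term, flow-based mask propagation referred to above, using OpenCV's Farnebäck optical flow as one concrete choice; the function name and inputs are illustrative assumptions rather than any specific published method.

    import cv2
    import numpy as np

    def propagate_mask(frame_t, frame_t1, mask_t):
        """Warp a binary mask from frame t onto frame t+1 via backward flow."""
        g0 = cv2.cvtColor(frame_t, cv2.COLOR_BGR2GRAY)
        g1 = cv2.cvtColor(frame_t1, cv2.COLOR_BGR2GRAY)
        # Backward flow: for each pixel of frame t+1, where it came from in frame t.
        flow = cv2.calcOpticalFlowFarneback(g1, g0, None,
                                            0.5, 3, 15, 3, 5, 1.2, 0)
        h, w = g1.shape
        ys, xs = np.mgrid[0:h, 0:w].astype(np.float32)
        map_x = xs + flow[..., 0]
        map_y = ys + flow[..., 1]
        warped = cv2.remap(mask_t.astype(np.uint8), map_x, map_y,
                           cv2.INTER_NEAREST)
        return warped.astype(bool)

Because each warp uses only two consecutive frames, errors accumulate over time, which is one reason such smoothing struggles under large appearance and motion changes.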

Furthermore, several methods[4][5][6][7][8][9] employ a mid-level representation of objects (i.e., object proposals[10]) as an additional cue to facilitate segmentation of the object, with object discovery and object segmentation conveniently isolated as two independent tasks performed in a two-step manner.[11] Disregarding the dependency between the two tasks often leads to sub-optimal performance, e.g., object segmentation failing to focus on the target, or object discovery providing wildly inaccurate object proposals. The schematic sketch below makes this two-step decoupling explicit.
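
In the following schematic sketch of such a two-step pipeline, `generate_proposals`, `score_proposal`, and `segment_within` are hypothetical placeholders, not functions from the cited works; the point is only that step 2 receives no feedback from step 1.

    def two_step_pipeline(frames, generate_proposals, score_proposal, segment_within):
        # Step 1: discovery -- pick the best-scoring proposal per frame,
        # with no feedback from the segmentation stage.
        picked = []
        for frame in frames:
            proposals = generate_proposals(frame)  # mid-level object proposals
            picked.append(max(proposals, key=score_proposal))
        # Step 2: segmentation -- refine each chosen proposal independently.
        # A wrong proposal from step 1 cannot be corrected here, which is
        # the sub-optimality noted in the text.
        return [segment_within(frame, box) for frame, box in zip(frames, picked)]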


Dynamic Markov network-based methods

The inference process of the two coupled dynamic Markov networks for joint video object discovery and segmentation[1]
A joint object discovery and co-segmentation framework based on coupled dynamic Markov networks[1]

A joint object discovery and co-segmentation method based on coupled dynamic Markov networks[1] claims significant improvements in robustness against irrelevant/noisy video frames. A principled probabilistic model is introduced with two coupled dynamic Markov networks, one for discovery and the other for segmentation. When Bayesian inference is conducted on this model using belief propagation, the bi-directional propagation of beliefs about the object's posteriors on an object proposal graph and a superpixel graph reveals a clear collaboration between the two inference tasks. More specifically, object discovery is conducted through the object proposal graph, which represents the correlations of object proposals among multiple frames and is built with the help of the spatio-temporal object segmentation tube obtained by object segmentation on the superpixel graph. Object segmentation is achieved on the superpixel graph, which represents the connections of superpixels and benefits from the spatio-temporal object proposal tube generated by object discovery through the object proposal graph.
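
At a very high level, this collaboration can be pictured as alternating message passing between the two graphs, as in the following schematic sketch; every function here is a hypothetical placeholder, not the actual message-passing machinery of the cited paper.

    def coupled_inference(video, build_proposal_graph, build_superpixel_graph,
                          discover, segment, n_rounds=5):
        proposal_graph = build_proposal_graph(video)      # nodes: object proposals
        superpixel_graph = build_superpixel_graph(video)  # nodes: superpixels
        proposal_tube, seg_tube = None, None
        for _ in range(n_rounds):
            # Discovery uses the current segmentation tube as evidence ...
            proposal_tube = discover(proposal_graph, seg_tube)
            # ... and segmentation uses the discovered proposal tube in turn.
            seg_tube = segment(superpixel_graph, proposal_tube)
        return proposal_tube, seg_tube

Contrast this with the two-step pipeline above: here each task's output is repeatedly fed back into the other, which is what couples the two inference problems.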


Given a video $\mathcal{V}$ with a significant number of noisy frames, the goal is to jointly find an object discovery labeling $\mathcal{D} = \{\mathbf{d}_t\}_{t=1}^{T}$ and an object segmentation labeling $\mathcal{S} = \{\mathbf{s}_t\}_{t=1}^{T}$ from $\mathcal{V}$. $\mathcal{D}$ is a spatio-temporal region (object) proposal tube of $\mathcal{V}$. $\mathbf{d}_t = \{d_t^i\}_{i=1}^{N_t}$ is the object discovery label of each frame $t$, where $d_t^i \in \{0, 1\}$ and $\sum_{i=1}^{N_t} d_t^i \le 1$, i.e., no more than one region proposal among all the $N_t$ proposals in frame $t$ will be identified as the object. $\mathcal{S}$ is a spatio-temporal object segmentation tube of $\mathcal{V}$. $\mathbf{s}_t = \{s_t^j\}_{j=1}^{M_t}$ is the object segmentation label of frame $t$, where $s_t^j \in \{1, 0\}$ denotes that each of the $M_t$ superpixels either belongs to the object ($s_t^j = 1$) or the background ($s_t^j = 0$). The image observations associated with $\mathcal{D}$, $\mathcal{S}$, $\mathbf{d}_t$, and $\mathbf{s}_t$ are denoted by $\mathcal{Y}$, $\mathcal{Z}$, $\mathbf{y}_t$, and $\mathbf{z}_t$, respectively, where $y_t^i$ and $z_t^j$ are the representations of a region proposal and a superpixel, respectively.
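
As a toy encoding of these labels (under the reconstructed notation above, which is itself a notational assumption), each frame's discovery label is an at-most-one-hot vector over its proposals, and its segmentation label is a binary vector over its superpixels:

    import numpy as np

    d_t = np.array([0, 1, 0, 0])     # proposal 1 is the object; at most one 1
    s_t = np.array([1, 1, 0, 0, 1])  # superpixels 0, 1, 4 belong to the object

    assert d_t.sum() <= 1                 # no more than one proposal per frame
    assert set(np.unique(s_t)) <= {0, 1}  # each superpixel: object or background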


Specifically, beneficial information is propagated between the joint inference of $\mathbf{d}_t$ and $\mathbf{s}_t$, so that video object discovery and video object segmentation naturally benefit each other. A Markov network is employed to characterize the joint object discovery and segmentation at frame $t$. The undirected link between $\mathbf{d}_t$ and $\mathbf{s}_t$ represents the mutual influence of object discovery and object segmentation, and is associated with a potential compatibility function $\psi(\mathbf{d}_t, \mathbf{s}_t)$. The directed links represent the image observation processes, and are associated with two image likelihood functions $p(\mathbf{y}_t \mid \mathbf{d}_t)$ and $p(\mathbf{z}_t \mid \mathbf{s}_t)$. According to Bayes' rule,

$$p(\mathbf{d}_t, \mathbf{s}_t \mid \mathbf{y}_t, \mathbf{z}_t) = \frac{1}{\lambda}\, p(\mathbf{y}_t \mid \mathbf{d}_t)\, p(\mathbf{z}_t \mid \mathbf{s}_t)\, \psi(\mathbf{d}_t, \mathbf{s}_t),$$

where $\lambda$ is a normalization constant. The above Markov network is a generative model at a single time instant.
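
A worked toy instance of this factorization, with a single binary discovery label, a single binary segmentation label, and purely illustrative numbers:

    import numpy as np

    p_y_given_d = np.array([0.2, 0.8])   # p(y_t | d_t) for d_t = 0, 1
    p_z_given_s = np.array([0.3, 0.7])   # p(z_t | s_t) for s_t = 0, 1
    psi = np.array([[0.9, 0.1],          # psi(d_t, s_t): agreement between the
                    [0.1, 0.9]])         # two labels rewarded, disagreement penalized

    joint = p_y_given_d[:, None] * p_z_given_s[None, :] * psi
    posterior = joint / joint.sum()      # dividing by lambda = joint.sum()
    print(posterior)                     # mass concentrates on (d_t, s_t) = (1, 1)

The compatibility function is what couples the two labels: with a uniform $\psi$, the posterior would factor into two independent inferences.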


Putting the above Markov network into temporal context by accommodating dynamic models yields two coupled dynamic Markov networks, where the subscript $t$ represents the time index. The collective image observations associated with the object discovery labels from the beginning to time $t$ are denoted by $\mathcal{Y}_t^{-} = \{\mathbf{y}_1, \dots, \mathbf{y}_t\}$, and reversely from the end to time $t$ by $\mathcal{Y}_t^{+} = \{\mathbf{y}_T, \dots, \mathbf{y}_t\}$. The collective image observations associated with the object segmentation labels are built in the same way, i.e., $\mathcal{Z}_t^{-} = \{\mathbf{z}_1, \dots, \mathbf{z}_t\}$ and $\mathcal{Z}_t^{+} = \{\mathbf{z}_T, \dots, \mathbf{z}_t\}$. In this formulation, the problem of joint video object discovery and segmentation from a single noisy video is to perform Bayesian inference on the dynamic Markov networks to obtain the marginal posterior probabilities $p(\mathbf{d}_t \mid \mathcal{Y}_t^{-}, \mathcal{Y}_t^{+}, \mathcal{Z}_t^{-}, \mathcal{Z}_t^{+})$ and $p(\mathbf{s}_t \mid \mathcal{Y}_t^{-}, \mathcal{Y}_t^{+}, \mathcal{Z}_t^{-}, \mathcal{Z}_t^{+})$.
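
The two-directional collective observations suggest a forward/backward style of inference. The following minimal sketch runs the standard forward-backward recursion on a single discrete-label chain, only to illustrate how a marginal posterior at time $t$ combines evidence from both temporal directions; it is not the coupled inference algorithm of the cited paper.

    import numpy as np

    def forward_backward(likelihoods, transition):
        """likelihoods: T x K per-frame evidence; transition: K x K dynamics."""
        T, K = likelihoods.shape
        fwd = np.zeros((T, K)); bwd = np.zeros((T, K))
        fwd[0] = likelihoods[0] / likelihoods[0].sum()
        for t in range(1, T):            # pass evidence forward in time
            fwd[t] = likelihoods[t] * (transition.T @ fwd[t - 1])
            fwd[t] /= fwd[t].sum()
        bwd[-1] = 1.0 / K
        for t in range(T - 2, -1, -1):   # and backward from the end
            bwd[t] = transition @ (likelihoods[t + 1] * bwd[t + 1])
            bwd[t] /= bwd[t].sum()
        post = fwd * bwd                 # combine both directions at each t
        return post / post.sum(axis=1, keepdims=True)

    # Toy run: 4 frames, 2 labels (object absent/present), sticky dynamics.
    lik = np.array([[0.9, 0.1], [0.4, 0.6], [0.2, 0.8], [0.7, 0.3]])
    trans = np.array([[0.8, 0.2], [0.2, 0.8]])
    print(forward_backward(lik, trans))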


References

  1. ^ a b c d Liu, Ziyi; Wang, Le; Hua, Gang; Zhang, Qilin; Niu, Zhenxing; Wu, Ying; Zheng, Nanning (2018). "Joint Video Object Discovery and Segmentation by Coupled Dynamic Markov Networks" (PDF). IEEE Transactions on Image Processing. 27 (12): 5840–5853. doi:10.1109/tip.2018.2859622. ISSN 1057-7149.
  2. ^ Vicente, Sara; Rother, Carsten; Kolmogorov, Vladimir (2011). Object cosegmentation. IEEE. doi:10.1109/cvpr.2011.5995530. ISBN 978-1-4577-0394-2.
  3. ^ Chen, Ding-Jie; Chen, Hwann-Tzong; Chang, Long-Wen (2012). Video object cosegmentation. New York, New York, USA: ACM Press. doi:10.1145/2393347.2396317. ISBN 978-1-4503-1089-5.
  4. ^ Lee, Yong Jae; Kim, Jaechul; Grauman, Kristen (2011). Key-segments for video object segmentation. IEEE. doi:10.1109/iccv.2011.6126471. ISBN 978-1-4577-1102-2.
  5. ^ Ma, Tianyang; Latecki, Longin Jan (2012). Maximum weight cliques with mutex constraints for video object segmentation. IEEE. doi:10.1109/cvpr.2012.6247735.
  6. ^ Zhang, Dong; Javed, Omar; Shah, Mubarak (2013). Video Object Segmentation through Spatially Accurate and Temporally Dense Extraction of Primary Object Regions. IEEE. doi:10.1109/cvpr.2013.87. ISBN 978-0-7695-4989-7.
  7. ^ Fragkiadaki, Katerina; Arbelaez, Pablo; Felsen, Panna; Malik, Jitendra (2015). Learning to segment moving objects in videos. IEEE. doi:10.1109/cvpr.2015.7299035. ISBN 978-1-4673-6964-0.
  8. ^ Perazzi, Federico; Wang, Oliver; Gross, Markus; Sorkine-Hornung, Alexander (2015). Fully Connected Object Proposals for Video Segmentation. IEEE. doi:10.1109/iccv.2015.369. ISBN 978-1-4673-8391-2.
  9. ^ Koh, Yeong Jun; Kim, Chang-Su (2017). Primary Object Segmentation in Videos Based on Region Augmentation and Reduction. IEEE. doi:10.1109/cvpr.2017.784. ISBN 978-1-5386-0457-1.
  10. ^ Krähenbühl, Philipp; Koltun, Vladlen (2014). "Geodesic Object Proposals". Computer Vision – ECCV 2014. Cham: Springer International Publishing. pp. 725–739. doi:10.1007/978-3-319-10602-1_47. ISBN 978-3-319-10601-4. ISSN 0302-9743.
  11. ^ Xue, Jianru; Wang, Le; Zheng, Nanning; Hua, Gang (2013). "Automatic salient object extraction with contextual cue and its applications to recognition and alpha matting". Pattern Recognition. 46 (11). Elsevier BV: 2874–2889. doi:10.1016/j.patcog.2013.03.028. ISSN 0031-3203.