Estimating Egocentric 3D Human Pose in the Wild with External Weak Supervision

CVPR 2022

Abstract

Egocentric 3D human pose estimation with a single fisheye camera has drawn a significant amount of attention recently. However, existing methods struggle with pose estimation from in-the-wild images, because they can only be trained on synthetic data due to the unavailability of large-scale in-the-wild egocentric datasets. Furthermore, these methods easily fail when the body parts are occluded by or interacting with the surrounding scene. To address the shortage of in-the-wild data, we collect a large-scale in-the-wild egocentric dataset called Egocentric Poses in the Wild (EgoPW). This dataset is captured by a head-mounted fisheye camera and an auxiliary external camera, which provides an additional observation of the human body from a third-person perspective during training. We present a new egocentric pose estimation method, which can be trained on the new dataset with weak external supervision. Specifically, we first generate pseudo labels for the EgoPW dataset with a spatio-temporal optimization method by incorporating the external-view supervision. The pseudo labels are then used to train an egocentric pose estimation network. To facilitate the network training, we propose a novel learning strategy to supervise the egocentric features with the high-quality features extracted by a pretrained external-view pose estimation model. The experiments show that our method predicts accurate 3D poses from a single in-the-wild egocentric image and outperforms the state-of-the-art methods both quantitatively and qualitatively.
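The feature-level supervision mentioned in the abstract can be illustrated with a minimal PyTorch sketch. Here, intermediate features from the egocentric network are pushed towards features produced by a frozen, pretrained external-view pose network. The class name, the L2 feature distance, and the tensor shapes are assumptions for illustration, not the paper's exact formulation.

```python
import torch
import torch.nn as nn

class FeatureSupervisionLoss(nn.Module):
    """Hypothetical sketch: supervise egocentric features with features
    from a pretrained external-view pose estimation model."""

    def __init__(self):
        super().__init__()
        self.mse = nn.MSELoss()

    def forward(self, ego_feat: torch.Tensor, ext_feat: torch.Tensor) -> torch.Tensor:
        # Detach the external features so gradients only flow into
        # the egocentric network; the external model stays frozen.
        return self.mse(ego_feat, ext_feat.detach())

# Toy usage with random feature maps (batch=2, channels=256, 8x8 spatial).
loss_fn = FeatureSupervisionLoss()
ego_feat = torch.randn(2, 256, 8, 8, requires_grad=True)
ext_feat = torch.randn(2, 256, 8, 8)
loss = loss_fn(ego_feat, ext_feat)
loss.backward()  # gradients reach only the egocentric features
```

In practice such a distillation term would be added to the pose loss computed against the pseudo labels, weighted by a hyperparameter.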

Downloads


  • Paper

  • Suppl. Mat.

  • Video


  • Data License

  • Dataset


Citation

BibTeX

@inproceedings{wang2022estimating,
  title={Estimating Egocentric 3D Human Pose in the Wild with External Weak Supervision},
  author={Wang, Jian and Liu, Lingjie and Xu, Weipeng and Sarkar, Kripasindhu and Luvizon, Diogo and Theobalt, Christian},
  booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
  year={2022}
}
				

Acknowledgments

Jian Wang, Kripasindhu Sarkar and Christian Theobalt have been supported by the ERC Consolidator Grant 4DReply (770784), and Lingjie Liu has been supported by a Lise Meitner Postdoctoral Fellowship. We also thank Gereon Fox for his help with the narration of the supplementary video.

Contact

For questions or clarifications, please get in touch with:
Jian Wang jianwang@mpi-inf.mpg.de
Lingjie Liu lliu@mpi-inf.mpg.de
