Scene-aware Egocentric 3D Human Pose Estimation

Jian Wang, Diogo Luvizon, Weipeng Xu, Lingjie Liu, Kripasindhu Sarkar, Christian Theobalt
¹ Max Planck Institute for Informatics, Saarland Informatics Campus    ² Meta Reality Labs    ³ Google
CVPR 2023

Abstract

Egocentric 3D human pose estimation with a single head-mounted fisheye camera has recently attracted attention due to its numerous applications in virtual and augmented reality. Existing methods still struggle in challenging poses where the human body is highly occluded or closely interacting with the scene. To address this issue, we propose a scene-aware egocentric pose estimation method that guides the prediction of the egocentric pose with scene constraints. To this end, we propose an egocentric depth estimation network that predicts the scene depth map from a wide-view egocentric fisheye camera while mitigating the occlusion of the human body with a depth-inpainting network. Next, we propose a scene-aware pose estimation network that projects the 2D image features and the estimated depth map of the scene into a voxel space and regresses the 3D pose with a V2V network. The voxel-based feature representation provides a direct geometric connection between the 2D image features and the scene geometry, and further enables the V2V network to constrain the predicted pose with the estimated scene geometry. To enable the training of these networks, we also generate a synthetic dataset, called EgoGTA, and an in-the-wild dataset based on EgoPW, called EgoPW-Scene. Experimental results on our new evaluation sequences show that the predicted 3D egocentric poses are accurate and physically plausible in terms of human-scene interaction, demonstrating that our method outperforms the state-of-the-art methods both quantitatively and qualitatively.
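
The voxel-based formulation above can be made concrete with a short sketch. The following minimal PyTorch illustration is not the authors' implementation: it unprojects 2D image features into a voxel grid using an estimated depth map, then regresses per-joint 3D heatmaps with a small V2V-style 3D CNN. The pinhole unprojection (a stand-in for the paper's fisheye model), the grid bounds, the tiny network, and the names unproject_to_voxels and TinyV2V are all illustrative assumptions.

# Minimal sketch (assumed names and shapes; pinhole camera instead of the
# paper's fisheye model; the real system uses an inpainted depth map and
# the full V2V architecture).
import torch
import torch.nn as nn

def unproject_to_voxels(feat, depth, K_inv, grid_min, grid_max, res):
    # feat: (C, H, W) image features; depth: (H, W) estimated scene depth;
    # K_inv: (3, 3) inverse camera intrinsics.
    C, H, W = feat.shape
    ys, xs = torch.meshgrid(torch.arange(H), torch.arange(W), indexing="ij")
    pix = torch.stack([xs, ys, torch.ones_like(xs)], dim=-1).float()  # (H, W, 3)
    pts = (pix @ K_inv.T) * depth.unsqueeze(-1)   # back-project pixels to 3D
    # Convert points to voxel indices and drop those outside the grid.
    idx = ((pts - grid_min) / (grid_max - grid_min) * res).long()
    valid = ((idx >= 0) & (idx < res)).all(dim=-1)
    idx, vals = idx[valid], feat.permute(1, 2, 0)[valid]   # (N, 3), (N, C)
    vox = feat.new_zeros(C, res, res, res)
    flat = idx[:, 0] * res * res + idx[:, 1] * res + idx[:, 2]
    vox.view(C, -1).index_add_(1, flat, vals.T)   # accumulate features per voxel
    return vox

class TinyV2V(nn.Module):
    # Toy stand-in for the V2V network: 3D convolutions over the voxel
    # volume, producing one 3D heatmap per body joint.
    def __init__(self, in_ch, n_joints):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv3d(in_ch, 32, 3, padding=1), nn.ReLU(),
            nn.Conv3d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv3d(32, n_joints, 1),
        )
    def forward(self, vox):
        return self.net(vox)

# Usage with random stand-in inputs (real inputs would be fisheye image
# features and the depth map predicted by the egocentric depth network).
C, H, W, res, n_joints = 16, 64, 64, 32, 15
feat, depth = torch.randn(C, H, W), torch.rand(H, W) * 3.0 + 0.5
K_inv = torch.inverse(torch.tensor([[50., 0., 32.], [0., 50., 32.], [0., 0., 1.]]))
vox = unproject_to_voxels(feat, depth, K_inv,
                          grid_min=torch.tensor([-2., -2., 0.]),
                          grid_max=torch.tensor([2., 2., 4.]), res=res)
heatmaps = TinyV2V(C, n_joints)(vox.unsqueeze(0))  # (1, n_joints, res, res, res)

Taking a soft-argmax over each output heatmap would yield the 3D joint positions. The point of the construction is that the scene depth is scattered into the same voxel grid as the image features, so the 3D convolutions see human and scene geometry in a common coordinate frame, which is what lets the network constrain the predicted pose against the scene.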

Downloads


  • Paper

  • Suppl. Mat.

  • Code

  • Data License

  • SceneEgo Dataset (test split)

  • SceneEgo Dataset (train split)

  • EgoGTA

  • EgoPW-Scene

Citation

BibTeX

@inproceedings{wang2023scene,
  title={Scene-aware Egocentric 3D Human Pose Estimation},
  author={Wang, Jian and Luvizon, Diogo and Xu, Weipeng and Liu, Lingjie and Sarkar, Kripasindhu and Theobalt, Christian},
  booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
  year={2023}
}

Acknowledgments

Jian Wang, Diogo Luvizon, Lingjie Liu, and Christian Theobalt have been supported by the ERC Consolidator Grant 4DReply (770784).

Contact

For questions or clarifications, please get in touch with:
Jian Wang jianwang@mpi-inf.mpg.de
Lingjie Liu lliu@mpi-inf.mpg.de
