| office: | Campus E1 4, Room 115F, Saarland Informatics Campus, 66123 Saarbrücken, Germany |
| --- | --- |
| email: | Get my email address via email |
| phone: | +49 681 9325 4540 |
| fax: | +49 681 9325 4099 |
Scene-aware Egocentric 3D Human Pose Estimation
J. Wang, D. Luvizon, W. Xu, L. Liu, K. Sarkar, and C. Theobalt
To appear in CVPR 2023.
Description:
In this work, we propose to estimate egocentric human pose guided by scene constraints. We devise a new scene depth estimation network for a wide-view egocentric fisheye camera, which recovers the depth behind the human with a depth-inpainting network. Our pose estimation model projects 2D image features and the estimated scene depth into a common voxel space and regresses the 3D pose with a V2V network (a toy sketch of the depth-to-voxel step follows the links below). We also generate a synthetic dataset, EgoGTA, and an in-the-wild dataset, EgoPW-Scene, based on EgoPW.
[arXiv]
[project page]
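
Below is a minimal, hypothetical sketch of the depth-to-voxel step referenced in the description. It assumes a simple pinhole camera and a binary occupancy grid, whereas the paper uses a fisheye camera model and projects learned image features, so it only illustrates the general back-projection idea.

```python
import numpy as np

def depth_to_voxel_occupancy(depth, fx, fy, cx, cy,
                             grid_origin, voxel_size, grid_dims):
    """Back-project a depth map into a binary voxel occupancy grid.

    Assumes a pinhole camera (fx, fy, cx, cy); the paper instead uses a
    wide-view fisheye model and fills a feature volume rather than occupancy.
    """
    h, w = depth.shape
    us, vs = np.meshgrid(np.arange(w), np.arange(h))
    z = depth.ravel()
    valid = z > 0
    x = (us.ravel()[valid] - cx) * z[valid] / fx
    y = (vs.ravel()[valid] - cy) * z[valid] / fy
    points = np.stack([x, y, z[valid]], axis=1)            # camera-space 3D points
    idx = np.floor((points - grid_origin) / voxel_size).astype(int)
    inside = np.all((idx >= 0) & (idx < np.array(grid_dims)), axis=1)
    occupancy = np.zeros(grid_dims, dtype=np.float32)
    occupancy[tuple(idx[inside].T)] = 1.0
    return occupancy

# Illustrative usage with a synthetic depth map.
depth = np.full((64, 64), 2.5, dtype=np.float32)
grid = depth_to_voxel_occupancy(depth, fx=80.0, fy=80.0, cx=32.0, cy=32.0,
                                grid_origin=np.array([-2.0, -2.0, 0.0]),
                                voxel_size=0.1, grid_dims=(40, 40, 40))
print(grid.sum(), "voxels occupied")
```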
Scene-Aware 3D Multi-Human Motion Capture from a Single Camera
D. Luvizon, M. Habermann, V. Golyanik, A. Kortylewski and C. Theobalt
To appear in Eurographics 2023.
Description:
We introduce the first non-linear optimization-based approach that jointly solves for the absolute 3D position of each human, their articulated pose, their individual shape, and the scale of the scene. Given the per-frame 3D estimates of the humans and the scene point cloud, we perform a space-time coherent optimization over the video to ensure temporal, spatial, and physical plausibility (a toy illustration of such an optimization follows the links below). We consistently outperform previous methods and qualitatively demonstrate that our method is robust to in-the-wild conditions, including challenging scenes with people of different sizes.
[arXiv]
[project page]
[source code]
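
As a hypothetical illustration of the space-time coherent optimization mentioned above, the toy example below fits a trajectory to noisy per-frame 3D estimates while penalizing acceleration. The paper's actual objective additionally solves for the absolute positions, body shapes, scene scale, and physical plausibility terms.

```python
import numpy as np
from scipy.optimize import least_squares

def spacetime_smooth(per_frame_estimates, lambda_temporal=5.0):
    """Toy space-time optimization: keep the trajectory close to per-frame
    estimates (data term) while penalizing frame-to-frame acceleration
    (temporal term). Illustration only; not the paper's full objective."""
    T, D = per_frame_estimates.shape

    def residuals(x):
        X = x.reshape(T, D)
        data = (X - per_frame_estimates).ravel()
        accel = lambda_temporal * (X[2:] - 2.0 * X[1:-1] + X[:-2]).ravel()
        return np.concatenate([data, accel])

    result = least_squares(residuals, per_frame_estimates.ravel())
    return result.x.reshape(T, D)

# Illustrative usage: a noisy 3D root trajectory over 50 frames.
rng = np.random.default_rng(0)
gt = np.stack([np.linspace(0, 5, 50), np.zeros(50), np.full(50, 1.0)], axis=1)
smoothed = spacetime_smooth(gt + rng.normal(0, 0.05, gt.shape))
print(np.abs(smoothed - gt).mean())
```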
HandFlow: Quantifying View-Dependent 3D Ambiguity in Two-Hand Reconstruction with Normalizing Flow
J. Wang, D. Luvizon, F. Mueller, F. Bernard, A. Kortylewski, D. Casas, C. Theobalt
VMV 2022, Best Paper Honorable Mention
Description: This work presents the first probabilistic method to estimate a distribution of plausible two-hand poses given a monocular RGB input, and quantitatively shows that existing deterministic methods are not suited for this ambiguous task. We demonstrate the quality of our probabilistic reconstruction and show that explicit ambiguity modeling is better suited for this challenging problem (a minimal conditional-flow sketch follows the links below).
[arXiv]
[project page]
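
The snippet below is a minimal, hypothetical conditional affine-coupling block, the basic building block of a normalizing flow such as one used to model a pose distribution. The dimensions, conditioning scheme, and architecture here are illustrative assumptions, not the paper's actual network.

```python
import torch
import torch.nn as nn

class ConditionalAffineCoupling(nn.Module):
    """One affine coupling layer conditioned on image features.

    Stacking several such layers yields an invertible map between pose vectors
    and a latent space, so sampling the latent produces a distribution of
    plausible poses for one image. Minimal sketch with assumed dimensions.
    """
    def __init__(self, pose_dim=122, cond_dim=256, hidden=128):
        super().__init__()
        self.half = pose_dim // 2
        self.net = nn.Sequential(
            nn.Linear(self.half + cond_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, 2 * (pose_dim - self.half)))

    def forward(self, pose, cond):
        a, b = pose[:, :self.half], pose[:, self.half:]
        scale, shift = self.net(torch.cat([a, cond], dim=1)).chunk(2, dim=1)
        scale = torch.tanh(scale)                 # keep the transform stable
        z = torch.cat([a, b * torch.exp(scale) + shift], dim=1)
        log_det = scale.sum(dim=1)                # log |det J| of the coupling
        return z, log_det

# Illustrative usage with random pose and feature tensors.
layer = ConditionalAffineCoupling()
z, log_det = layer(torch.randn(4, 122), torch.randn(4, 256))
print(z.shape, log_det.shape)
```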
Estimating Egocentric 3D Human Pose in the Wild with External Weak Supervision
J. Wang, L. Liu, W. Xu, K. Sarkar, D. Luvizon, C. Theobalt
CVPR 2022
Description: We present a new egocentric pose estimation method that can be trained with weak external supervision. To facilitate network training, we propose a novel learning strategy that supervises the egocentric features with high-quality features extracted by a pretrained external-view pose estimation model (a toy feature-supervision loss follows the links below). We also collect a large-scale in-the-wild egocentric dataset, Egocentric Poses in the Wild (EgoPW), captured with a head-mounted fisheye camera and an auxiliary external camera that provides additional observations of the human body from a third-person perspective.
[paper]
[arXiv]
[project page]
[data]
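
Below is a hypothetical sketch of the feature-level supervision idea from the description: egocentric features are pulled toward features extracted by a frozen, pretrained external-view model. The paper's full learning strategy combines this with further pose supervision, so this shows only the core idea.

```python
import torch
import torch.nn.functional as F

def external_feature_supervision(ego_feat, ext_feat):
    """Toy feature-supervision loss: match student (egocentric) features to
    teacher (external-view) features. detach() keeps the teacher frozen so
    gradients only update the egocentric branch."""
    return F.mse_loss(ego_feat, ext_feat.detach())

# Illustrative usage with random tensors standing in for network outputs.
ego_feat = torch.randn(8, 256, requires_grad=True)
ext_feat = torch.randn(8, 256)
loss = external_feature_supervision(ego_feat, ext_feat)
loss.backward()
print(float(loss))
```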
Adaptive Multiplane Image Generation from a Single Internet Picture
Diogo C. Luvizon, Gustavo Sutter P. Carvalho, Andreza A. dos Santos, Jhonatas S. Conceicao, Jose L. Flores-Campana, Luis G. L. Decker, Marcos R. Souza, Helio Pedrini, Antonio Joia, Otavio A. B. Penatti
WACV 2021, CVPR 2021 Workshop on Learning to Generate 3D Shapes and Scenes
Description: In this paper, we address the problem of generating an efficient multiplane image (MPI) from a single high-resolution picture. We present the adaptive-MPI representation, which allows rendering novel views with low computational requirements. To this end, we propose an adaptive slicing algorithm that produces an MPI with a variable number of image planes (a toy slicing example follows the links below). We also present a new lightweight CNN for depth estimation, learned by knowledge distillation from a larger network, and occluded regions in the adaptive-MPI are also inpainted by a lightweight CNN. Our method produces high-quality predictions with one order of magnitude fewer parameters than previous approaches.
[paper]
[arXiv]
[CVPRW'21 link]
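
As a hypothetical illustration of slicing with a variable number of planes, the toy function below places MPI planes only where the depth histogram carries enough mass. The paper's adaptive slicing algorithm differs in its details; this just shows why the plane count can vary per image.

```python
import numpy as np

def adaptive_plane_depths(depth_map, max_planes=32, min_mass=0.01):
    """Toy adaptive slicing: keep a depth plane only if the corresponding
    histogram bin holds at least min_mass of the pixels, so the number of
    planes adapts to the depth distribution of the input picture."""
    d = depth_map[np.isfinite(depth_map)].ravel()
    hist, edges = np.histogram(d, bins=max_planes)
    mass = hist / hist.sum()
    centers = 0.5 * (edges[:-1] + edges[1:])
    return centers[mass >= min_mass]              # depths of retained planes

# Illustrative usage: a synthetic depth map with two dominant layers.
rng = np.random.default_rng(0)
depth = np.concatenate([np.full(5000, 2.0), np.full(3000, 10.0)])
depth = (depth + rng.normal(0, 0.1, depth.size)).reshape(100, 80)
print(adaptive_plane_depths(depth))
```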