office: Campus E1 4, Saarland Informatics Campus, 66123 Saarbrücken, Germany
email: sshimada [at] mpi-inf dot mpg dot de
phone: +49 681 9325-4055
fax: +49 681 9325-4099
Unbiased 4D: Monocular 4D Reconstruction with a Neural Deformation Model
E. Johnson, M. Habermann, S. Shimada, V. Golyanik and C. Theobalt
Accepted at Computer Vision and Pattern Recognition Workshops (CVPRW), 2023. [project page] [paper]
MoCapDeform: Monocular 3D Human Motion Capture in Deformable Scenes
Z. Li, S. Shimada, B. Schiele, C. Theobalt and V. Golyanik
Accepted at 3D Vision (3DV), 2022. (Best Student Paper Award) [project page] [paper]
HULC: 3D Human Motion Capture with Pose Manifold Sampling and Dense Contact Guidance
S. Shimada, V. Golyanik, Z. Li, P. Pérez, W. Xu and C. Theobalt
Accepted at European Conference on Computer Vision (ECCV), 2022. [project page] [paper]
UnrealEgo: A New Dataset for Robust Egocentric 3D Human Motion Capture
H. Akada, J. Wang, S. Shimada, M. Takahashi, C. Theobalt and V. Golyanik
Accepted at European Conference on Computer Vision (ECCV), 2022. [project page]
Physical Inertial Poser (PIP): Physics-aware Real-time Human Motion Tracking from Sparse Inertial Sensors
X. Yi, Y. Zhou, M. Habermann, S. Shimada, V. Golyanik, C. Theobalt and F. Xu
Accepted at Computer Vision and Pattern Recognition (CVPR), 2022. (Best Paper Finalist) [paper] [project page]
HandVoxNet++: 3D Hand Shape and Pose Estimation using Voxel-Based Neural Networks
J. Malik, S. Shimada, A. Elhayek, S. Ali, C. Theobalt, V. Golyanik and D. Stricker
Transactions on Pattern Analysis and Machine Intelligence (TPAMI), 2021.
We develop HandVoxNet++, a voxel-based deep network with 3D and graph convolutions trained in a fully supervised manner. Our method ranked first on the HANDS19 challenge dataset (Task 1: Depth-Based 3D Hand Pose Estimation) at the time we submitted our results to the portal in August 2020. [paper] [project page]
Gravity-Aware 3D Human-Object Reconstruction
R. Dabral, S. Shimada, A. Jain, C. Theobalt and V. Golyanik
In International Conference on Computer Vision (ICCV), 2021.
This paper proposes GraviCap, a new approach for joint markerless 3D human motion capture and object trajectory estimation from monocular RGB videos. [paper] [project page] [code] [dataset]
Neural Monocular 3D Human Motion Capture with Physical Awareness
S. Shimada, V. Golyanik, W. Xu, P. Pérez and C. Theobalt
ACM Transactions on Graphics (Proc. of SIGGRAPH), 2021.
We propose a monocular 3D motion capture algorithm that is aware of physical and environmental constraints. [paper] [project page]
PhysCap: Physically Plausible Monocular 3D Motion Capture in Real Time
S. Shimada, V. Golyanik, W. Xu and C. Theobalt
ACM Transactions on Graphics (Proc. of SIGGRAPH Asia), 2020.
We present PhysCap, the first algorithm for physically plausible, real-time and marker-less human 3D motion capture with a single colour camera at 25 fps. [paper] [project page] [arXiv]
Fast Simultaneous Gravitational Alignment of Multiple Point Sets
V. Golyanik, S. Shimada and C. Theobalt
In International Conference on 3D Vision (3DV), 2020. (Oral)
This paper proposes a new resilient technique for simultaneous registration of multiple point sets that interprets the point sets as particle swarms moving rigidly in mutually induced force fields. [paper] [project page]
HandVoxNet: Deep Voxel-Based Network for 3D Hand Shape and Pose Estimation from a Single Depth Map
J. Malik, I. Abdelaziz, A. Elhayek, S. Shimada, S. A. Ali, V. Golyanik, C. Theobalt and D. Stricker
Accepted at Computer Vision and Pattern Recognition (CVPR), 2020.
We propose a novel architecture with 3D convolutions trained in a weakly supervised manner. It combines the advantages of voxel and surface representations by registering the hand surface to the voxelized hand shape. [paper] [supplement] [project page] [arXiv]
DispVoxNets: Non-Rigid Point Set Alignment with Supervised Learning Proxies
S. Shimada, V. Golyanik, E. Tretschk, D. Stricker and C. Theobalt
In International Conference on 3D Vision (3DV), 2019. (Oral)
We introduce a new kind of supervised-learning framework for non-rigid point set alignment, Displacements on Voxels Networks (DispVoxNets), which abstracts away from the point set representation and regresses 3D displacement fields on regularly sampled proxy 3D voxel grids. [paper] [poster] [project page] [arXiv]
IsMo-GAN: Adversarial Learning for Monocular Non-Rigid 3D Reconstruction
S. Shimada, V. Golyanik, C. Theobalt and D. Stricker
In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), 2019. (Oral)
We present the Isometry-Aware Monocular Generative Adversarial Network (IsMo-GAN), an approach for direct 3D reconstruction from a single image whose deformation model is trained in an adversarial manner on a lightweight synthetic dataset. [paper] [code] [arXiv]
HDM-Net: Monocular Non-Rigid 3D Reconstruction with Learned Deformation Model
V. Golyanik, S. Shimada, K. Varanasi and D. Stricker
In International Conference on Virtual Reality and Augmented Reality (EuroVR), 2018. (Oral, Long Paper)
We propose a new hybrid approach for monocular non-rigid reconstruction, which we call the Hybrid Deformation Model Network (HDM-Net). In our approach, a deformation model is learned by a deep neural network with a combination of domain-specific loss functions. [paper] [HDM-Net data set]