Monocular Real-time Hand Shape and Motion Capture using Multi-modal Data

Abstract

We present a novel method for monocular hand shape and pose estimation at an unprecedented runtime of 100 fps and at state-of-the-art accuracy. This is enabled by a new learning-based architecture designed so that it can make use of all available sources of hand training data: image data with either 2D or 3D annotations, as well as stand-alone 3D animations without corresponding image data. It features a 3D hand joint detection module and an inverse kinematics module which not only regresses 3D joint positions but also maps them to joint rotations in a single feed-forward pass. This output makes the method more directly usable for applications in computer vision and graphics than methods that regress 3D joint positions alone. We demonstrate that our architectural design leads to a significant quantitative and qualitative improvement over the state of the art on several challenging benchmarks.
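The two-stage pipeline described above can be sketched as follows. This is a minimal illustration with hypothetical shapes and plain linear maps standing in for the paper's networks (the actual models are in the GitHub repository); the point is the data flow: image → 3D joint positions → per-joint rotations, all in one feed-forward pass.

```python
import numpy as np

NUM_JOINTS = 21  # standard hand-skeleton joint count

def detect_joints(image, weights):
    """Stage 1 (sketch): regress 3D joint positions from an input image.
    Stands in for the 3D hand joint detection module; a single linear
    map is used here purely for illustration, not the paper's CNN."""
    features = image.reshape(-1)             # flatten pixels into a vector
    positions = weights @ features           # (NUM_JOINTS * 3,) vector
    return positions.reshape(NUM_JOINTS, 3)  # one 3D point per joint

def inverse_kinematics(positions, weights):
    """Stage 2 (sketch): map 3D joint positions to per-joint rotations
    in a single feed-forward pass (axis-angle vectors for simplicity)."""
    flat = positions.reshape(-1)             # (NUM_JOINTS * 3,)
    return (weights @ flat).reshape(NUM_JOINTS, 3)

# Toy input and random stand-in weights (illustrative only).
rng = np.random.default_rng(0)
image = rng.random((32, 32, 3))
w_det = rng.standard_normal((NUM_JOINTS * 3, image.size))
w_ik = rng.standard_normal((NUM_JOINTS * 3, NUM_JOINTS * 3))

joints = detect_joints(image, w_det)         # (21, 3) joint positions
thetas = inverse_kinematics(joints, w_ik)    # (21, 3) joint rotations
```

Because the second module outputs joint rotations rather than only positions, the result can drive a rigged hand model directly, which is what makes the method usable for graphics applications.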

Downloads


  • Paper (9 MB)

  • Supplementary Doc (4 MB)

  • Video (97 MB)


  • GitHub


Citation


@inproceedings{zhou2019monocular,
  title={Monocular Real-time Hand Shape and Motion Capture using Multi-modal Data},
  author={Zhou, Yuxiao and Habermann, Marc and Xu, Weipeng and Habibie, Ikhsanul and Theobalt, Christian and Xu, Feng},
  booktitle={{IEEE} Conference on Computer Vision and Pattern Recognition (CVPR)},
  pages={0--0},
  year={2020}
}
				

Contact

For questions or clarifications, please get in touch with:
Feng Xu feng-xu@tsinghua.edu.cn
Yuxiao Zhou zhou-yx19@mails.tsinghua.edu.cn
