Mirror-Aware Neural Humans 🏃🏻🪞 (3DV 2024)
- Daniel Ajisafe, The University of British Columbia
- James Tang, The University of British Columbia
- Shih-Yang Su, The University of British Columbia
- Bastian Wandt, Linköping University
- Helge Rhodin, The University of British Columbia
Abstract
Human motion capture either requires multi-camera systems or is unreliable from single-view input due to depth ambiguities. Meanwhile, mirrors are readily available in urban environments and form an affordable alternative, recording two views with only a single camera. However, the mirror setting poses the additional challenge of handling occlusions between the real and mirrored image. Going beyond existing mirror approaches for 3D human pose estimation, we utilize mirrors to learn a complete body model, including shape and dense appearance. Our main contributions are extending articulated neural radiance fields to include a notion of a mirror, and making them sample-efficient over potential occlusion regions. Together, our contributions realize a consumer-level 3D motion capture system that starts from off-the-shelf 2D poses, automatically calibrating the camera, estimating the mirror orientation, and subsequently lifting 2D keypoint detections to a 3D skeleton pose that conditions the mirror-aware NeRF. We empirically demonstrate the benefits of learning a body model and accounting for occlusion in challenging mirror scenes.
Video
Overview
Citation
Datasets
MirrorHuman-eval dataset
Human3.6M
Acknowledgements
We are very grateful for the helpful comments from Kosta Derpanis on an earlier draft of the paper, as well as insightful feedback from current and past members of the Visual AI Lab team.
We extend our gratitude to Frank Yu for graciously assisting with data pre-processing for our baseline evaluation.
We also thank Advanced Research Computing at the University of British Columbia and Compute Canada for providing computational resources.
Many thanks to the generous performers (Charissa Hoo, U Limn, Eclipse, Onken and Bobylien) who consented to the use of their videos in our research.
Music credits to Oleg Fedak via Pixabay.
The website template was borrowed from Michaël Gharbi.