A self-contained Python package providing a video processing module similar to the one used in the paper *Orientation-conditioned Facial Texture Mapping for Video-based Facial Remote Photoplethysmography Estimation*. For the full experimental codebase used to obtain the results in the paper, please see the `experiments` branch.
Requirements:

- Python 3.10 or higher
- CUDA-compatible GPU (optional, but recommended for performance; see the device check below)
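If you are unsure whether a GPU will be available at runtime, a minimal sketch like the following selects the device automatically. `torch.cuda.is_available()` is standard PyTorch; passing the resulting string as the processor's `device` argument is an assumption based on the quick-start example below.

```python
import torch

# Fall back to CPU when no CUDA-compatible GPU is available.
device = "cuda" if torch.cuda.is_available() else "cpu"
```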
Install the package directly from the repository:

```bash
pip install git+https://github.com/csiro-internal/orientation-uv-rppg.git@package
```
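A quick import check confirms the installation succeeded (the module name `orientation_uv_rppg` follows the quick-start example below):

```bash
python -c "import orientation_uv_rppg"
```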
The simplest way to use the package:

```python
import torch
import orientation_uv_rppg as ouv

# Create a video processor with custom parameters
processor = ouv.OrientationMaskedTextureSpaceVideoProcessor(
    min_detection_confidence=0.7,  # higher face-detection confidence threshold
    min_tracking_confidence=0.8,   # more stable landmark tracking
    device="cuda",                 # use GPU acceleration
    output_size=128,               # higher-resolution output
    degree_threshold=45.0,         # stricter orientation filtering
)

# Load your video frames as a (T, H, W, C) tensor.
# Random noise is used here only as a placeholder; real frames
# containing a face are needed for meaningful output.
frames = torch.randn(200, 720, 1280, 3)  # 200 HD video frames

# Process the video
result = processor(frames)

print(f"Input: {frames.shape}")
print(f"Output: {result.shape}")  # should be [200, 128, 128, 3]
```
Please see the `examples/` directory for usage examples and visualizations.
If you find this package useful, please cite our work:
```bibtex
@inproceedings{cantrill2024orientationconditionedfacialtexturemapping,
  title={Orientation-conditioned Facial Texture Mapping for Video-based Facial Remote Photoplethysmography Estimation},
  author={Sam Cantrill and David Ahmedt-Aristizabal and Lars Petersson and Hanna Suominen and Mohammad Ali Armin},
  booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops},
  year={2024},
  url={https://openaccess.thecvf.com/content/CVPR2024W/CVPM/papers/Cantrill_Orientation-conditioned_Facial_Texture_Mapping_for_Video-based_Facial_Remote_Photoplethysmography_Estimation_CVPRW_2024_paper.pdf},
}
```