Orientation UV rPPG

A self-contained Python package providing a video processing module similar to the one used in the paper Orientation-conditioned Facial Texture Mapping for Video-based Facial Remote Photoplethysmography Estimation. For the full experimental codebase used to obtain the results in the paper, please check out the experiments branch.

🔧 Installation

Prerequisites

  • Python 3.10 or higher
  • CUDA-compatible GPU (optional, but recommended for performance)

Install from GitHub

pip install git+https://github.com/csiro-internal/orientation-uv-rppg.git@package
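
After installing, a quick check can confirm the package imports and whether a GPU is visible. This is a minimal sketch, assuming the package exposes the OrientationMaskedTextureSpaceVideoProcessor class shown in the Quick Start below:

import torch
import orientation_uv_rppg as ouv

# Confirm the import succeeded and report whether a CUDA GPU is available
print("Imported:", ouv.__name__)
print("CUDA available:", torch.cuda.is_available())
print("Processor class:", ouv.OrientationMaskedTextureSpaceVideoProcessor.__name__)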

💻 Quick Start

Basic Usage

The simplest way to use the package:

import torch
import orientation_uv_rppg as ouv

# Create video processor with custom parameters
processor = ouv.OrientationMaskedTextureSpaceVideoProcessor(
    min_detection_confidence=0.7,    # Higher confidence threshold
    min_tracking_confidence=0.8,     # More stable tracking
    device="cuda",                   # Use GPU acceleration
    output_size=128,                 # Higher resolution output
    degree_threshold=45.0            # Stricter orientation filtering
)

# Load your video frames as a (T, H, W, C) RGB tensor
# (random data is used here only as a placeholder; use real video frames in practice)
frames = torch.randn(200, 720, 1280, 3)  # e.g. 200 HD frames

# Process the video
result = processor(frames)
print(f"Input: {frames.shape}")
print(f"Output: {result.shape}")  # Should be [200, 128, 128, 3]

Please see the examples/ directory for usage examples and visualizations.
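
For processing a real video file rather than placeholder tensors, a sketch along the following lines may be useful. It assumes torchvision is installed, that the processor accepts a (T, H, W, C) RGB frame tensor as in the Quick Start above, and uses a hypothetical file path:

import torch
import torchvision
import orientation_uv_rppg as ouv

# Read a video from disk; read_video returns (T, H, W, C) uint8 RGB frames
frames, _, info = torchvision.io.read_video("face_video.mp4", pts_unit="sec")

# Create a processor, falling back to CPU if no GPU is available
processor = ouv.OrientationMaskedTextureSpaceVideoProcessor(
    device="cuda" if torch.cuda.is_available() else "cpu",
    output_size=128,
)

# Map the frames into orientation-masked facial texture space
textures = processor(frames)
print(frames.shape, "->", textures.shape)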

📜 Citation

If you find this useful, please cite our work.

@inproceedings{cantrill2024orientationconditionedfacialtexturemapping,
    title={Orientation-conditioned Facial Texture Mapping for Video-based Facial Remote Photoplethysmography Estimation},
    author={Sam Cantrill and David Ahmedt-Aristizabal and Lars Petersson and Hanna Suominen and Mohammad Ali Armin},
    booktitle={Proceedings of the IEEE/CVF Computer Vision and Pattern Recognition Workshops},
    year={2024},
    url={https://openaccess.thecvf.com/content/CVPR2024W/CVPM/papers/Cantrill_Orientation-conditioned_Facial_Texture_Mapping_for_Video-based_Facial_Remote_Photoplethysmography_Estimation_CVPRW_2024_paper.pdf},
}
