Official PyTorch implementation of the paper "Event-Enhanced Blurry Video Super-Resolution" (AAAI 2025).
Authors: Dachun Kai📧️, Yueyi Zhang, Jin Wang, Zeyu Xiao, Zhiwei Xiong, Xiaoyan Sun, University of Science and Technology of China
Feel free to ask questions. If our work helps, please don't hesitate to give us a ⭐!
- 2025/04/17: Release pretrained models and test sets for quick testing
- 2025/01/07: Video demos released
- 2024/12/15: Initialize the repository
- 2024/12/09: 🎉 🎉 Our paper was accepted to AAAI 2025
Video demos:
- NCER_traffic_sign.mp4
- NCER_building.mp4
- GoPro_car.mp4
- GoPro_park.mp4
- Dependencies: Miniconda, CUDA Toolkit 11.1.1, torch 1.10.2+cu111, and torchvision 0.11.3+cu111.
- Run in Conda (Recommended)

  ```bash
  conda create -y -n ev-deblurvsr python=3.7
  conda activate ev-deblurvsr
  pip install torch-1.10.2+cu111-cp37-cp37m-linux_x86_64.whl
  pip install torchvision-0.11.3+cu111-cp37-cp37m-linux_x86_64.whl
  git clone https://github.com/DachunKai/Ev-DeblurVSR
  cd Ev-DeblurVSR && pip install -r requirements.txt && python setup.py develop
  ```
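After installation, a quick sanity check (a minimal sketch; run it inside the `ev-deblurvsr` environment) confirms that the pinned CUDA builds are active:

```python
# Sanity check for the conda environment (run inside `ev-deblurvsr`).
import torch
import torchvision

print(torch.__version__)          # expected: 1.10.2+cu111
print(torchvision.__version__)    # expected: 0.11.3+cu111
print(torch.cuda.is_available())  # should print True on a CUDA 11.1 machine
```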
- Run in Docker 👏

  Note: We use the same Docker image as our previous work, EvTexture.

  [Option 1] Directly pull the published Docker image we provide from Alibaba Cloud:

  ```bash
  docker pull registry.cn-hangzhou.aliyuncs.com/dachunkai/evtexture:latest
  ```

  [Option 2] Build the image yourself with the provided Dockerfile:

  ```bash
  cd EvTexture && docker build -t evtexture ./docker
  ```

  The pulled or self-built Docker image contains a complete conda environment named `evtexture`. After running the image, mount your data and work within this environment:

  ```bash
  source activate evtexture && cd EvTexture && python setup.py develop
  ```
- Download the pretrained models from (Releases / Baidu Cloud (n8hg)) and place them in `experiments/pretrained_models/EvDeblurVSR/`. The network architecture code is in `evdeblurvsr_arch.py`.
  - Synthetic dataset model:
    - `EvDeblurVSR_GoPro_BIx4.pth`: trained on the GoPro dataset with Blur-Sharp pairs and BI degradation for $4\times$ SR scale.
  - Real-world dataset model:
    - `EvDeblurVSR_NCER_BIx4.pth`: trained on the NCER dataset with Blur-Sharp pairs and BI degradation for $4\times$ SR scale.
- Download the preprocessed test sets (including events) for GoPro, BSD, and NCER from (Baidu Cloud (n8hg) / Google Drive), and place them in `datasets/`.
  - `GoPro_h5`: HDF5 files for the preprocessed GoPro test set.
  - `BSD_h5`: HDF5 files for the preprocessed BSD test set.
  - `NCER_h5`: HDF5 files for the preprocessed NCER test set.
- Run the following commands:
  - Test on GoPro for 4x Blurry VSR:

    ```bash
    ./scripts/dist_test.sh [num_gpus] options/test/EvDeblurVSR/test_EvDeblurVSR_GoPro_x4.yml
    ```

  - Test on BSD for 4x Blurry VSR:

    ```bash
    ./scripts/dist_test.sh [num_gpus] options/test/EvDeblurVSR/test_EvDeblurVSR_BSD_x4.yml
    ```

  - Test on NCER for 4x Blurry VSR:

    ```bash
    ./scripts/dist_test.sh [num_gpus] options/test/EvDeblurVSR/test_EvDeblurVSR_NCER_x4.yml
    ```

  This will generate the inference results in `results/`. The output results on the GoPro, BSD, and NCER datasets can be downloaded from (Releases / Baidu Cloud (n8hg)); see the PSNR sketch below for a quick self-check.
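If you want to sanity-check downloaded or generated outputs against ground truth yourself, a standard PSNR computation looks like the sketch below. This is only an illustration; the officially reported metrics come from BasicSR's evaluation code, and loading the image pairs is up to you.

```python
import numpy as np

def psnr(img1: np.ndarray, img2: np.ndarray, max_val: float = 255.0) -> float:
    """Peak signal-to-noise ratio between two images of identical shape."""
    mse = np.mean((img1.astype(np.float64) - img2.astype(np.float64)) ** 2)
    return float('inf') if mse == 0 else 10.0 * np.log10(max_val ** 2 / mse)
```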
- Test the number of parameters, runtime, and FLOPs:

  ```bash
  python test_scripts/test_params_runtime.py
  ```
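For reference, the parameter count itself reduces to a PyTorch one-liner, sketched below; the released script additionally measures runtime and FLOPs, which require the actual model and inputs. The `Conv2d` stand-in here is purely hypothetical.

```python
import torch

def count_parameters(model: torch.nn.Module) -> float:
    """Return the number of trainable parameters, in millions."""
    return sum(p.numel() for p in model.parameters() if p.requires_grad) / 1e6

# Hypothetical usage with a stand-in module; substitute the real network.
model = torch.nn.Conv2d(3, 64, kernel_size=3)
print(f'#Params: {count_parameters(model):.4f} M')
```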
- Both video and event data are required as input, as shown below. We package each video together with its event data into a single HDF5 file.
- Example: the structure of the `GOPR0384_11_00.h5` file from the GoPro dataset is shown below.

  ```
  GOPR0384_11_00.h5
  ├── images
  │   ├── 000000 # frame, ndarray, [H, W, C]
  │   ├── ...
  ├── vFwd
  │   ├── 000000 # inter-frame forward event voxel, ndarray, [Bins, H, W]
  │   ├── ...
  ├── vBwd
  │   ├── 000000 # inter-frame backward event voxel, ndarray, [Bins, H, W]
  │   ├── ...
  ├── vExpo
  │   ├── 000000 # intra-frame exposure event voxel, ndarray, [Bins, H, W]
  │   ├── ...
  ```
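To inspect such a file, a minimal `h5py` sketch follows. It assumes the GoPro test set layout shown above; note that, being inter-frame, the `vFwd`/`vBwd` groups may hold one entry fewer than `images`.

```python
import h5py
import numpy as np

# Minimal sketch: read the first frame and its event voxels from a packaged
# HDF5 file. The path below assumes the GoPro test set placed in datasets/.
with h5py.File('datasets/GoPro_h5/GOPR0384_11_00.h5', 'r') as f:
    keys = sorted(f['images'].keys())          # e.g. ['000000', '000001', ...]
    frame = np.asarray(f['images'][keys[0]])   # blurry frame, [H, W, C]
    v_fwd = np.asarray(f['vFwd'][keys[0]])     # forward inter-frame voxel, [Bins, H, W]
    v_bwd = np.asarray(f['vBwd'][keys[0]])     # backward inter-frame voxel, [Bins, H, W]
    v_expo = np.asarray(f['vExpo'][keys[0]])   # intra-frame exposure voxel, [Bins, H, W]
    print(frame.shape, v_fwd.shape, v_bwd.shape, v_expo.shape)
```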
If you find the code and pre-trained models useful for your research, please consider citing our paper. 😃
```bibtex
@inproceedings{kai2025event,
  title={Event-{E}nhanced {B}lurry {V}ideo {S}uper-{R}esolution},
  author={Kai, Dachun and Zhang, Yueyi and Wang, Jin and Xiao, Zeyu and Xiong, Zhiwei and Sun, Xiaoyan},
  booktitle={Proceedings of the AAAI Conference on Artificial Intelligence},
  volume={39},
  number={4},
  pages={4175--4183},
  year={2025}
}
```
If you run into any problems, please describe them in an issue or contact:
- Dachun Kai: dachunkai@mail.ustc.edu.cn
This project is released under the Apache 2.0 License. Our work builds significantly on our previous project, EvTexture. We also sincerely thank the developers of BasicSR, an open-source toolbox for image and video restoration tasks. Additionally, we appreciate the inspiration and code provided by BasicVSR++, RAFT, and event_utils.