Super Resolution Examples

SRGAN Architecture

Prepare Data and Pre-trained VGG

    1. Download the pre-trained VGG19 model weights here.
    2. Prepare high-resolution images for training.
    • In this experiment, I used images from the DIV2K - bicubic downscaling x4 competition, so the hyper-parameters in config.py (such as the number of epochs) are selected based on that dataset; if you switch to a larger dataset, you can reduce the number of epochs.
    • If you don't want to use the DIV2K dataset, you can also use Yahoo MirFlickr25k: simply download it with train_hr_imgs = tl.files.load_flickr25k_dataset(tag=None) in main.py (see the sketch after this list).
    • If you want to use your own images, set the path to your image folder via config.TRAIN.hr_img_path in config.py.
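
A minimal sketch of the MirFlickr25k option above, assuming the TensorLayer tl.files API that the instructions reference (the first call downloads the dataset automatically):

import tensorlayer as tl

# Downloads MirFlickr25k on first use and returns all images as the HR training set.
train_hr_imgs = tl.files.load_flickr25k_dataset(tag=None)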

Run

🔥🔥🔥🔥🔥🔥 You need to install TensorLayerX first!

🔥🔥🔥🔥🔥🔥 Please install TensorLayerX from source:

pip install git+https://github.com/tensorlayer/tensorlayerx.git 
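You can verify the installation with:

python -c "import tensorlayerx"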

Train

Set your image folder in config.py:

config.TRAIN.img_path = "your_image_folder/"

Your directory structure should look like this:

srgan/
    ├── config.py
    ├── srgan.py
    ├── train.py
    ├── vgg.py
    ├── model
    │     └── vgg19.npy
    └── DIV2K
          ├── DIV2K_train_HR
          ├── DIV2K_train_LR_bicubic
          ├── DIV2K_valid_HR
          └── DIV2K_valid_LR_bicubic

  • Start training.
python train.py

🔥 Modify one line of code in train.py to easily switch to any framework!

import os
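# The backend must be selected before tensorlayerx is imported.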
os.environ['TL_BACKEND'] = 'tensorflow'
# os.environ['TL_BACKEND'] = 'mindspore'
# os.environ['TL_BACKEND'] = 'paddle'
# os.environ['TL_BACKEND'] = 'pytorch'

🚧 We will support PyTorch as a backend soon.

Evaluation

🔥 We have trained SRGAN on the DIV2K dataset. 🔥 Download the model weights as follows.

Backend      | SRGAN_g            | SRGAN_d
TensorFlow   | Baidu, Googledrive | Baidu, Googledrive
PaddlePaddle | Baidu, Googledrive | Baidu, Googledrive
MindSpore    | 🚧 Coming soon!    | 🚧 Coming soon!
PyTorch      | 🚧 Coming soon!    | 🚧 Coming soon!

Download the weight files and put them under the folder srgan/models/.

Your directory structure should look like this:

srgan/
    ├── config.py
    ├── srgan.py
    ├── train.py
    ├── vgg.py
    ├── model
    │     └── vgg19.npy
    ├── DIV2K
    │     ├── DIV2K_train_HR
    │     ├── DIV2K_train_LR_bicubic
    │     ├── DIV2K_valid_HR
    │     └── DIV2K_valid_LR_bicubic
    └── models
          ├── g.npz  # You should rename the weights file.
          └── d.npz  # If you set os.environ['TL_BACKEND'] = 'tensorflow', you should rename srgan-g-tensorflow.npz to g.npz.
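
For example, under the TensorFlow backend the renaming could be done with a small Python sketch like this (srgan-g-tensorflow.npz is the name given in the note above; srgan-d-tensorflow.npz is the assumed parallel name for the discriminator file):

import os

# Rename the downloaded TensorFlow weight files to the names expected at eval time.
os.rename("models/srgan-g-tensorflow.npz", "models/g.npz")  # generator (name from the note above)
os.rename("models/srgan-d-tensorflow.npz", "models/d.npz")  # discriminator (assumed name)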

  • Start evaluation.
python train.py --mode=eval

Results will be saved under the folder srgan/samples/.

Results

Reference

Citation

If you find this project useful, we would be grateful if you cite the TensorLayer papers:

@article{tensorlayer2017,
  author  = {Dong, Hao and Supratak, Akara and Mai, Luo and Liu, Fangde and Oehmichen, Axel and Yu, Simiao and Guo, Yike},
  journal = {ACM Multimedia},
  title   = {{TensorLayer: A Versatile Library for Efficient Deep Learning Development}},
  url     = {http://tensorlayer.org},
  year    = {2017}
}

@inproceedings{tensorlayer2021,
  title={TensorLayer 3.0: A Deep Learning Library Compatible With Multiple Backends},
  author={Lai, Cheng and Han, Jiarong and Dong, Hao},
  booktitle={2021 IEEE International Conference on Multimedia \& Expo Workshops (ICMEW)},
  pages={1--3},
  year={2021},
  organization={IEEE}
}

Other Projects

Discussion

License

