
DeblurGAN-v2: Deblurring (Orders-of-Magnitude) Faster and Better

Code for the paper "DeblurGAN-v2: Deblurring (Orders-of-Magnitude) Faster and Better"

Orest Kupyn, Tetiana Martyniuk, Junru Wu, Zhangyang Wang

In ICCV 2019

Overview

We present a new end-to-end generative adversarial network (GAN) for single-image motion deblurring, named DeblurGAN-v2, which considerably boosts state-of-the-art deblurring efficiency, quality, and flexibility. DeblurGAN-v2 is based on a relativistic conditional GAN with a double-scale discriminator. For the first time, we introduce the Feature Pyramid Network into deblurring, as a core building block in the generator of DeblurGAN-v2. It can flexibly work with a wide range of backbones to navigate the trade-off between performance and efficiency. Plugging in a sophisticated backbone (e.g., Inception-ResNet-v2) leads to solid state-of-the-art deblurring. Meanwhile, with lightweight backbones (e.g., MobileNet and its variants), DeblurGAN-v2 is 10-100 times faster than the nearest competitors while maintaining close to state-of-the-art results, opening the option of real-time video deblurring. We demonstrate that DeblurGAN-v2 achieves very competitive performance on several popular benchmarks, in terms of deblurring quality (both objective and subjective) as well as efficiency. In addition, we show that the architecture is effective for general image restoration tasks.

DeblurGAN-v2 Architecture

Datasets

The datasets for training can be downloaded via the links below:

Training

Command

python train.py

The training script loads its configuration from config/config.yaml.
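
For reference, a minimal sketch of inspecting that configuration before training, assuming config/config.yaml is a standard YAML file; only the ['model']['g_name'] key is mentioned elsewhere in this README, and the other contents depend on the repository:

```python
import yaml

# Load the training configuration that train.py reads by default.
with open("config/config.yaml") as f:
    config = yaml.safe_load(f)

# The generator backbone is selected via ['model']['g_name'] (see the Testing section).
print("Generator backbone:", config["model"]["g_name"])
```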

Tensorboard visualization
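
Assuming the training code writes TensorBoard event files to a local log directory (the exact location depends on the training code and config), progress can be monitored with a standard TensorBoard launch such as `tensorboard --logdir <LOG_DIR>`.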

Testing

To test on a single image, run:

python predict.py IMAGE_NAME.jpg

By default, the pretrained model used by Predictor is named 'best_fpn.h5'; this can be changed in the code via the 'weights_path' argument. The default weights assume the fpn_inception backbone. If you want to use weights pretrained with a different backbone, specify that backbone under ['model']['g_name'] in config/config.yaml as well.
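
A hedged sketch of what programmatic use might look like, assuming Predictor can be imported from predict.py and called on a loaded image; apart from the 'weights_path' argument and the 'best_fpn.h5' default mentioned above, the names and call signature here are assumptions, not the repository's exact API:

```python
import cv2
from predict import Predictor  # Predictor class referenced in this README (import path assumed)

# weights_path defaults to 'best_fpn.h5' as noted above.
predictor = Predictor(weights_path="best_fpn.h5")

img = cv2.imread("IMAGE_NAME.jpg")            # blurry input image (hypothetical file name)
deblurred = predictor(img)                    # assumed callable interface returning the restored image
cv2.imwrite("IMAGE_NAME_deblurred.jpg", deblurred)
```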

Pre-trained models

| Dataset | G Model | D Model | Loss Type | PSNR / SSIM | Link |
|---|---|---|---|---|---|
| GoPro Test Dataset | InceptionResNet-v2 | double_gan | ragan-ls | 29.55 / 0.934 | https://drive.google.com/open?id=1UXcsRVW-6KF23_TNzxw-xC0SzaMfXOaR |
| GoPro Test Dataset | MobileNet | double_gan | ragan-ls | 28.17 / 0.925 | https://drive.google.com/open?id=1JhnT4BBeKBBSLqTo6UsJ13HeBXevarrU |
| GoPro Test Dataset | MobileNet-DSC | double_gan | ragan-ls | 28.03 / 0.922 | |
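
For reference, PSNR/SSIM values like those reported above can be computed between a deblurred output and its sharp ground truth, e.g. with scikit-image. This is a generic sketch with hypothetical file names, not the repository's evaluation script:

```python
import cv2
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

# Compare a deblurred result against the corresponding sharp ground-truth frame.
pred = cv2.imread("IMAGE_NAME_deblurred.jpg")   # hypothetical deblurred output
gt = cv2.imread("IMAGE_NAME_sharp.jpg")         # hypothetical ground-truth image

psnr = peak_signal_noise_ratio(gt, pred, data_range=255)
ssim = structural_similarity(gt, pred, channel_axis=2, data_range=255)
print(f"PSNR: {psnr:.2f}  SSIM: {ssim:.3f}")
```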

Parent Repository

The code was taken from https://github.com/KupynOrest/RestoreGAN, a repository that contains flexible pipelines for different image restoration tasks.

Citation

If you use this code for your research, please cite our paper.

```
@InProceedings{Kupyn_2019_ICCV,
author = {Orest Kupyn and Tetiana Martyniuk and Junru Wu and Zhangyang Wang},
title = {DeblurGAN-v2: Deblurring (Orders-of-Magnitude) Faster and Better},
booktitle = {The IEEE International Conference on Computer Vision (ICCV)},
month = {Oct},
year = {2019}
}
```