DE-SRFREN

Video Restoration Processing Pipeline

We've released a public Colab notebook! Use the link below to try it:

Open In Colab

As the name suggests, this is a video restoration pipeline that pulls together various cutting-edge technologies into one processing pipeline, for videos, to rule them all. The pipeline borrows AI techniques from multiple contributors; these techniques are listed on our releases page. If you like our project, please give us a star, and don't forget to star the other projects used by the video restoration pipeline 🤠

NOTE: Only one video can be processed at a time!

Installation

Setting up the environment

# Make sure you have git installed
git clone https://github.com/cliffordkleinsr/DE-SRFREN.git
cd DE-SRFREN/v0.0.3
# Make sure you have Python and PyTorch installed -.-"
# Install basicsr 
pip install basicsr 
# Install facexlib 
# We use face detection and face restoration helper in the facexlib package
pip install facexlib  # face parsing net and ResNet-based face detection
pip install realesrgan  
pip install gfpgan
pip install -r requirements.txt

As a side note, make sure you have PyTorch compiled with CUDA binaries installed; otherwise, inference speed will be greatly impacted.
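
You can verify that your PyTorch build actually sees the GPU with a quick check like the one below (a minimal sketch, independent of this repository):

# check_cuda.py -- hypothetical helper script, not part of the repository
import torch

print("CUDA available:", torch.cuda.is_available())
print("CUDA build version:", torch.version.cuda)
if torch.cuda.is_available():
    print("GPU:", torch.cuda.get_device_name(0))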

USAGE


  • Basic argument structure:
-i or --input, your input video directory
-o or --video_output, your video output
-n, model name
--ffmpeg_bin, path to ffmpeg.exe
--ffprobe_bin, path to ffprobe.exe
--batch, enable batched inference on the extracted frames
--batches, number of batches (default is 4)
-h or --help, for help with arguments

Note: The arguments --ffmpeg_bin and --ffprobe_bin should only be used if you have not added the ffmpeg binaries to your environment variables. Batched inference is controlled by the --batch and --batches parameters (the default is 4 batches); lower is better, but the value must not be <= 1.
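
For example, a batched run might look like the following (an illustrative command; run -h to confirm the exact flags your version accepts):

python inference.py -i inputs/your_video.mp4 --face_enhance --batch --batches 2 --suffix outx2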

  • For quick inference on Windows

Use this if ffmpeg is not installed on your PATH:

python inference.py -i inputs/your_video.mp4 --ffmpeg_bin ffmpeg/bin/ffmpeg.exe --ffprobe_bin ffmpeg/bin/ffprobe.exe --face_enhance --suffix outx2 

Note: --face_enhance only works with videos of real people. If you are working with anime/animation (cartoon) characters, use:

python inference.py -i inputs/your_anime_video.mp4 --ffmpeg_bin ffmpeg/bin/ffmpeg.exe --ffprobe_bin ffmpeg/bin/ffprobe.exe -n realesr-animevideov3 --suffix outx2

Use this if ffmpeg is installed on your Windows environment PATH:

python inference.py -i inputs/your_video.mp4 --face_enhance --suffix outx2

Again, --face_enhance only works with videos of real people; for anime/animation content use:

python inference.py -i inputs/your_anime_video.mp4 -n realesr-animevideov3 --suffix outx2
  • Quick inference in a Colab/Linux environment is similar to Windows, but avoid the --ffmpeg_bin and --ffprobe_bin arguments when the binaries are already installed.
  • The Vector Quantized Codebook is deprecated and thus can only be used with v0.0.1.

RESULTS

ORIGINAL

sher10s.1.mp4

PROCESSED WITH DE-SRFREN

processed.mp4
Side-by-side comparison frames: Original vs. Processed.

Performance Optimizations

  1. Moved the final scaling and uint8 quantization to the GPU, reducing CPU and main-memory bandwidth consumption. 2.5x speed-up (see the sketch after this list).
  2. Instruct FFmpeg to output RGB frames instead of BGR, so there is no need to swap channels.
  3. Batched inference (controlled by the --batch and --batches parameters, default is 4).
  4. Instruct torch to make tensors contiguous after the BCHW -> BHWC transform on the GPU, so there is no need to copy the buffer before writing to FFmpeg. Reduced output IO time by 10x.
  5. Use the NVENC pipeline, when available, to decode and encode the images when piping inputs.
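
In PyTorch terms, optimizations 1, 2, 4 and 5 look roughly like the sketch below. This is illustrative only: the tensor name, frame size and FFmpeg flags are assumptions rather than the pipeline's actual code, and it needs a CUDA GPU plus an FFmpeg build with NVENC support.

import subprocess
import torch

# Hypothetical super-resolved output: float tensor in [0, 1],
# shape (batch, channels, height, width), still on the GPU.
upscaled = torch.rand(4, 3, 1080, 1920, device="cuda")

# (1) Scale and quantize to uint8 on the GPU instead of the CPU.
frames = (upscaled.clamp_(0, 1) * 255.0).round().to(torch.uint8)

# (4) BCHW -> BHWC, made contiguous on the GPU so the bytes can be
# written straight to the FFmpeg pipe without an extra buffer copy.
frames = frames.permute(0, 2, 3, 1).contiguous()

# (2) + (5) Pipe raw RGB frames into FFmpeg and encode with NVENC.
encoder = subprocess.Popen(
    ["ffmpeg", "-y",
     "-f", "rawvideo", "-pix_fmt", "rgb24", "-s", "1920x1080", "-r", "24",
     "-i", "-",
     "-c:v", "h264_nvenc", "out.mp4"],
    stdin=subprocess.PIPE,
)
encoder.stdin.write(frames.cpu().numpy().tobytes())
encoder.stdin.close()
encoder.wait()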

Open tasks

  1. Take the input video and turn it into image frames (see the sketch after this list)
  2. Super-resolve each image
  3. Restore the faces in each frame
  4. Merge the frames into an H.264-encoded MP4
  5. Speed up inference (uses the NVENC pipe)
  6. Old-video scratch detection
  7. Global scene restoration
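
Tasks 1 and 4 amount to reading frames from the source video and re-encoding the restored frames into an H.264 MP4. Below is a minimal sketch using imageio (cited at the bottom of this page); the file names are placeholders and this is not the repository's actual code.

import imageio

# Read the source video frame by frame (requires the imageio-ffmpeg plugin).
reader = imageio.get_reader("inputs/your_video.mp4")
fps = reader.get_meta_data()["fps"]

# Write the (restored) frames back out as an H.264-encoded MP4.
writer = imageio.get_writer("results/your_video_outx2.mp4", fps=fps, codec="libx264")
for frame in reader:
    restored = frame  # placeholder: super-resolution / face restoration happens here
    writer.append_data(restored)
writer.close()
reader.close()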

Feature Requests

  1. Frame generation (24 to 60 FPS)
  2. More support for different video formats
  3. Colorize black and white images
  4. Lossless decoding and encoding
  5. Sound restoration

BibTeX

@InProceedings{clifford2023desrfren,
   author = {Clifford Njoroge},
   title  = {DE-SRFREN: Video Restoration Processing Pipeline},
   year   = {2023}
 }

Citation

Real-ESRGAN

@InProceedings{wang2021realesrgan,
   author    = {Xintao Wang and Liangbin Xie and Chao Dong and Ying Shan},
   title     = {Real-ESRGAN: Training Real-World Blind Super-Resolution with Pure Synthetic Data},
   booktitle = {International Conference on Computer Vision Workshops (ICCVW)},
   year      = {2021}
}

VQFR

@inproceedings{gu2022vqfr,
  title={VQFR: Blind Face Restoration with Vector-Quantized Dictionary and Parallel Decoder},
  author={Gu, Yuchao and Wang, Xintao and Xie, Liangbin and Dong, Chao and Li, Gen and Shan, Ying and Cheng, Ming-Ming},
  year={2022},
  booktitle={ECCV}
}

GFPGAN

@InProceedings{wang2021gfpgan,
    author = {Xintao Wang and Yu Li and Honglun Zhang and Ying Shan},
    title = {Towards Real-World Blind Face Restoration with Generative Facial Prior},
    booktitle={The IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
    year = {2021}
}

IMAGEIO

DOI