MelGAN

Unofficial PyTorch implementation of MelGAN vocoder (training in progress)

Key Features

  • MelGAN is lighter, faster, and better at generalizing to unseen speakers than WaveGlow.
  • This repository uses the same mel-spectrogram function as NVIDIA/tacotron2, so it can be used directly to convert output from NVIDIA's tacotron2 into raw audio.
  • TODO: Planning to publish pretrained model via PyTorch Hub.
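
Since the vocoder consumes tacotron2-style mel-spectrograms, it helps to know the frame geometry. Below is a minimal sketch of the frame arithmetic, assuming the usual tacotron2 defaults (22050 Hz sample rate, hop length 256, 80 mel bins) — verify these against config/default.yaml, as they are assumptions here:

```python
# Assumed tacotron2-style STFT parameters (verify against config/default.yaml).
SAMPLE_RATE = 22050
HOP_LENGTH = 256   # samples between successive mel frames
N_MELS = 80        # mel bins per frame

def n_frames(n_samples: int, hop: int = HOP_LENGTH) -> int:
    """Number of mel frames a centered STFT produces for a waveform."""
    return n_samples // hop + 1

# One second of audio at 22050 Hz yields 87 frames of 80 mel bins each.
print(n_frames(SAMPLE_RATE))  # 87
```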

Prerequisites

Tested on Python 3.6

pip install -r requirements.txt

Prepare Dataset

  • Download a dataset for training. This can be any set of WAV files with a 22050 Hz sample rate (e.g. LJSpeech, which was used in the paper).
  • Preprocess: python preprocess.py -c config/default.yaml -d [data's root path]
  • Edit the configuration YAML file.
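
Before preprocessing, it can be worth checking that every file really is 22050 Hz, since a mismatched sample rate silently changes the mel frame rate. This helper is not part of the repository — just a stdlib-only sketch:

```python
import pathlib
import wave

def check_sample_rates(root: str, expected: int = 22050) -> list:
    """Return the WAV files under `root` whose sample rate differs from `expected`."""
    bad = []
    for path in pathlib.Path(root).rglob("*.wav"):
        with wave.open(str(path), "rb") as f:
            if f.getframerate() != expected:
                bad.append(path)
    return bad
```

Run it on your data root before preprocess.py; an empty return list means all files match.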

Train & Tensorboard

  • python trainer.py -c [config yaml file] -n [name of the run]
  • tensorboard --logdir logs/

Inference

  • python inference.py -p [checkpoint path] -i [input mel path]
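
For intuition about what inference does with a mel input: the generator upsamples each mel frame by the hop length (256x) back into waveform samples. The toy stand-in below illustrates only that shape transformation — it is not the actual MelGAN architecture, which stacks several transposed-conv upsampling blocks with residual stacks:

```python
import torch
import torch.nn as nn

class ToyGenerator(nn.Module):
    """Toy stand-in for a mel-to-waveform generator: one transposed conv
    that upsamples 80-bin mel frames by the hop length (256)."""

    def __init__(self, n_mels: int = 80, hop: int = 256):
        super().__init__()
        self.up = nn.ConvTranspose1d(n_mels, 1, kernel_size=hop * 2,
                                     stride=hop, padding=hop // 2)

    def forward(self, mel: torch.Tensor) -> torch.Tensor:
        # mel: (batch, n_mels, frames) -> audio: (batch, 1, frames * hop)
        return torch.tanh(self.up(mel))

mel = torch.randn(1, 80, 100)          # 100 mel frames
audio = ToyGenerator()(mel)
print(audio.shape)                      # torch.Size([1, 1, 25600])
```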

Results

See audio samples at: http://swpark.me/melgan/.

The loss curves of G and D may look unusual at first. TODO: add tensorboard image here

Implementation Authors

License

BSD 3-Clause License.

Useful resources
