Packing and Expanding (PAE)

Official implementation of "Increasingly Packing Multiple Facial-Informatics Modules in A Unified Deep-Learning Model via Lifelong Learning" (Poster)

Created by Steven C. Y. Hung, Jia-Hong Lee, Timmy S. T. Wan, Chein-Hung Chen, Yi-Ming Chan, Chu-Song Chen

The code is released for academic research use only. For commercial use, please contact Prof. Chu-Song Chen (chusong@csie.ntu.edu.tw).


Introduction

Simultaneously running multiple modules is a key requirement of a smart multimedia system for facial applications, including face recognition, facial expression understanding, and gender identification. To integrate them effectively, we introduce a continual-learning approach that learns new tasks without forgetting old ones. Unlike previous methods, whose models grow monotonically in size, our approach keeps the model compact throughout continual learning. The proposed packing-and-expanding method is effective and easy to implement: it iteratively shrinks (prunes) and enlarges the model to integrate new functions. The integrated multitask model achieves comparable accuracy at only 39.9% of the original size.
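
To make the mechanism concrete, here is a minimal NumPy sketch of one pack-and-expand cycle on a single flattened layer. This is an illustration only, not the TensorFlow implementation in this repository; the pruning ratio, layer size, and function names are invented for the example.

import numpy as np

def pack(weights, free_mask, task_mask, prune_ratio=0.5):
    # "Packing": prune the smallest-magnitude weights owned by the current
    # task and return the freed slots to the free pool.
    threshold = np.quantile(np.abs(weights[task_mask]), prune_ratio)
    pruned = task_mask & (np.abs(weights) < threshold)
    weights[pruned] = 0.0
    return weights, free_mask | pruned, task_mask & ~pruned

def expand(weights, masks, free_mask, new_units=16):
    # "Expanding": when the free pool is empty, append new weight slots so
    # the next task has capacity to train on.
    weights = np.concatenate([weights, 0.01 * np.random.randn(new_units)])
    masks = [np.concatenate([m, np.zeros(new_units, dtype=bool)]) for m in masks]
    free_mask = np.concatenate([free_mask, np.ones(new_units, dtype=bool)])
    return weights, masks, free_mask

# Task 1 initially owns the whole layer; packing frees about half of it,
# task 2 trains on the freed slots, and the layer expands when nothing is left.
w = np.random.randn(100)
task1 = np.ones(100, dtype=bool)
free = np.zeros(100, dtype=bool)
w, free, task1 = pack(w, free, task1)
task2, free = free.copy(), np.zeros_like(free)
if not free.any():
    w, (task1, task2), free = expand(w, [task1, task2], free)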

Citing Paper

Please cite the following paper if this code helps your research:

@inproceedings{hung2019increasingly,
    title={Increasingly Packing Multiple Facial-Informatics Modules in A Unified Deep-Learning Model via Lifelong Learning},
    author={Hung, Steven CY and Lee, Jia-Hong and Wan, Timmy ST and Chen, Chein-Hung and Chan, Yi-Ming and Chen, Chu-Song},
    booktitle={Proceedings of the 2019 on International Conference on Multimedia Retrieval},
    pages={339--343},
    year={2019},
    organization={ACM}
}

Prerequisites

(1) If your operating system is Ubuntu 18.04, run the following commands to downgrade your compiler:

$ sudo apt install gcc-6 g++-6
$ sudo ln -s /usr/bin/gcc-6 /usr/local/bin/gcc
$ sudo ln -s /usr/bin/g++-6 /usr/local/bin/g++

(2) Set the environment variables for CUDA 9.0, install cuDNN 7, and set up a Python virtual environment with TensorFlow 1.7:

$ vi ~/.bashrc
export LD_LIBRARY_PATH=/usr/local/cuda-9.0/lib64:$LD_LIBRARY_PATH
export PATH=/usr/local/cuda-9.0/bin:$PATH
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/local/cuda-9.0/extras/CUPTI/lib64
$ source ~/.bashrc
$ tar -xzvf cudnn-9.0-linux-x64-v7.tgz
$ sudo cp cuda/include/cudnn.h /usr/local/cuda-9.0/include
$ sudo cp cuda/lib64/libcudnn* /usr/local/cuda-9.0/lib64
$ cd /usr/local/cuda-9.0/lib64
$ sudo ln -sf libcudnn.so.7.0.5 libcudnn.so.7
$ sudo ln -sf libcudnn.so.7 libcudnn.so
$ sudo ldconfig
$ sudo apt update
$ sudo apt install python3-dev python3-pip
$ sudo pip3 install -U virtualenv
$ virtualenv --system-site-packages -p python3 ./tfvenv
$ source ./tfvenv/bin/activate
$ pip install tensorflow-gpu==1.7
  • Install the other required Python libraries:
$ pip install -r requirement.txt
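
A quick sanity check that TensorFlow sees the GPU (run inside the activated tfvenv; with CUDA 9.0 and cuDNN 7 set up correctly it should print 1.7.0 and True):

import tensorflow as tf
print("TensorFlow:", tf.__version__)
print("GPU available:", tf.test.is_gpu_available())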

Usage

Clone the PAE repository:

$ git clone --recursive https://github.com/ivclab/PAE.git

Experiment One (Face Verification, Gender and Age Modules)

  1. Download the VGGFace2 (image size 182x182, file size 148 GB), LFW (image size 160x160), and Adience (image size 182x182) datasets, which have already been aligned by MTCNN:
$ cd data
$ python download_aligned_LFW.py
$ python download_aligned_Adienceage.py
$ python download_aligned_Adiencegender.py
  2. Download all epochs of the PAENet models and the baseline models for all experiments (the file size of official_checkpoint.zip is 79 GB).

  3. Run inference with the PAENet model (a toy sketch of per-task masked inference follows this list):

$ bash src/inference_first_task.sh
$ bash src/inference_experiment1_task.sh
  4. Train with the Packing-and-Expanding strategy, and train the baseline models:
  • Train the first task (face verification) using the pretrained model from FaceNet. The new PAENet models will be stored in the pae_checkpoint directory and the results will be stored in the csv directory.
$ bash src/first_task_script.sh
  • Train the second and third tasks (age and gender classification) from the previous model, initializing from the weights of the previous task. The new PAENet models will be stored in the pae_checkpoint directory and the results will be stored in the csv directory.
$ bash src/experiment1_PAE.sh
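
For intuition about the inference step above: PAENet keeps one set of shared weights plus a record of which weights belong to which task, and inference for task k uses the weights of tasks 1 through k while ignoring the rest. A toy NumPy sketch of a single masked dense layer (illustrative only; in the actual model the per-task masks are disjoint, learned during training, and span many layers):

import numpy as np

def forward(x, W, task_masks, task_id):
    # A PAE-style layer: task `task_id` reuses the frozen weights of all
    # earlier tasks plus the weights it claimed for itself.
    usable = np.zeros(W.shape, dtype=bool)
    for m in task_masks[: task_id + 1]:
        usable |= m
    return np.maximum(x @ (W * usable), 0.0)  # ReLU over the masked weights

W = np.random.randn(8, 4)
task_masks = [np.random.rand(8, 4) < 0.3 for _ in range(3)]  # disjoint in the real model
x = np.random.randn(1, 8)
y_face = forward(x, W, task_masks, task_id=0)  # task 1: face verification
y_age = forward(x, W, task_masks, task_id=1)   # task 2 also reuses task-1 weights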

Experiment Two (Face Verification, Expression and Gender Modules)

  1. Download the VGGFace2 (image size 182x182, file size 148 GB), LFW (image size 160x160), AffectNet (image size 182x182), and FotW/ChaLearn (training data mixed with the IMDB-WIKI dataset, image size 182x182) datasets, which have already been aligned by MTCNN:
$ cd data
$ python download_aligned_LFW.py
$ python download_aligned_chalearn.py
$ python download_aligned_affectnet.py
  2. Download all epochs of the PAENet models and the baseline models for all experiments (the file size of official_checkpoint.zip is 79 GB).

  3. Run inference with the PAENet model:

$ bash src/inference_first_task_ex2.sh
$ bash src/inference_experiment2_task.sh
  4. Train with the Packing-and-Expanding strategy, and train the baseline models:
  • Train the first task (face verification) using the pretrained model from FaceNet. The new PAENet models will be stored in the pae_checkpoint directory and the results will be stored in the csv directory.
$ bash src/first_task_script.sh
  • Train the second and third tasks (expression and gender classification) from the previous model, initializing from the weights of the previous task. The new PAENet models will be stored in the pae_checkpoint directory and the results will be stored in the csv directory.
$ bash src/experiment2_PAE.sh
  • If the value of Pruned rate is NaN in the new task's csv file, the network has no free capacity left; expand the network to make room for the new task:
# add_new_task_script.sh <GPU_ID> <TASK_NAME> <TASK_ID> <MODEL_FOLDER_NAME> <pretrained_model>
# <GPU_ID>: which GPU you want to use.
# <TASK_NAME>: the new task's name. e.g. chalearn/gender
# <TASK_ID>: the new task's id. e.g. 3
# <MODEL_FOLDER_NAME>: the directory path where the new model will be stored.
# <pretrained_model>: the directory path where the previous task's model is stored. Refer to the csv file to select the checkpoint with the best accuracy for the previous task (see the snippet after this block).
$ bash src/add_new_task_script.sh 0 chalearn/gender 3 experiment2/chalearn/gender/expand pae_checkpoint/experiment2/emotion/weighted_loss/model-.ckpt-86
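
To choose the <pretrained_model> argument, one way to scan a result file for the best-accuracy epoch, using only the Python standard library (the path and column names here are hypothetical; adjust them to the actual headers of the files in the csv directory):

import csv

with open("csv/experiment2/emotion/weighted_loss.csv") as f:
    rows = list(csv.DictReader(f))
best = max(rows, key=lambda r: float(r["accuracy"]))
print("best epoch:", best["epoch"], "accuracy:", best["accuracy"])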

We have enhanced PAE into CPG (Compacting, Picking and Growing), which was published at NeurIPS 2019.

Reference Resource

Contact

Please feel free to send suggestions or comments to Steven C. Y. Hung (brent12052003@gmail.com), Jia-Hong Lee (honghenry.lee@gmail.com), Timmy S. T. Wan (40347905s@gmail.com), Chein-Hung Chen (redsword26@gmail.com), Yi-Ming Chan (yiming@iis.sinica.edu.tw), or Chu-Song Chen (chusong@csie.ntu.edu.tw).
