Radek Daněček · Michael J. Black · Timo Bolkart
This repository is the official implementation of the CVPR 2022 paper EMOCA: Emotion-Driven Monocular Face Capture and Animation.
Top row: input images. Middle row: coarse shape reconstruction. Bottom row: reconstruction with detailed displacements.
EMOCA takes a single in-the-wild image as input and reconstructs a 3D face with sufficient facial expression detail to convey the emotional state of the input image. EMOCA advances the state of the art in in-the-wild monocular face reconstruction, with an emphasis on accurate capture of emotional content. The official project page is here.
EMOCA v2 is now out. Complete the installation steps below and go to EMOCA to test the demos.
Compared to the original model, it produces:
- Much better lip and eye alignment
- Much better lip articulation
You can find the comparison video here.
This is achieved by:
- Using a subset of mediapipe landmarks for mouth, eyes and eyebrows (as opposed to FAN landmarks that EMOCA v1 uses)
- Using absolute landmark loss in combination with the relative losses (as opposed to only relative landmark losses in EMOCA v1)
- Incorporating a perceptual lip-reading loss, inspired by SPECTRE. Big shout-out to these guys!
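The combination of absolute and relative landmark supervision described above can be sketched as follows. This is illustrative only: the function name, weighting, and exact formulation are assumptions, not the actual EMOCA v2 training code.

```python
import numpy as np

def landmark_losses(pred, gt):
    """Illustrative sketch of absolute + relative 2D landmark losses.

    `pred`, `gt`: (N, 2) arrays of predicted / ground-truth landmarks.
    The real EMOCA v2 losses and their weights live in the training code;
    this only shows why the two terms complement each other.
    """
    # Absolute term: mean L1 distance between corresponding landmarks.
    # Penalizes any global misalignment (e.g. a shifted mouth).
    absolute = np.abs(pred - gt).mean()

    # Relative term: compare offsets between consecutive landmarks,
    # which is invariant to a global translation of all landmarks.
    rel_pred = np.diff(pred, axis=0)
    rel_gt = np.diff(gt, axis=0)
    relative = np.abs(rel_pred - rel_gt).mean()
    return absolute, relative
```

Note how a uniformly translated prediction has zero relative loss but nonzero absolute loss, which is why EMOCA v1's purely relative supervision could leave lips and eyes globally misaligned.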
You will have to upgrade to the new environment in order to use EMOCA v2. Please follow the steps below to install the package. Then, go to the EMOCA subfolder and follow the steps described there.
While using the new version of this repo is recommended, you can still access the old release here.
The training and testing script for EMOCA can be found in this subfolder:
- Install conda
- Clone this repo
Pytorch3D installation (which is part of the requirements file) can unfortunately be tricky and machine specific. EMOCA was developed with Pytorch3D 0.6.2, and the previous command includes its installation from source (to ensure compatibility with Pytorch and CUDA). If it fails to compile, you can try to find another way to install Pytorch3D.
Notes:
- EMOCA was developed with Pytorch 1.12.1 and Pytorch3d 0.6.2 running on CUDA toolkit 11.1.1 with cuDNN 8.0.5. If for some reason installation of these fails on your machine (which can happen), feel free to install these dependencies another way. The most important thing is that the versions of Pytorch and Pytorch3D match. The version of CUDA is probably less important.
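If you install the dependencies your own way, a small check like the following can help confirm you ended up with a Pytorch/Pytorch3D pair that is known to build together. The table is hypothetical: only the 1.12.1 / 0.6.2 pair comes from this README; extend it with whatever combinations work on your machine.

```python
# Pairs of (torch, pytorch3d) versions known to build together.
# Only ("1.12.1", "0.6.2") is taken from this README; the rest of the
# table is yours to fill in for your own setup.
KNOWN_GOOD = {("1.12.1", "0.6.2")}

def versions_match(torch_version: str, pytorch3d_version: str) -> bool:
    """Return True if this (torch, pytorch3d) pair is in the known-good table."""
    # Strip any local build suffix such as '+cu111'.
    torch_version = torch_version.split("+")[0]
    return (torch_version, pytorch3d_version) in KNOWN_GOOD
```

On a machine with both packages installed you would call it with `torch.__version__` and `pytorch3d.__version__`.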
- Some people experience import issues with opencv-python from either pip or conda. If the OpenCV version installed by the automated script does not work for you (i.e. it does not import without errors), try updating with
pip install -U opencv-python
or installing it through other means. The install script installs `opencv-python~=4.5.1.48` via `pip`.
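A quick way to verify that your OpenCV install is usable is to try importing it and printing the version. This helper is just a convenience sketch, not part of the repo:

```python
import importlib

def cv2_imports_cleanly() -> bool:
    """Try to import cv2 and report the result; True if the import worked."""
    try:
        cv2 = importlib.import_module("cv2")
    except Exception as exc:  # broken installs often raise ImportError here
        print(f"cv2 failed to import: {exc}")
        return False
    print(f"cv2 {cv2.__version__} imported fine")
    return True
```

If this returns False, try the `pip install -U opencv-python` fix above.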
singularity build --sandbox /tmp/conda_sandbox docker://valerianrfourel/mpi_trial:emoca
mkdir /tmp/home
singularity shell --writable --nv -f -c --pwd=/tmp/home -H /tmp/home /tmp/conda_sandbox/
pip install gdown
gdown https://drive.google.com/uc?id=1Vac7NWGjYfIWTyvDNvuIKBtj5sd8cAlW
bash starting.sh
#!/bin/bash
eval "$(conda shell.bash hook)"
conda activate work39
# Place the command you want to run below
python /tmp/home/emoca/gdl_apps/EMOCA/demos/test_emoca_on_images.py --model_name EMOCA_v2_lr_cos_1.5 \
    --output_folder /container/output --input_folder /container/input
singularity build ./emoca.sif /tmp/conda_sandbox/
singularity shell --writable-tmpfs --nv --pwd=/tmp/home -H /tmp/home /tmp/conda_sandbox/
singularity exec --bind /home/vfourel/Corpus:/container/input,/home/vfourel/EmocaCorpus:/container/output emoca.sif ./run_in_conda.sh
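If you run the container over many input/output folder pairs, a small wrapper that assembles the `singularity exec` invocation can keep the bind paths consistent. This is a hypothetical convenience helper, not part of the repo; the `/container/input` and `/container/output` targets match what the demo command above expects.

```python
def singularity_exec_cmd(input_dir: str, output_dir: str,
                         image: str = "emoca.sif",
                         script: str = "./run_in_conda.sh") -> list:
    """Build the argv for the `singularity exec` call shown above.

    `input_dir` / `output_dir` are host paths; they are bound to
    /container/input and /container/output inside the container.
    """
    bind = f"{input_dir}:/container/input,{output_dir}:/container/output"
    return ["singularity", "exec", "--bind", bind, image, script]

# To actually run it (on a machine with Singularity installed), pass the
# list to subprocess.run(..., check=True).
```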
This code and model are available for non-commercial scientific research purposes as defined in the LICENSE file. By downloading and using the code and model you agree to the terms of this license.
There are many people who deserve credit. These include, but are not limited to: Yao Feng and Haiwen Feng for their original implementation of DECA, and Antoine Toisoul and colleagues for EmoNet.