This model is a deep convolutional neural network for emotion recognition in faces.
|Model|Download|Download (with sample test data)|ONNX version|Opset version|
|---|---|---|---|---|
|Emotion FERPlus|34 MB|31 MB|1.0|2|
|Emotion FERPlus|34 MB|31 MB|1.2|7|
|Emotion FERPlus|34 MB|31 MB|1.3|8|
|Emotion FERPlus int8|19 MB|18 MB|1.14|12|
Paper: "Training Deep Networks for Facial Expression Recognition with Crowd-Sourced Label Distribution" ([arXiv:1608.01041](https://arxiv.org/abs/1608.01041))
The model is trained on the FER+ annotations for the standard Emotion FER dataset, as described in the above paper.
The model is trained in CNTK with the cross-entropy training mode. The training source code is available in the [FERPlus](https://github.com/microsoft/FERPlus) repository.
Run Emotion_FERPlus in the browser: implemented with ONNX.js using Emotion_FERPlus version 1.2.
The model expects input of shape `(N x 1 x 64 x 64)`, where `N` is the batch size.
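As a quick sanity check of the expected layout, an input batch can be constructed directly with NumPy (the array name below is illustrative):

```python
import numpy as np

# a zero-filled batch of one 64x64 grayscale image, NCHW layout
batch_size = 1
dummy_input = np.zeros((batch_size, 1, 64, 64), dtype=np.float32)
print(dummy_input.shape)  # (1, 1, 64, 64)
```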
Given a path `image_path` to the image you would like to score:
```python
import numpy as np
from PIL import Image

def preprocess(image_path):
    input_shape = (1, 1, 64, 64)
    # load the image and convert to single-channel grayscale
    img = Image.open(image_path).convert('L')
    # Image.ANTIALIAS was removed in Pillow 10; LANCZOS is the same filter
    img = img.resize((64, 64), Image.LANCZOS)
    img_data = np.array(img).astype(np.float32)
    img_data = img_data.reshape(input_shape)
    return img_data
```
The model outputs a `(1 x 8)` array of scores corresponding to the 8 emotion classes, where the labels map as follows:
```python
emotion_table = {'neutral': 0, 'happiness': 1, 'surprise': 2, 'sadness': 3,
                 'anger': 4, 'disgust': 5, 'fear': 6, 'contempt': 7}
```
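To report a human-readable label from a predicted class ID, the table can be inverted; this small convenience mapping is not part of the original snippet:

```python
emotion_table = {'neutral': 0, 'happiness': 1, 'surprise': 2, 'sadness': 3,
                 'anger': 4, 'disgust': 5, 'fear': 6, 'contempt': 7}

# invert the mapping: class ID -> label
index_to_emotion = {v: k for k, v in emotion_table.items()}
print(index_to_emotion[1])  # happiness
```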
Route the model output through a softmax function to map the raw activations to probabilities across the 8 classes.
```python
import numpy as np

def softmax(scores):
    # numerically stable softmax: shift by the max before exponentiating
    e = np.exp(scores - np.max(scores))
    return e / e.sum()

def postprocess(scores):
    '''
    This function takes the scores generated by the network and returns
    the class IDs in decreasing order of probability.
    '''
    prob = softmax(scores)
    prob = np.squeeze(prob)
    classes = np.argsort(prob)[::-1]
    return classes
```
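The postprocessing can be exercised end to end without running the model, using a synthetic score vector (the values below are made up for illustration):

```python
import numpy as np

def softmax(scores):
    # numerically stable softmax
    e = np.exp(scores - np.max(scores))
    return e / e.sum()

def postprocess(scores):
    prob = np.squeeze(softmax(scores))
    return np.argsort(prob)[::-1]

# synthetic (1 x 8) raw scores; index 1 ('happiness') has the largest value
scores = np.array([[0.5, 3.2, 1.1, -0.4, 0.0, -1.2, 0.3, -0.8]])
ranked = postprocess(scores)
print(ranked[0])  # 1
```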
Sets of sample input and output files are provided as serialized protobuf TensorProtos (`.pb`), stored in the folders `test_data_set_*/`.
Emotion FERPlus int8 is obtained by quantizing the fp32 Emotion FERPlus model. We use Intel® Neural Compressor with the onnxruntime backend to perform quantization. View the instructions to understand how to use Intel® Neural Compressor for quantization.
Download the model from the ONNX Model Zoo:

```shell
wget https://github.com/onnx/models/raw/main/vision/body_analysis/emotion_ferplus/model/emotion-ferplus-8.onnx
```
Convert the opset version to 12 for broader quantization support:

```python
import onnx
from onnx import version_converter

model = onnx.load('emotion-ferplus-8.onnx')
model = version_converter.convert_version(model, 12)
onnx.save_model(model, 'emotion-ferplus-12.onnx')
```
```shell
cd neural-compressor/examples/onnxrt/body_analysis/onnx_model_zoo/emotion_ferplus/quantization/ptq_static

# --input_model: model path as *.onnx
bash run_tuning.sh --input_model=path/to/model \
                   --dataset_location=/path/to/data \
                   --output_model=path/to/save
```
MIT