deploy: ad150a6
ejolly committed Sep 19, 2023
1 parent f3d440c commit 23b963d
Showing 29 changed files with 155 additions and 2,034 deletions.
11 changes: 11 additions & 0 deletions _sources/pages/changelog.md
@@ -1,5 +1,16 @@
# Change Log

# 0.7.0

## New Identity Detector

- Py-feat now includes an **identity detector** via [facenet](https://github.com/timesler/facenet-pytorch). This works by projecting each detected face in an image or video frame into a 512d embedding space and clustering these embeddings using their cosine similarity (default threshold = 0.8).
- `Fex` objects include an extra column containing the identity label for each face (accessible via `Fex.identities`) as well as additional columns for each of the 512 embedding dimensions (accessible via `Fex.identity_embeddings`). Embeddings can be useful for downstream model-training tasks.
- **Note:** identity embeddings are affected by facial expressions to some degree, and while our default threshold of 0.8 works well in many cases, you should adjust it to suit your particular data.
- To save computation time, identity labels can be recomputed **after** detection using the `.compute_identities(threshold=new_threshold)` method on `Fex` data objects. By default this returns a new `Fex` object with new labels in the `'Identity'` column, but it can also modify the object in-place.
- You can also adjust the threshold at detection time using the `face_identity_threshold` keyword argument to `Detector.detect_image()` or `Detector.detect_video()`.
- Recomputing identity labels with a different threshold **does not** change the 512d embeddings; it only adjusts how clustering is performed to derive the identity labels.
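The threshold-based clustering described above can be sketched in plain NumPy: a face joins an existing identity cluster when the cosine similarity of its embedding to that cluster meets the threshold. This is a greedy toy illustration with 2-d stand-ins for the 512-d facenet embeddings, not py-feat's actual implementation:

```python
import numpy as np

def cluster_identities(embeddings, threshold=0.8):
    """Greedy clustering: each face joins the first cluster whose
    representative embedding has cosine similarity >= threshold."""
    # Unit-normalize rows so dot products are cosine similarities
    normed = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    labels, reps = [], []  # reps holds one representative per cluster
    for vec in normed:
        sims = [float(vec @ r) for r in reps]
        if sims and max(sims) >= threshold:
            labels.append(int(np.argmax(sims)))
        else:
            reps.append(vec)          # start a new identity cluster
            labels.append(len(reps) - 1)
    return labels

# Toy 2-d "embeddings": two nearly identical faces and one very different one
faces = np.array([[1.0, 0.0], [0.99, 0.1], [0.0, 1.0]])
print(cluster_identities(faces, threshold=0.8))  # [0, 0, 1]
```

Raising the threshold splits borderline faces into separate identities, which is exactly what recomputing labels with a new threshold does to the final `'Identity'` column.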

# 0.6.1

## Notes
14 changes: 11 additions & 3 deletions _sources/pages/models.md
@@ -14,35 +14,43 @@ detector = Detector(emotion_model='svm')
Model names are case-insensitive: `'resmasknet' == 'ResMaskNet'`
```
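A case-insensitive lookup like the one described above can be sketched as follows. This is an illustrative snippet with a hypothetical `normalize_model_name` helper and model set, not py-feat's internals:

```python
# Hypothetical registry of known emotion models (illustrative only)
AVAILABLE_EMOTION_MODELS = {"resmasknet", "svm"}

def normalize_model_name(name):
    """Lower-case the requested name and check it against known models."""
    name = name.lower()
    if name not in AVAILABLE_EMOTION_MODELS:
        raise ValueError(f"Unknown emotion model: {name!r}")
    return name

print(normalize_model_name("ResMaskNet"))  # resmasknet
```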

## Face detection

- **`retinaface`: Single-stage dense face localisation in the wild** ([Deng et al., 2019](https://arxiv.org/pdf/1905.00641v2.pdf))
- `mtcnn`: Multi-task cascaded convolutional networks ([Zhang et al., 2016](https://arxiv.org/pdf/1604.02878.pdf); [Zhang et al., 2020](https://ieeexplore.ieee.org/document/9239720))
- `faceboxes`: A CPU real-time face detector with high accuracy ([Zhang et al., 2018](https://arxiv.org/pdf/1708.05234v4.pdf))
- `img2pose`: Face Alignment and Detection via 6DoF, Face Pose Estimation ([Albiero et al., 2020](https://arxiv.org/pdf/2012.07791v2.pdf)). Performs simultaneous (one-shot) face detection and head pose estimation
- `img2pose-c`: A 'constrained' version of the above model, fine-tuned on images of frontal faces with pitch, roll, and yaw measures in the range of (-90, 90) degrees. Shows lower performance on difficult face-detection tasks, but state-of-the-art performance on face pose estimation for frontal faces

## Facial landmark detection

- **`mobilefacenet`: Efficient CNNs for accurate real-time face verification on mobile devices** ([Chen et al., 2018](https://arxiv.org/ftp/arxiv/papers/1804/1804.07573.pdf))
- `mobilenet`: Efficient convolutional neural networks for mobile vision applications ([Howard et al., 2017](https://arxiv.org/pdf/1704.04861v1.pdf))
- `pfld`: Practical Facial Landmark Detector ([Guo et al., 2019](https://arxiv.org/pdf/1902.10859.pdf))

## Facial Pose estimation

- **`img2pose`: Face Alignment and Detection via 6DoF, Face Pose Estimation** ([Albiero et al., 2020](https://arxiv.org/pdf/2012.07791v2.pdf)). Performs simultaneous (one-shot) face detection and head pose estimation
- `img2pose-c`: A 'constrained' version of the above model, fine-tuned on images of frontal faces with pitch, roll, and yaw measures in the range of (-90, 90) degrees. Shows lower performance on difficult face-detection tasks, but state-of-the-art performance on head pose estimation for frontal faces.

## Action Unit detection

- **`xgb`: XGBoost Classifier model trained on Histogram of Oriented Gradients\*** extracted from BP4D, DISFA, CK+, UNBC-McMaster shoulder pain, and AFF-Wild2 datasets
- `svm`: SVM model trained on Histogram of Oriented Gradients\*\* extracted from BP4D, DISFA, CK+, UNBC-McMaster shoulder pain, and AFF-Wild2 datasets

```{note}
\*For AU07, our `xgb` detector was trained with hinge loss instead of the cross-entropy loss used for the other AUs, as this yielded substantially better detection performance given the labeled data available for this AU. This means that while it returns continuous probability predictions, these are more likely to appear binary in practice (i.e. be 0 or 1) and should be interpreted as the *proportion of decision trees with a detection* rather than the *average decision-tree confidence* like other AU values.
```
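The distinction above can be illustrated with a toy ensemble (hypothetical per-tree votes, not the actual `xgb` model):

```python
import numpy as np

# Hypothetical AU07 decisions from an ensemble of 10 trees:
# each tree votes "detected" (1) or "not detected" (0).
tree_votes = np.array([1, 1, 1, 0, 1, 1, 1, 1, 0, 1])

# The reported value is the proportion of trees with a detection,
# so it tends toward 0 or 1 rather than a smooth confidence score.
proportion = tree_votes.mean()
print(proportion)  # 0.8
```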

```{note}
\*\* Our `svm` detector uses the [`LinearSVC`](https://scikit-learn.org/stable/modules/generated/sklearn.svm.LinearSVC.html#sklearn.svm.LinearSVC) implementation from `sklearn` and thus returns **binary values** for each AU rather than probabilities. If your use-case requires continuous-valued detections, we recommend the `xgb` detector instead.
```
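Both AU models operate on Histogram of Oriented Gradients features. The core ingredient can be sketched in plain NumPy; this is a minimal, unsigned-gradient illustration with a hypothetical `orientation_histogram` helper, not py-feat's actual extraction pipeline:

```python
import numpy as np

def orientation_histogram(patch, n_bins=8):
    """Histogram of gradient orientations, weighted by gradient
    magnitude -- the core ingredient of a HOG descriptor."""
    gy, gx = np.gradient(patch.astype(float))      # vertical, horizontal gradients
    magnitude = np.hypot(gx, gy)
    # Unsigned orientations in [0, 180) degrees
    angle = np.degrees(np.arctan2(gy, gx)) % 180.0
    bins = np.minimum((angle / (180.0 / n_bins)).astype(int), n_bins - 1)
    hist = np.zeros(n_bins)
    for b in range(n_bins):
        hist[b] = magnitude[bins == b].sum()
    # L2-normalize so patches of different contrast are comparable
    return hist / (np.linalg.norm(hist) + 1e-9)

# A patch whose intensity increases straight down: all gradients point at 90°
patch = np.tile(np.arange(8.0), (8, 1)).T
print(orientation_histogram(patch))  # peak in bin 4 (the 90° bin)
```

A full HOG descriptor concatenates such histograms over a grid of cells; the resulting feature vector is what the `xgb` and `svm` classifiers consume.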

## Emotion detection

- **`resmasknet`: Facial expression recognition using residual masking network** by ([Pham et al., 2020](https://ieeexplore.ieee.org/document/9411919))
- `svm`: SVM model trained on Histogram of Oriented Gradients extracted from ExpW, CK+, and JAFFE datasets

## Identity detection

- **`facenet`: A unified embedding for face recognition and clustering** ([Schroff et al., 2015](https://arxiv.org/abs/1503.03832)). Inception ResNet (V1) pretrained on VGGFace2 and CASIA-Webface.
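Comparing two faces with such embeddings reduces to cosine similarity. Below is a generic sketch with made-up low-dimensional vectors standing in for the 512-d facenet space:

```python
import numpy as np

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Two similar "embeddings" (same person) vs. a dissimilar one
same_person = cosine_similarity([0.2, 0.9, 0.1], [0.25, 0.85, 0.12])
different   = cosine_similarity([0.2, 0.9, 0.1], [0.9, -0.1, 0.4])
print(same_person > 0.8, different > 0.8)  # True False
```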
1 change: 1 addition & 0 deletions basic_tutorials/01_basics.html
@@ -233,6 +233,7 @@




<li class="toctree-l1"><a class="reference external" href="https://github.com/cosanlab/py-feat">GitHub Repository</a></li>
</ul>

Each of the remaining changed files shows the same single-line addition to the sidebar navigation:

1 change: 1 addition & 0 deletions basic_tutorials/02_detector_imgs.html
1 change: 1 addition & 0 deletions basic_tutorials/03_detector_vids.html
1 change: 1 addition & 0 deletions basic_tutorials/04_plotting.html
1 change: 1 addition & 0 deletions basic_tutorials/05_fex_analysis.html
1 change: 1 addition & 0 deletions extra_tutorials/06_trainAUvisModel.html
1 change: 1 addition & 0 deletions extra_tutorials/07_extract_labels_and_landmarks.html
1 change: 1 addition & 0 deletions extra_tutorials/08_train_hogs.html
1 change: 1 addition & 0 deletions extra_tutorials/09_test_bbox.html
1 change: 1 addition & 0 deletions extra_tutorials/10_test_lands.html
1 change: 1 addition & 0 deletions extra_tutorials/11_test_Poseinfo.html
1 change: 1 addition & 0 deletions extra_tutorials/12_test_aus.html
1 change: 1 addition & 0 deletions extra_tutorials/13_test_emos.html
