This repository contains Python implementations of the beta-divergence loss, including implementations compatible with NumPy and PyTorch.
This library is written in Python and requires Python >= 3.9 (recommended). In addition to a working PyTorch installation, it relies on the following libraries and recommended version numbers:
To install the latest stable release, use pip:
$ pip install beta-divergence-metrics
The `numpybd.loss` module contains two beta-divergence function implementations compatible with NumPy and NumPy arrays: one general beta-divergence between two arrays, and a beta-divergence implementation specific to non-negative matrix factorization (NMF). Similarly, the `torchbd.loss` module contains two beta-divergence class implementations compatible with PyTorch and PyTorch tensors. The beta-divergence implementations can be imported as follows:
# Import beta-divergence loss implementations
from numpybd.loss import *
from torchbd.loss import *
To calculate the beta-divergence between a NumPy array `a` and a target or reference array `b`, use the `beta_div` loss function. The `beta_div` loss function can be used as follows:
# Calculate beta-divergence loss between array a and target array b
loss_val = beta_div(input=a, target=b, beta=0, reduction='mean')
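For example, a complete call on random positive arrays might look like the sketch below; the shapes and seed are illustrative, and the keyword arguments assume the signature shown above:

```python
import numpy as np

from numpybd.loss import beta_div

rng = np.random.default_rng(seed=0)

# Strictly positive arrays: for beta < 2 the divergence involves
# ratios and logarithms, so zeros can produce inf or nan values
a = rng.random((8, 5)) + 0.1
b = rng.random((8, 5)) + 0.1

# beta=1 gives the (generalized) Kullback-Leibler divergence;
# reduction='mean' averages the element-wise divergences
loss_val = beta_div(input=a, target=b, beta=1, reduction='mean')
print(loss_val)
```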
To calculate the beta-divergence between a PyTorch tensor `a` and a target or reference tensor `b`, use the `BetaDivLoss` loss class. The `BetaDivLoss` loss can be instantiated and used as follows:
# Instantiate beta-divergence loss object
loss_func = BetaDivLoss(beta=0, reduction='mean')
# Calculate beta-divergence loss between tensor a and target tensor b
loss_val = loss_func(input=a, target=b)
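A fuller sketch, assuming the constructor and call signature shown above, with illustrative tensor shapes; because the loss is a PyTorch module, it supports backpropagation:

```python
import torch

from torchbd.loss import BetaDivLoss

torch.manual_seed(0)

# Illustrative positive tensors; gradients are tracked on the input
# so the loss can be backpropagated
a = torch.rand(8, 5).add_(0.1).requires_grad_()
b = torch.rand(8, 5) + 0.1

loss_func = BetaDivLoss(beta=0, reduction='mean')
loss_val = loss_func(input=a, target=b)

# The loss is differentiable with respect to the input tensor
loss_val.backward()
print(loss_val.item(), a.grad.shape)
```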
To calculate the NMF-specific beta-divergence between a NumPy array of a data matrix `X` and the matrix product of a scores matrix `H` and a components matrix `W`, use the `nmf_beta_div` loss function. The `nmf_beta_div` loss function can be used as follows:
# Calculate beta-divergence loss between data matrix X (target or
# reference matrix) and matrix product of H and W
loss_val = nmf_beta_div(X=X, H=H, W=W, beta=0, reduction='mean')
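As a concrete sketch, assuming the factorization convention X ≈ H @ W implied by the description above (the matrix shapes and seed are illustrative):

```python
import numpy as np

from numpybd.loss import nmf_beta_div

rng = np.random.default_rng(seed=0)

n_samples, n_features, n_components = 100, 20, 5

# Non-negative data matrix X and factor matrices H (scores) and
# W (components), with shapes chosen so that X is approximated by H @ W
X = rng.random((n_samples, n_features)) + 0.1
H = rng.random((n_samples, n_components)) + 0.1
W = rng.random((n_components, n_features)) + 0.1

# beta=0 selects the Itakura-Saito divergence, recommended for NMF
loss_val = nmf_beta_div(X=X, H=H, W=W, beta=0, reduction='mean')
print(loss_val)
```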
To calculate the NMF-specific beta-divergence between a PyTorch tensor of a data matrix `X` and the matrix product of a scores matrix `H` and a components matrix `W`, use the `NMFBetaDivLoss` loss class. The `NMFBetaDivLoss` loss can be instantiated and used as follows:
# Instantiate NMF beta-divergence loss object
loss_func = NMFBetaDivLoss(beta=0, reduction='mean')
# Calculate beta-divergence loss between data matrix X (target or
# reference matrix) and matrix product of H and W
loss_val = loss_func(X=X, H=H, W=W)
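Because the loss is a PyTorch module, it can drive a simple gradient-based NMF directly. Below is a minimal sketch under the signature shown above; dedicated NMF solvers typically use multiplicative updates instead, and the clamping here is just a projected-gradient step to keep the factors non-negative:

```python
import torch

from torchbd.loss import NMFBetaDivLoss

torch.manual_seed(0)

# Non-negative data matrix and randomly initialized factors, X ≈ H @ W
X = torch.rand(100, 20) + 0.1
H = torch.rand(100, 5, requires_grad=True)
W = torch.rand(5, 20, requires_grad=True)

loss_func = NMFBetaDivLoss(beta=0, reduction='mean')
optimizer = torch.optim.Adam([H, W], lr=1e-2)

for _ in range(500):
    optimizer.zero_grad()
    loss = loss_func(X=X, H=H, W=W)
    loss.backward()
    optimizer.step()
    # Projected gradient step: clamp factors to stay non-negative
    with torch.no_grad():
        H.clamp_(min=1e-8)
        W.clamp_(min=1e-8)
```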
When instantiating beta-divergence loss objects, the value of `beta` should be chosen depending on the data type and application. For NMF applications, a beta value of 0 (Itakura-Saito divergence) is recommended. Integer values of beta correspond to the following divergences and loss functions (a quick numerical check follows the list):
- beta = 0: Itakura-Saito divergence
- beta = 1: Kullback-Leibler divergence
- beta = 2: mean-squared error
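To make this correspondence concrete, here is a hypothetical reference implementation of the standard element-wise beta-divergence (independent of the packaged functions above). It shows numerically that the beta = 2 case reduces to the squared error up to a constant factor of 1/2:

```python
import numpy as np

def beta_div_ref(x, y, beta):
    # Hypothetical reference implementation of the element-wise
    # beta-divergence (not the packaged functions above)
    x, y = np.asarray(x, dtype=float), np.asarray(y, dtype=float)
    if beta == 0:   # Itakura-Saito divergence
        return x / y - np.log(x / y) - 1.0
    if beta == 1:   # (generalized) Kullback-Leibler divergence
        return x * np.log(x / y) - x + y
    return (x**beta + (beta - 1) * y**beta
            - beta * x * y**(beta - 1)) / (beta * (beta - 1))

x = np.array([1.0, 2.0, 3.0])
y = np.array([1.5, 2.0, 2.5])

# At beta = 2 the general formula reduces to half the squared error,
# i.e. the mean-squared-error loss up to a constant factor
assert np.allclose(beta_div_ref(x, y, 2), 0.5 * (x - y) ** 2)
```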
Please use the GitHub issue tracker associated with this repository for issue tracking, filing bug reports, and asking general questions about the package or project.