Releases: khalil-research/PyEPO
v0.3.9
🎉 We're happy to announce the PyEPO 0.3.9 release. 🎉
We're thrilled to bring you an exciting new feature in this release:
Thanks to @NoahJSchutte, we are excited to announce a new class, optDatasetKNN, in dataset.py, which implements a k-nearest-neighbors (kNN) robust loss for decision-focused learning. The class takes two parameters, k and weight.
This feature is based on the paper Robust Losses for Decision-Focused Learning by Noah Schutte, which has been accepted at IJCAI. You can explore this feature in our Google Colab tutorial for hands-on guidance.
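Here is a minimal usage sketch; the shortest-path task and the values chosen for k and weight are illustrative assumptions, not documented defaults:

```python
import pyepo
from torch.utils.data import DataLoader

# toy task: shortest path on a 5x5 grid with synthetic features and costs
x, c = pyepo.data.shortestpath.genData(num_data=1000, num_features=5,
                                       grid=(5, 5), deg=4, noise_width=0.5)
optmodel = pyepo.model.grb.shortestPathModel(grid=(5, 5))

# kNN robust dataset: labels are built from each sample's k nearest
# neighbors in feature space, mixed with its own cost via `weight`
# (k=10 and weight=0.5 are illustrative values, not documented defaults)
dataset = pyepo.data.dataset.optDatasetKNN(optmodel, x, c, k=10, weight=0.5)
loader = DataLoader(dataset, batch_size=32, shuffle=True)
```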
We're eager for you to test these out and share your feedback with us. As always, thank you for being a part of our growing community!
v0.3.8a
v0.3.8
🎉 We're happy to announce the PyEPO 0.3.8 release. 🎉
We're thrilled to bring you some exciting new features in this release:
We add a data generator pyepo.data.portfolio.genData for portfolio optimization and the corresponding Gurobi model pyepo.model.grb.portfolioModel. See details in our docs for the data and the optimization model.
This synthetic dataset comes from Smart “Predict, then Optimize”, with detailed implementation guidelines provided in Appendix D of the supplemental material.
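A quick sketch of the generator and model together; the genData return order, the portfolioModel signature, and all numeric values here (including the risk bound gamma) are assumptions for illustration, so check the docs for the authoritative API:

```python
import pyepo

# synthetic portfolio data: asset covariance, features, and expected returns
cov, x, r = pyepo.data.portfolio.genData(num_data=1000, num_features=4,
                                         num_assets=50, deg=4,
                                         noise_level=1.0, seed=135)

# Gurobi mean-variance portfolio model; gamma bounds the portfolio risk
# (2.25 is an illustrative choice, not a recommended setting)
optmodel = pyepo.model.grb.portfolioModel(50, cov, gamma=2.25)

# optDataset solves each instance once to provide training labels
dataset = pyepo.data.dataset.optDataset(optmodel, x, r)
```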
Additionally, we have addressed several minor bugs to ensure a smoother user experience.
We're eager for you to test these out and share your feedback with us. As always, thank you for being a part of our growing community!
v0.3.7
🎉 We're happy to announce the PyEPO 0.3.7 release. 🎉
We're thrilled to bring you some exciting new features in this release:
We add an autograd module pyepo.func.adaptiveImplicitMLE, which uses the perturb-and-MAP framework and adaptively chooses the interpolation step size. The module samples noise perturbations from a Sum-of-Gamma distribution and then interpolates the loss function for a more precise finite-difference approximation. The method comes from the paper Adaptive Perturbation-Based Gradient Estimation for Discrete Latent Variable Models. See details in our docs.
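A minimal training-step sketch; the shortest-path task, the linear predictor, all hyperparameter values, and the dot-product decision loss are illustrative assumptions:

```python
import torch
import pyepo

# toy task: shortest path on a 5x5 grid (40 arcs); values are illustrative
x, c = pyepo.data.shortestpath.genData(100, 5, (5, 5), deg=4, noise_width=0.5)
optmodel = pyepo.model.grb.shortestPathModel(grid=(5, 5))
dataset = pyepo.data.dataset.optDataset(optmodel, x, c)
loader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True)

predictor = torch.nn.Linear(5, 40)  # 40 = number of arcs on a 5x5 grid
optimizer = torch.optim.Adam(predictor.parameters(), lr=1e-2)
# adaptive I-MLE: Sum-of-Gamma noise; the interpolation step size is chosen
# adaptively, so no fixed lambd is passed (unlike pyepo.func.implicitMLE)
aimle = pyepo.func.adaptiveImplicitMLE(optmodel, n_samples=10, sigma=1.0,
                                       processes=1)

for feats, costs, sols, objs in loader:
    optimizer.zero_grad()
    w_hat = aimle(predictor(feats))   # differentiable solution estimate
    loss = (w_hat * costs).sum()      # illustrative decision loss
    loss.backward()
    optimizer.step()
```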
We're eager for you to test these out and share your feedback with us. As always, thank you for being a part of our growing community!
v0.3.6
v0.3.5
🎉 We're happy to announce the PyEPO 0.3.5 release. 🎉
We're thrilled to bring you some exciting new features in this release:
- We add an autograd module pyepo.func.implicitMLE, which uses the perturb-and-MAP framework. This module samples noise perturbations from a Sum-of-Gamma distribution and then interpolates the loss function for a more precise finite-difference approximation. The method comes from the paper Implicit MLE: Backpropagating Through Discrete Exponential Family Distributions. See details in our docs and the sketch below.
- PyEPO is now compatible with the COPT (Cardinal Optimizer) API, one of the fastest solvers for various optimization problems.
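A minimal training-step sketch of the fixed-step variant; the task, predictor, hyperparameter values, and the dot-product decision loss are illustrative assumptions:

```python
import torch
import pyepo

# toy task: shortest path on a 5x5 grid (40 arcs); values are illustrative
x, c = pyepo.data.shortestpath.genData(100, 5, (5, 5), deg=4, noise_width=0.5)
optmodel = pyepo.model.grb.shortestPathModel(grid=(5, 5))
dataset = pyepo.data.dataset.optDataset(optmodel, x, c)
loader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True)

predictor = torch.nn.Linear(5, 40)  # 40 = number of arcs on a 5x5 grid
optimizer = torch.optim.Adam(predictor.parameters(), lr=1e-2)
# I-MLE: n_samples Sum-of-Gamma noise draws with scale sigma; lambd is the
# fixed interpolation step for the finite-difference gradient
imle = pyepo.func.implicitMLE(optmodel, n_samples=10, sigma=1.0, lambd=10,
                              processes=1)

for feats, costs, sols, objs in loader:
    optimizer.zero_grad()
    w_hat = imle(predictor(feats))    # differentiable solution estimate
    loss = (w_hat * costs).sum()      # illustrative decision loss
    loss.backward()
    optimizer.step()
```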
We're eager for you to test these out and share your feedback with us. As always, thank you for being a part of our growing community!
v0.3.3
We're happy to announce the 0.3.3 release.
We fix a sign bug in pyepo.func.NCE and add the modules pyepo.func.contrastiveMAP and pyepo.func.negativeIdentity. See details in our docs.
These methods come from the papers Contrastive losses and solution caching for predict-and-optimize and Backpropagation through combinatorial algorithms: Identity with projection works.
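A minimal sketch of the new modules in a training step; the task, predictor, hyperparameters, and the forward signatures follow our reading of the docs, so treat the specifics as assumptions:

```python
import torch
import pyepo

x, c = pyepo.data.shortestpath.genData(100, 5, (5, 5), deg=4, noise_width=0.5)
optmodel = pyepo.model.grb.shortestPathModel(grid=(5, 5))
dataset = pyepo.data.dataset.optDataset(optmodel, x, c)
loader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True)
predictor = torch.nn.Linear(5, 40)
optimizer = torch.optim.Adam(predictor.parameters(), lr=1e-2)

# contrastiveMAP contrasts the predicted cost of the true solution against a
# solution cache; solve_ratio sets how often the solver refreshes the cache
cmap = pyepo.func.contrastiveMAP(optmodel, processes=1,
                                 solve_ratio=0.05, dataset=dataset)
# negativeIdentity solves in the forward pass and uses -I as the Jacobian
nid = pyepo.func.negativeIdentity(optmodel, processes=1)

for feats, costs, sols, objs in loader:
    optimizer.zero_grad()
    cp = predictor(feats)
    loss = cmap(cp, sols)             # contrastive loss vs true solutions
    # alternative with negative identity (illustrative decision loss):
    # loss = (nid(cp) * costs).sum()
    loss.backward()
    optimizer.step()
```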
v0.3.0
We're happy to announce the 0.3.0 release.
Thanks to @ijskar for adding new end-to-end predict-then-optimize methods: noise contrastive estimation and learning to rank. We add the modules pyepo.func.NCE, pyepo.func.pointwiseLTR, pyepo.func.pairwiseLTR, and pyepo.func.listwiseLTR. See details in our docs.
These methods come from the papers Contrastive losses and solution caching for predict-and-optimize and Decision-focused learning: through the lens of learning to rank.
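A minimal sketch of the learning-to-rank modules; the task, predictor, and hyperparameters are illustrative, and the (predicted cost, true cost) forward signature follows our reading of the docs:

```python
import torch
import pyepo

x, c = pyepo.data.shortestpath.genData(100, 5, (5, 5), deg=4, noise_width=0.5)
optmodel = pyepo.model.grb.shortestPathModel(grid=(5, 5))
dataset = pyepo.data.dataset.optDataset(optmodel, x, c)
loader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True)
predictor = torch.nn.Linear(5, 40)
optimizer = torch.optim.Adam(predictor.parameters(), lr=1e-2)

# learning-to-rank losses score a cache of feasible solutions; the dataset
# seeds the cache, and solve_ratio sets how often new solutions are added
ltr = pyepo.func.listwiseLTR(optmodel, processes=1,
                             solve_ratio=0.05, dataset=dataset)
# pyepo.func.pointwiseLTR and pyepo.func.pairwiseLTR are constructed alike

for feats, costs, sols, objs in loader:
    optimizer.zero_grad()
    loss = ltr(predictor(feats), costs)  # rank cache under pred vs true costs
    loss.backward()
    optimizer.step()
```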
v0.2.4
v0.2.0
We're happy to announce the 0.2.0 release.
We add two end-to-end predict-then-optimize methods with stochastic perturbation, Differentiable Perturbed Optimizers and the perturbed Fenchel-Young loss, to PyEPO.
You can now use the PyTorch modules pyepo.func.perturbedOpt and pyepo.func.perturbedFenchelYoung. See details in our docs.
Both approaches come from Google Research's awesome project Differentiable Optimizers with Perturbations in TensorFlow and the corresponding paper Learning with differentiable perturbed optimizers.
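A minimal sketch of both modules in a training step; the task, predictor, hyperparameter values, and the dot-product decision loss are illustrative assumptions:

```python
import torch
import pyepo

x, c = pyepo.data.shortestpath.genData(100, 5, (5, 5), deg=4, noise_width=0.5)
optmodel = pyepo.model.grb.shortestPathModel(grid=(5, 5))
dataset = pyepo.data.dataset.optDataset(optmodel, x, c)
loader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True)
predictor = torch.nn.Linear(5, 40)
optimizer = torch.optim.Adam(predictor.parameters(), lr=1e-2)

# both modules draw n_samples random perturbations with scale sigma
dpo = pyepo.func.perturbedOpt(optmodel, n_samples=10, sigma=1.0, processes=1)
pfy = pyepo.func.perturbedFenchelYoung(optmodel, n_samples=10, sigma=1.0,
                                       processes=1)

for feats, costs, sols, objs in loader:
    optimizer.zero_grad()
    cp = predictor(feats)
    w_hat = dpo(cp)                   # expected solution under perturbation
    loss = (w_hat * costs).sum()      # illustrative decision loss
    # or train directly with the Fenchel-Young loss:
    # loss = pfy(cp, sols)
    loss.backward()
    optimizer.step()
```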