- `np.Inf` was deprecated long ago and is now gone in NumPy 2.0.
- Update Black, isort, PyLint and flake8.
- Add step to run tests in CI with only bare dependencies.
- Fix `_XLA_AVAILABLE` import with old versions of torchmetrics.
- Fix WandB tests.
- `FBeta` uses the non-deterministic torch function `bincount`. You can now make it deterministic either by passing the `make_deterministic` argument to the `FBeta` class or by using one of the PyTorch functions `torch.set_deterministic_debug_mode` or `torch.use_deterministic_algorithms`. Note that this might make your code slower. (See the sketch below.)
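A minimal sketch of the two options; the `make_deterministic` keyword comes from the note above, the rest is standard PyTorch:

```python
import torch
from poutyne import FBeta

# Option 1: ask FBeta itself for a deterministic implementation.
fbeta = FBeta(make_deterministic=True)

# Option 2: force deterministic algorithms globally in PyTorch
# (may slow down training).
torch.use_deterministic_algorithms(True)
```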
- Add `run_id` and `terminate_on_end` arguments to `MLFlowLogger` (see the sketch below).
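A minimal sketch using only the two arguments named above (the run id value is illustrative):

```python
from poutyne import MLFlowLogger

# Attach the logger to an existing MLflow run instead of creating a new one,
# and leave the run open when training ends.
mlflow_logger = MLFlowLogger(run_id="0123456789abcdef", terminate_on_end=False)
# Then pass it like any other callback, e.g. model.fit(..., callbacks=[mlflow_logger]).
```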
Breaking change:
- In `MLFlowLogger`, except for `experiment_name`, all arguments must now be passed as keyword arguments. Passing `experiment_name` as a positional argument is also deprecated and will be removed in future versions.
- Remove support for Python 3.7
- Update examples using classification metrics from torchmetrics to add the now required `task` argument.
- Fix a bug when no LR scheduler is used with PyTorch 2.0.
Breaking changes:
- The deprecated `torch_metrics` keyword argument has been removed. Users should use the `batch_metrics` or `epoch_metrics` keyword argument for torchmetrics' metrics.
- The deprecated `EpochMetric` class has been removed. Users should implement the `Metric` class instead.
- Fix memory leak when using a recursive structure as data in the `Model.fit()` or `ModelBundle.train_data()` methods.
- Fix a bug when transferring the optimizer to another device, caused by a new feature in PyTorch 1.12, i.e. the "capturable" parameter in Adam and AdamW.
- Add utility functions for saving (`save_random_states`) and loading (`load_random_states`) Python's, NumPy's and PyTorch's (both CPU and GPU) random states. Furthermore, we also add the `RandomStatesCheckpoint` callback. This callback is now used in `ModelBundle`. (See the sketch below.)
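A minimal sketch of these utilities; the exact signature (a single file path) is an assumption:

```python
from poutyne import save_random_states, load_random_states

# Capture Python's, NumPy's and PyTorch's RNG states for reproducibility
# (the file path and single-argument signature are assumptions).
save_random_states('random_states.pkl')

# ... later, restore them before resuming training.
load_random_states('random_states.pkl')
```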
- Remove support for Python 3.6, as PyTorch did.
- Add Dockerfile
- Major bug fix: the state of the loss function was not reset after each epoch/evaluate call, so the values returned were averages over the whole lifecycle of the Model class.
- Add a WandB logger.
- Epoch and batch metrics are now unified. Their only difference is whether the metric for the batch is computed. The main interface is now the `Metric` class. It is compatible with TorchMetrics. Thus, TorchMetrics metrics can now be passed as either batch or epoch metrics (see the sketch below). Metrics with the interface `metric(y_pred, y_true)` are internally wrapped into a `Metric` object and are still fully supported. The `torch_metrics` keyword argument and the `EpochMetric` class are now deprecated and will be removed in future versions.
- `Model.get_batch_size` is replaced by `poutyne.get_batch_size()`.
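A minimal sketch of passing a TorchMetrics metric as an epoch metric; the network and the torchmetrics arguments are illustrative and version-dependent:

```python
import torch.nn as nn
import torchmetrics
from poutyne import Model

network = nn.Linear(32, 10)  # illustrative network

model = Model(
    network,
    'sgd',
    'cross_entropy',
    batch_metrics=['accuracy'],
    # A TorchMetrics metric used directly as an epoch metric.
    epoch_metrics=[torchmetrics.F1Score(task='multiclass', num_classes=10)],
)
```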
- Add support for TorchMetrics metrics.
- `Experiment` is now an alias for `ModelBundle`, a class quite similar to `Experiment` except that it allows instantiating an "Experiment" from a Poutyne Model or a network.
- Add support for PackedSequence.
- Add flag to `TensorBoardLogger` to allow putting training and validation metrics in different graphs. This allows a behavior closer to Keras.
- Add support for fscore on binary classification.
- Add `convert_to_numpy` flag to be able to obtain tensors instead of NumPy arrays in `evaluate*` and `predict*`.
Breaking changes:
- When using the epoch metrics `'f1'`, `'precision'`, `'recall'` and associated classes, the default average has been changed to `'macro'` instead of `'micro'`. This changes the names of the metrics displayed and those in the log dictionary in callbacks. This change also applies to `Experiment` when using `task='classif'`.
- Exceptions when loading checkpoints in `Experiment` are now propagated instead of being silenced.
- Add `plot_history` and `plot_metric` functions to easily plot the history returned by Poutyne (see the sketch below). `Experiment` also saves the figures at the end of the training.
- All text files (e.g. CSVs in `CSVLogger`) are now saved using UTF-8 on all platforms.
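A minimal sketch of `plot_history`; the `history` here has the same structure as the list of epoch log dicts returned by the `Model.fit*` methods (the values are made up):

```python
from poutyne import plot_history

history = [
    {'epoch': 1, 'loss': 1.20, 'acc': 55.0, 'val_loss': 1.10, 'val_acc': 58.0},
    {'epoch': 2, 'loss': 0.90, 'acc': 68.0, 'val_loss': 0.95, 'val_acc': 65.0},
]
plot_history(history)  # one figure per metric; display/save options may vary
```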
- `PeriodicSaveCallback` and all its subclasses now have the `restore_best` argument.
- `Experiment` now contains a `monitoring` argument that can be set to `False` to avoid monitoring any metric and saving unneeded checkpoints.
- The format of the ETA time and total time now contains days, hours and minutes when appropriate.
- Add `predict` methods to `Callback` to allow callbacks to be called during the prediction phase.
- Add `infer` methods to `Experiment` to more easily make inferences (predictions) with an experiment.
- Add a progress bar callback during predictions of a model.
- Add a method to compare the results of two experiments.
- Add `return_ground_truth` and `has_ground_truth` arguments to `predict_dataset` and `predict_generator`.
- Add `LambdaCallback` to more easily define a callback from lambdas or functions (see the sketch below).
- In Jupyter Notebooks, when coloring is enabled, the print rate of progress output is limited to one output every 0.1 seconds. This solves the slowness problem (and the memory problem on Firefox) when there is a great number of steps per epoch.
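A minimal sketch of `LambdaCallback`; the keyword names are assumed to mirror the standard `Callback` methods:

```python
from poutyne import LambdaCallback

# Print the logs at the end of each epoch (the exact arguments passed to the
# lambda are an assumption based on the Callback.on_epoch_end interface).
print_epoch = LambdaCallback(
    on_epoch_end=lambda epoch_number, logs: print(f"Epoch {epoch_number}: {logs}")
)
# Then pass it like any other callback, e.g. model.fit(..., callbacks=[print_epoch]).
```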
- Add `return_dict_format` argument to `train_on_batch` and `evaluate_on_batch`, and allow returning predictions and ground truths in `evaluate_*` even when `return_dict_format=True`. Furthermore, `Experiment.test*` now supports `return_pred=True` and `return_ground_truth=True`.
- Split Tips and Tricks example into two examples: Tips and Tricks and Sequence Tagging With an RNN.
- Add examples for image reconstruction and semantic segmentation with Poutyne.
- Add the following flags in `ProgressionCallback`: `show_every_n_train_steps`, `show_every_n_valid_steps`, `show_every_n_test_steps`. They allow showing only certain steps instead of all steps.
- Fix bug where all warnings were silenced.
- Add `strict` flag when loading checkpoints. In `Model`, a NamedTuple is returned as in PyTorch's `load_state_dict`. In `Experiment`, a warning is raised when there are missing or unexpected keys in the checkpoint.
- In `CSVLogger`, when multiple learning rates are used, we use the column names `lr_group_0`, `lr_group_1`, etc. instead of `lr`.
- Fix bug where `EarlyStopping` would be one epoch late and would disregard the monitored metric at the last epoch anyway.
- Bug fix: changing the GPU device twice with an optimizer having a state would crash.
- A progress bar is now shown when validating a model (similar to training). It can be disabled by passing `progress_options=dict(show_on_valid=False)` in the `fit*` methods.
- A progress bar is now shown when testing a model (similar to training). It can be disabled by passing `verbose=False` in the `evaluate*` methods.
- A new notification callback, `NotificationCallback`, allowing to receive messages at specific times (start/end of training/testing and at any given epoch).
- A new logging callback, `MLFlowLogger`. This callback allows you to log experiment configuration and metrics during training, validation and testing.
- Fix bug where `evaluate_generator` did not support generators with StopIteration exception.
- `Experiment` now has a `train_data` and a `test_data` method.
- The Lambda layer now supports multiple arguments in its forward method (see the sketch below).
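A minimal sketch of the Lambda layer with multiple forward arguments (the wrapped function is illustrative):

```python
import torch
from poutyne import Lambda

# Lambda wraps a plain function as a module; its forward now accepts
# multiple arguments that are forwarded to the function.
add = Lambda(lambda a, b: a + b)
out = add(torch.ones(3), torch.ones(3))  # tensor([2., 2., 2.])
```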
- A `device` argument is added to `Model`.
- The `optimizer` argument of `Model` can now be a dictionary. This allows passing different arguments to the optimizer, e.g. `optimizer=dict(optim='sgd', lr=0.1)`. (See the sketch below.)
- The progress bar now uses 20 characters instead of 25.
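A minimal sketch combining the new `device` argument and the optimizer dictionary (network, device string and hyperparameters are illustrative):

```python
import torch.nn as nn
from poutyne import Model

network = nn.Linear(32, 10)  # illustrative network

model = Model(
    network,
    dict(optim='sgd', lr=0.1, momentum=0.9),  # extra keys are passed to the optimizer
    'cross_entropy',
    device='cuda:0',  # illustrative device
)
```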
- The progress bar is now more fluid since partial blocks are used allowing increments of 1/8th of a block at once.
- The function `torch_to_numpy` now does `.detach()` before `.cpu()`. This might slightly improve performance in some cases.
- In `Experiment`, the `load_checkpoint` method can now load arbitrary checkpoints by passing a filename instead of the usual argument.
- `Experiment` now has a `train_dataset` and a `test_dataset` method.
- `Experiment` is not considered a beta feature anymore.
Breaking changes:
- In `evaluate`, `dataloader_kwargs` is now a dictionary keyword argument instead of arbitrary keyword arguments. Other methods are already this way. This was an oversight of the last release.
- There is now a `TopKAccuracy` batch metric, and it is possible to use it as a string for `k` in 1 to 10 and 20, 30, …, 100, e.g. `'top5'`.
- Add `fit_dataset`, `evaluate_dataset` and `predict_dataset` methods which allow passing PyTorch Datasets and create DataLoaders internally (see the sketch below). Here is an example with MNIST.
- Colors now work correctly in Colab.
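A minimal sketch of `fit_dataset` on a toy dataset (data, network and hyperparameters are illustrative):

```python
import torch
import torch.nn as nn
from torch.utils.data import TensorDataset
from poutyne import Model

# Toy classification data and network, purely illustrative.
train_dataset = TensorDataset(torch.randn(100, 10), torch.randint(0, 2, (100,)))
valid_dataset = TensorDataset(torch.randn(20, 10), torch.randint(0, 2, (20,)))
network = nn.Linear(10, 2)

model = Model(network, 'sgd', 'cross_entropy', batch_metrics=['accuracy'])
# DataLoaders are created internally from the datasets.
model.fit_dataset(train_dataset, valid_dataset, epochs=5, batch_size=32)
```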
- The default colorscheme was changed so that it looks good in Colab, notebooks and command line. The previous one was not readable in Colab.
- Checkpointing callbacks no longer use the Python `tempfile` package for the temporary file. The use of this package caused problems when the temp filesystem was not on the same partition as the final destination of the checkpoint. The temporary file is now created at the same place as the final destination. Thus, in most use cases, this renders the `temporary_filename` argument unnecessary. The argument is still available for those who need it.
- In `Experiment`, it is not possible to call the `test` method when training without logging.
Update following bug in new PyTorch version: pytorch/pytorch#47007
- Output is now very nicely colored and has a progress bar. Both can be disabled with the `progress_options` argument. The `colorama` package needs to be installed to have the colors. See the documentation of the fit method for details.
- Multi-GPU support: uses `torch.nn.parallel.data_parallel` under the hood.
- Huge update to the documentation, with documentation of metrics and a lot of examples.
- No need to import `framework` anymore. Everything can now be imported from `poutyne` directly, i.e. `from poutyne import whatever_you_want`.
- `PeriodicSaveCallbacks` (such as `ModelCheckpoint`) now have a `keep_only_last_best` flag which allows keeping only the last best checkpoint even when the names differ between epochs.
- `FBeta` now supports an `ignore_index` as in `nn.CrossEntropyLoss`.
- Epoch metric strings `'precision'` and `'recall'` are now available directly without instantiating `FBeta`.
- Better ETA estimation in output by weighting more recent batches than older batches.
- Batch metrics `acc` and `bin_acc` now have class counterparts `Accuracy` and `BinaryAccuracy`, in addition to a `reduction` keyword argument as in PyTorch.
- Various bug fixes.
- Add new callback methods `on_test_*` to callbacks. Callbacks can now be passed to the `evaluate*` methods.
- New epoch metrics for scikit-learn functions (see documentation of `SKLearnMetrics` and the sketch below).
- It is now possible to return multiple metrics for a single batch metric function or epoch metric object. Furthermore, their names can be changed. (See note in documentation of the Model class.)
- Computation of batch size is now added for dictionary inputs and outputs. (See documentation of the new `get_batch_size` method.)
- Add a lot of type hinting.
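A minimal sketch of wrapping a scikit-learn function as an epoch metric; the single-function form of the `SKLearnMetrics` constructor is an assumption:

```python
from sklearn.metrics import roc_auc_score
from poutyne import SKLearnMetrics

# Wrap a scikit-learn metric so it is computed once per epoch on the
# accumulated predictions and ground truths.
auroc = SKLearnMetrics(roc_auc_score)
# e.g. Model(network, 'sgd', 'bce_with_logits', epoch_metrics=[auroc])
```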
Breaking changes:
- Ground truths and predictions returned by `evaluate_generator` and `predict_generator` are going to be concatenated in the next version, except when inside custom objects. A warning is issued in those methods. If the warning is disabled as instructed, the new behavior takes place. (See documentation of `evaluate_generator` and `predict_generator`.)
- Names of methods `on_batch_begin` and `on_batch_end` changed to `on_train_batch_begin` and `on_train_batch_end` respectively. When the old names are used, a warning is issued and backward compatibility is provided. This backward compatibility will be removed in the next version.
- `EpochMetric` classes now have an obligatory `reset` method (see the sketch below).
- Support of Python 3.5 is dropped. (Anyway, PyTorch was already not supporting it either.)
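A minimal sketch of an `EpochMetric` with the now-mandatory `reset` method; the `forward`/`get_metric` interface is assumed from the EpochMetric documentation:

```python
from poutyne import EpochMetric

class MeanPrediction(EpochMetric):
    """Toy epoch metric: average of all predicted values over the epoch."""

    def __init__(self):
        super().__init__()
        self.reset()

    def forward(self, y_pred, y_true):
        # Accumulate statistics over the epoch.
        self.total += y_pred.sum().item()
        self.count += y_pred.numel()

    def get_metric(self):
        return self.total / max(self.count, 1)

    def reset(self):
        # Now obligatory: clear the accumulated state between epochs.
        self.total = 0.0
        self.count = 0
```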
Essentially, what this means is that you can now include Poutyne in any proprietary software as long as you are willing to provide the source code and the modifications of Poutyne with your software. The LICENSE file contains more details.
This is not legal advice. You should consult your lawyer about the implication of the license for your own case.
- Fix a bug introduced in v0.7 when only one of epoch metrics and batch metrics was provided and we would try to concatenate a tuple and a list.
- Add automatic naming for class objects in `batch_metrics` and `epoch_metrics`.
- Add `get_saved_epochs` method to `Experiment`.
- The `optimizer` parameter can now be set to `None` in `Model` in the case where there is no need for it.
- Fixes warning from new PyTorch version.
- Various improvements to the code.
Breaking changes:
- Threshold of the `binary_accuracy` metric is now 0 instead of 0.5 so that it works on logits instead of probabilities.
- The attribute `model` of the `Model` class is now called `network` instead. A deprecation warning is in place until the next version.
- Poutyne now has a new logo!
- Add a beta `Experiment` class that encapsulates logging and checkpointing callbacks so that it is possible to stop and resume optimization at any time.
- Add epoch metrics allowing the computation of metrics over an epoch that are not decomposable, such as F1 score, precision and recall. While only these epoch metrics are currently available in Poutyne, epoch metrics could also allow computing the AUROC metric, the PCC metric, etc.
- Support for multiple batches per optimizer step. This allows having smaller batches that fit in memory instead of a big batch that does not fit, while retaining the advantage of the big batch.
- Add `return_ground_truth` argument to `evaluate_generator`.
- Data loading time is now taken into account for progress estimation.
- Various doc updates and example finetunings.
Breaking changes:
- The `metrics` argument in `Model` is now deprecated. This argument will be removed in the next version. Use `batch_metrics` instead.
- The `pytoune` package is now removed.
- If `steps_per_epoch` or `validation_steps` are greater than the generator length in `*_generator` methods, then the generator is cycled through instead of stopping as before.
- Update for PyTorch 1.1.
- Transfers metric modules to GPU when appropriate.
- Adding a new `OptimizerPolicy` class allowing Phase-based learning rate policies (see the sketch below). The two following learning policies are also provided:
  - "Super-Convergence: Very Fast Training of Neural Networks Using Large Learning Rates", Leslie N. Smith, Nicholay Topin, https://arxiv.org/abs/1708.07120
  - "SGDR: Stochastic Gradient Descent with Warm Restarts", Ilya Loshchilov, Frank Hutter, https://arxiv.org/abs/1608.0398
- Adding of the "bin_acc" metric for binary classification in addition to the "accuracy" metric.
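A minimal sketch of an `OptimizerPolicy` built from 1cycle phases; the `one_cycle_phases` helper and its arguments are assumptions based on the policies listed above:

```python
from poutyne import OptimizerPolicy, one_cycle_phases

# Illustrative schedule length: total number of optimizer steps.
steps_per_epoch, epochs = 100, 10
policy = OptimizerPolicy(one_cycle_phases(steps_per_epoch * epochs, lr=(0.01, 0.1)))
# Then pass it like any other callback, e.g. model.fit_generator(..., callbacks=[policy]).
```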
- Adding "time" in callbacks' logs.
- Various refactoring and small bug fixes.
Breaking changes:
- Update for PyTorch 0.4.1 (PyTorch 0.4 not supported)
- Keyword arguments must now be passed with their keyword names in most PyToune functions.
Non-breaking changes:
- self.optimizer.zero_grad() is called instead of self.model.zero_grad().
- Support strings as input for all PyTorch loss functions, metrics and optimizers.
- Add support for generators that raise the StopIteration exception.
- Refactor of the Model class (no API-breaking changes).
- Now using pylint as code style linter.
- Fix typos in documentation.
- New usage example using MNIST
- New *_on_batch methods to Model
- Every NumPy array is converted into a tensor and vice versa everywhere it applies, i.e. methods return NumPy arrays and can take NumPy arrays as input.
- New convenient simple layers (Flatten, Identity and Lambda layers)
- New callbacks to save optimizers and LRSchedulers.
- New Tensorboard callback.
- Various bug fixes and improvements.
Breaking changes:
- Update to PyTorch 0.4.0
- When one or zero metric is used, `evaluate` and `evaluate_generator` do not return NumPy arrays anymore.
Other changes:
- Model now offers a to() method to send the PyTorch module and its input to a specified device. (thanks PyTorch 0.4.0)
- There is now an 'accuracy' metric that can be used as a string in the metrics list.
- Various bug fixes.
Last release before an upgrade with breaking changes due to the update of PyTorch 0.4.0.
- Add an on_backward_end callback function
- Add a ClipNorm callback
- Fix various bugs.
- Fix warning bugs and bad logic in checkpoints.
- Fix bug where we did not display a metric when its value was equal to zero.
- ModelCheckpoint now writes the checkpoint atomically.
- New initial_epoch parameter to Model.
- The mean of losses and metrics is now weighted by the batch size (len(y)) instead of being just the mean of the losses and metrics.
- Update to the documentation.
- Model's predict and evaluate now make more sense and have generator versions.
- Few other bug fixes.
Doc update
Initial version