- add markdown to doc output (#4044)
- Support PyTorch 2.4 (#4040), thanks to @tonyhoo
- Update Fastcore max version
- Support PyTorch 2.4 (#4040), thanks to @tonyhoo
- Support for loss function pickling (#4034), thanks to @kevin-vitro
- Support PyTorch 2.3 (#4026), thanks to @warner-benjamin
- Add `log` and `show_epochs` to `plot_loss` (#3964), thanks to @turbotimon
- PyTorch 2.2 support, thanks to @warner-benjamin
- PyTorch 2.1 compatibility (#3970), thanks to @warner-benjamin
- Add `MutableMapping` to `torch_core.apply` to Support Moving Transformers Dicts (#3969), thanks to @warner-benjamin
- Added Jaccard coefficient metric for multiclass target in segmentation (#3951), thanks to @Hazem-Ahmed-Abdelraouf
- Support TorchVision's Multi-Weight API (#3944), thanks to @warner-benjamin
- Fix the Deploy to GitHub Pages Action (#3942), thanks to @warner-benjamin
- Fix Pandas Categorical FutureWarning (#3973), thanks to @warner-benjamin
- Fix torch.jit.script on TimmBody (#3948), thanks to @johan12345
- Resolve CutMix Deprecation Warning (#3937), thanks to @warner-benjamin
- Fixed format string (#3934), thanks to @bkowshik
- Fix casting types for mps (#3912), thanks to @MSciesiek
- Fix AccumMetric name.setter (#3621), thanks to @warner-benjamin
- PyTorch 2.0 compatibility (#3890), thanks to @warner-benjamin
- Pytorch 2.0 compiler compatibility (#3899), thanks to @ggosline
- Better version support for `TensorBase.new_empty` (#3887), thanks to @warner-benjamin
- TensorBase deepcopy Compatibility (#3882), thanks to @warner-benjamin
- Fix `Learn.predict` Errors Out if Passed a PILImage (#3884), thanks to @nglillywhite
- Set DataLoaders device if not None and to exists (#3873), thanks to @warner-benjamin
- Fix `default_device` to correctly detect + use mps (Apple Silicon) (#3858), thanks to @wolever
- ChannelsLast Callback Improvements, Additional Documentation, & Bug Fix (#3876), thanks to @warner-benjamin
- Add support for a batch transforms `to` method (#3875), thanks to @warner-benjamin
- Allow Pillow Image to be passed to PILBase.create (#3872), thanks to @warner-benjamin
- Compat with latest numpy (#3871), thanks to @warner-benjamin
- Move training-only step to separate function in `Learner` (#3857), thanks to @kunaltyagi
- TabularPandas data transform reproducibility (#2826)
- Set DataLoaders device if not None and to exists (#3873), thanks to @warner-benjamin
- Fix `default_device` to correctly detect + use mps (Apple Silicon) (#3858), thanks to @wolever
- Fix load hanging in distributed processes (#3839), thanks to @muellerzr
- `default_device` logic is repeated twice, related to `mps` / OSX support (#3785)
- revert auto-enable of mac mps due to pytorch limitations (#3769)
- Fix Classification Interpretation (#3563), thanks to @warner-benjamin
- vision tutorial failed at `learner.fine_tune(1)` (#3283)
- Add torch save and load kwargs (#3831), thanks to @JonathanGrant
  - This lets us do nice things like set `pickle_module` to cloudpickle
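  A rough sketch of that use case, assuming an existing `learn` object and that the new kwargs are simply forwarded to `torch.save`/`torch.load`:

  ```python
  import cloudpickle
  from fastai.learner import load_learner

  # Sketch: export a Learner whose loss function only pickles via cloudpickle,
  # passing pickle_module through to torch.save (kwarg forwarding assumed)
  learn.export('learn.pkl', pickle_module=cloudpickle)

  # Reload it, forwarding the same module to torch.load (assumed)
  learn = load_learner('learn.pkl', pickle_module=cloudpickle)
  ```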
- PyTorch 1.13 Compatibility (#3828), thanks to @warner-benjamin
- Recursive copying of attribute dictionaries for TensorImage subclass (#3822), thanks to @restlessronin
- `OptimWrapper` sets same param groups as `Optimizer` (#3821), thanks to @warner-benjamin
  - This PR harmonizes the default parameter group setting between `OptimWrapper` and `Optimizer` by modifying `OptimWrapper` to match `Optimizer`'s logic.
- Support normalization of 1-channel images in unet (#3820), thanks to @marib00
- Add `img_cls` param to `ImageDataLoaders` (#3808), thanks to @tcapelle
  - This is particularly useful for passing `PILImageBW` for MNIST.
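  A minimal sketch of the MNIST use case above (the dataset path and factory method are illustrative assumptions):

  ```python
  from fastai.vision.all import *

  # Sketch: open items as single-channel PILImageBW via the new img_cls param
  path = untar_data(URLs.MNIST_SAMPLE)
  dls = ImageDataLoaders.from_folder(path, img_cls=PILImageBW)
  xb, yb = dls.one_batch()  # xb should now be a 1-channel batch
  ```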
- Add support for `kwargs` to `tensor()` when arg is an `ndarray` (#3797), thanks to @SaadAhmedGit
- Add latest TorchVision models on fastai (#3791), thanks to @datumbox
- Option to preserve filenames in `download_images` (#2983), thanks to @mess-lelouch
- `get_text_classifier` fails with custom `AWD_LSTM` (#3817)
- revert auto-enable of mac mps due to pytorch limitations (#3769)
- Workaround for performance bug in PyTorch with subclassed tensors (#3683), thanks to @warner-benjamin
- add split value argument to ColSplitter (#3737), thanks to @DanteOz
- deterministic repr for PIL images (#3762)
- option to skip default callbacks in `Learner` (#3739)
- update for nbdev2 (#3747)
- IntToFloatTensor failing on Mac mps due to missing op (#3761)
- fix for pretrained in vision.learner (#3746), thanks to @peterdudfield
- fix same file error message when resizing image (#3743), thanks to @cvergnes
- Initial Mac GPU (mps) support (#3719)
- auto-normalize timm models (#3716)
- PyTorch 1.12 support
- Add `DataBlock.weighted_dataloaders` (#3706)
- `PIL.Resampling` only added in v9.1 (#3699)
- Update fastcore minimum version
- Distributed training now uses Hugging Face Accelerate, rather than fastai's launcher. Distributed training is now supported in a notebook -- see this tutorial for details
- `resize_images` creates folder structure at `dest` when `recurse=True` (#3692)
- Integrate nested callable and getcallable (#3691), thanks to @muellerzr
- workaround pytorch subclass performance bug (#3682)
- Torch 1.12.0 compatibility (#3659), thanks to @josiahls
- Integrate Accelerate into fastai (#3646), thanks to @muellerzr
- New Callback event, before and after backward (#3644), thanks to @muellerzr
- Let optimizer use built torch opt (#3642), thanks to @muellerzr
- Support PyTorch Dataloaders with `DistributedDL` (#3637), thanks to @tmabraham
- Add `channels_last` cb (#3634), thanks to @tcapelle
- support all timm kwargs (#3631)
- send `self.loss_func` to device if it is an instance of `nn.Module` (#3395), thanks to @arampacha
- adds tracking and logging best metrics to wandb cb (#3372), thanks to @arampacha
- Solve hanging `load_model` and let LRFind be run in a distributed setup (#3689), thanks to @muellerzr
- pytorch subclass functions fail if no positional args (#3687)
- Workaround for performance bug in PyTorch with subclassed tensors (#3683), thanks to @warner-benjamin
- Fix `Tokenizer.get_lengths` (#3667), thanks to @karotchykau
- `load_learner` with `cpu=False` doesn't respect the current cuda device if model exported on another; fixes #3656 (#3657), thanks to @ohmeow
- [Bugfix] Fix smoothloss on distributed (#3643), thanks to @muellerzr
- WandbCallback Error: "Tensors must be CUDA and dense" on distributed training (#3291)
- vision tutorial failed at `learner.fine_tune(1)` (#3283)
- Fix `Learner` pickling problem introduced in v2.6.2
- Race condition: `'Tensor' object has no attribute 'append'` (#3385)
- add support for Ross Wightman's Pytorch Image Models (timm) library (#3624)
- rename `cnn_learner` to `vision_learner` since we now support models other than CNNs too (#3625)
- Fix AccumMetric name.setter (#3621), thanks to @warner-benjamin
- Fix Classification Interpretation (#3563), thanks to @warner-benjamin
- support pytorch 1.11 (#3618)
- Add in exceptions and verbose errors (#3611), thanks to @muellerzr
- Update fastcore dep
- Support py3.10 annotations (#3601)
- Fix pin_memory=True breaking (batch) Transforms (#3606), thanks to @johan12345
- Add Python 3.9 to `setup.py` for PyPI (#3604), thanks to @nzw0301
- removes add_vert from get_grid calls (#3593), thanks to @kevinbird15
- Making `loss_not_reduced` work with DiceLoss (#3583), thanks to @hiromis
- Fix bug in URLs.path() in 04_data.external (#3582), thanks to @malligaraj
- Custom name for metrics (#3573), thanks to @bdsaglam
- Update import for show_install (#3568), thanks to @fr1ll
- Fix Classification Interpretation (#3563), thanks to @warner-benjamin
- Updates Interpretation class to be memory efficient (#3558), thanks to @warner-benjamin
- Learner.show_results uses passed dataloader via dl_idx or dl arguments (#3554), thanks to @warner-benjamin
- Fix learn.export pickle error with MixedPrecision Callback (#3544), thanks to @warner-benjamin
- Fix concurrent LRFinder instances overwriting each other by using tempfile (#3528), thanks to @warner-benjamin
- Fix _get_shapes to work with dictionaries (#3520), thanks to @ohmeow
- Fix torch version checks, remove clip_grad_norm check (#3518), thanks to @warner-benjamin
- Fix nested tensors predictions compatibility with fp16 (#3516), thanks to @tcapelle
- Learning rate passed via OptimWrapper not updated in Learner (#3337)
- Different results after running `lr_find()` at different times (#3295)
- lr_find() may fail if run in parallel from the same directory (#3240)
- add `at_end` feature to `SaveModelCallback` (#3296), thanks to @tmabraham
- fix fp16 test (#3284), thanks to @tmabraham
- Import `download_url` from fastdownload
- `config.yml` has been renamed to `config.ini`, and is now in `ConfigParser` format instead of YAML
- The `_path` suffixes in `config.ini` have been removed
- Training with `learn.to_fp16()` fails with PyTorch 1.9 / Cuda 11.4 (#3438)
- pandas 1.3.0 breaks `add_elapsed_times` (#3431)
- Latest Pillow v8.3.0 breaks conversion Image to Tensor (#3416)
- QRNN module removed, due to incompatibility with PyTorch 1.9, and lack of utilization of QRNN in the deep learning community. QRNN was our only module that wasn't pure Python, so with this change fastai is now a pure Python package.
- Support for PyTorch 1.9
- Improved LR Suggestions (#3377), thanks to @muellerzr
- SaveModelCallback every nth epoch (#3375), thanks to @KeremTurgutlu
- Send self.loss_func to device if it is an instance of nn.Module (#3395), thanks to @arampacha
- Batch support for more than one image (#3339)
- Changable tfmdlists for TransformBlock, Datasets, DataBlock (#3327)
- convert TensorBBox to TensorBase during compare (#3388), thanks to @kevinbird15
- Check if normalize exists on `_add_norm` (#3371), thanks to @renato145
- Add support for pytorch 1.8 (#3349)
- Add support for spacy3 (#3348)
- Add support for Windows. Big thanks to Microsoft for many contributions to get this working
- Timedistributed layer and Image Sequence Tutorial (#3124), thanks to @tcapelle
- Add interactive run logging to AzureMLCallback (#3341), thanks to @yijinlee
- Batch support for more than one image (#3339)
- Have interp use ds_idx, add tests (#3332), thanks to @muellerzr
- Automatically have fastai determine the right device, even with torch DataLoaders (#3330), thanks to @muellerzr
- Add `at_end` feature to `SaveModelCallback` (#3296), thanks to @tmabraham
- Improve inplace params in Tabular's new and allow for new and test_dl to be in place (#3292), thanks to @muellerzr
- Update VSCode & Codespaces dev container (#3280), thanks to @bamurtaugh
- Add max_scale param to RandomResizedCrop(GPU) (#3252), thanks to @kai-tub
- Increase testing granularity for speedup (#3242), thanks to @ddobrinskiy
- Make TTA turn shuffle and drop_last off when using ds_idx (#3347), thanks to @muellerzr
- Add order to TrackerCallback derived classes (#3346), thanks to @muellerzr
- Prevent schedule from crashing close to the end of training (#3335), thanks to @Lewington-pitsos
- Fix ability to use raw pytorch DataLoaders (#3328), thanks to @hamelsmu
- Fix PixelShuffle_icnr weight (#3322), thanks to @pratX
- Creation of new DataLoader in Learner.get_preds has wrong keyword (#3316), thanks to @tcapelle
- Correct layers order in tabular learner (#3314), thanks to @gradientsky
- Fix vmin parameter default (#3305), thanks to @tcapelle
- Ensure call to `one_batch` places data on the right device (#3298), thanks to @tcapelle
- Fix Cutmix Augmentation (#3259), thanks to @MrRobot2211
- Fix custom tokenizers for DataLoaders (#3256), thanks to @iskode
  - fix error setting `tok_tfm` parameter in TextDataloaders.from_folder
- Fix lighting augmentation (#3255), thanks to @kai-tub
- Fix CUDA variable serialization (#3253), thanks to @mszhanyi
- change batch tfms to have the correct dimensionality (#3251), thanks to @trdvangraft
- Ensure add_datepart adds elapsed as numeric column (#3230), thanks to @aberres
- fix optimwrapper to work with `param_groups` (#3241), thanks to @tmabraham
  - OptimWrapper now has a different constructor signature, which makes it easier to wrap PyTorch optimizers
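  For illustration, wrapping a plain PyTorch optimizer with the new-style constructor might look like this (`dls` and `model` are assumed to exist already):

  ```python
  from functools import partial
  import torch
  from fastai.vision.all import *

  # Sketch: hand Learner a stock PyTorch optimizer through OptimWrapper
  opt_func = partial(OptimWrapper, opt=torch.optim.AdamW)
  learn = Learner(dls, model, opt_func=opt_func)  # dls/model assumed
  ```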
- Support discriminative learning with OptimWrapper (#2829)
- Updated to support adding transforms to multiple dataloaders (#3268), thanks to @marii-moe
- This fixes an issue in 2.2.7 which resulted in incorrect validation metrics when using Normalization
- 2.2.5 was not released correctly - it was actually 2.2.3
- Enhancement: Let TextDataLoaders take in a custom `tok_text_col` (#3208), thanks to @muellerzr
- Changed dataloaders arguments to have consistent overrides (#3178), thanks to @marii-moe
- Better support for iterable datasets (#3173), thanks to @jcaw
- BrokenProcessPool in `download_images()` on Windows (#3196)
- error on predict() or using interp with resnet and MixUp (#3180)
- Fix 'cat' attribute with pandas dataframe: `AttributeError: Can only use .cat accessor with a 'category' dtype` (#3165), thanks to @dreamflasher
- `cont_cat_split` does not support pandas types (#3156)
- `DataBlock.dataloaders` does not support the advertised "shuffle" argument (#3133)
- Calculate correct `nf` in `create_head` based on `concat_pool` (#3115), thanks to @muellerzr
- wandb integration failing with latest wandb library (#3066)
- `Learner.load` and `LRFinder` not functioning properly for the optimizer states (#2892)
- tensorboard and wandb can not access `smooth_loss` (#3131)
- Promote `NativeMixedPrecision` to default `MixedPrecision` (and similar for `Learner.to_fp16`); old `MixedPrecision` is now called `NonNativeMixedPrecision` (#3127)
  - Use the new `GradientClip` callback instead of the `clip` parameter to use gradient clipping
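  A minimal sketch of the replacement pattern (`dls` and `model` are placeholders):

  ```python
  from fastai.vision.all import *

  # Sketch: gradient clipping now comes from the GradientClip callback,
  # combined here with the (now native) mixed-precision training
  learn = Learner(dls, model, cbs=GradientClip(0.1)).to_fp16()  # dls/model assumed
  ```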
- Adding a `Callback` which has the same name as an attribute no longer raises an exception (#3109)
- RNN training now requires `RNNCallback`, but does not require `RNNRegularizer`; `out` and `raw_out` have moved to `RNNRegularizer` (#3108)
  - Call `rnn_cbs` to get all callbacks needed for RNN training, optionally with regularization
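  Roughly, wiring up RNN training after this change could look like the sketch below; `dls`, the language `model`, and the regularization values are placeholders:

  ```python
  from fastai.text.all import *

  # Sketch: rnn_cbs bundles RNNCallback (plus RNNRegularizer when alpha/beta
  # are non-zero), so pass its result as the Learner callbacks
  learn = Learner(dls, model, loss_func=CrossEntropyLossFlat(),
                  cbs=rnn_cbs(alpha=2., beta=1.))  # dls/model assumed
  ```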
- replace callback `run_after` with `order`; do not run `after` cbs on exception (#3101)
- Add `GradientClip` callback (#3107)
- Make `Flatten` cast to `TensorBase` to simplify type compatibility (#3106)
- make flattened metrics compatible with all tensor subclasses (#3105)
- New class method `TensorBase.register_func` to register types for `__torch_function__` (#3097)
- new `dynamic` flag for controlling dynamic loss scaling in `NativeMixedPrecision` (#3096)
- remove need to call `to_native_fp32` before `predict`; set `skipped` in NativeMixedPrecision after NaN from dynamic loss scaling (#3095)
- make native fp16 extensible with callbacks (#3094)
- Calculate correct `nf` in `create_head` based on `concat_pool` (#3115), thanks to @muellerzr
- Small DICOM segmentation dataset (#3034), thanks to @moritzschwyzer
- `NoneType object has no attribute append` in fastbook chapter 6 BIWI example (#3091)
- Refactor MixUp and CutMix into MixHandler (#3037), thanks to @muellerzr
  - Refactors into a general MixHandler class, with MixUp and CutMix simply implementing a `before_batch` to perform the data augmentation. See `fastai.callback.mixup`.
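  Usage is unchanged by the refactor: both remain ordinary callbacks, as in this sketch (`dls` and `model` are placeholders):

  ```python
  from fastai.vision.all import *

  # Sketch: MixUp and CutMix are now thin MixHandler subclasses whose
  # before_batch does the mixing; you still just pass them as callbacks
  learn = Learner(dls, model, cbs=MixUp(0.4))    # dls/model assumed
  # learn = Learner(dls, model, cbs=CutMix(1.))  # CutMix is used the same way
  ```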
- Gradient Accumulation + Mixed Precision shows artificially high training loss (#3048)
- Update for fastcore `negate_func` -> `not_`
- LR too high for gradient accumulation (#3040), thanks to @marii-moe
- Torchscript transforms incompatibility with nn.Sequential (#2920)
- Pytorch 1.7 subclassing support (#2769)
- unsupported operand type(s) for +=: 'TensorCategory' and 'TensorText' when using AWD_LSTM for text classification (#3027)
- UserWarning when using SaveModelCallback() on after_epoch (#3025)
- Segmentation error: no implementation found for 'torch.nn.functional.cross_entropy' on types that implement torch_function (#3022)
- `TextDataLoaders.from_df()` returns `TypeError: 'float' object is not iterable` (#2978)
- Internal assert error in awd_qrnn (#2967)
- Option to preserve filenames in `download_images` (#2983), thanks to @mess-lelouch
- Deprecate `config` in `create_cnn` and instead pass kwargs directly (#2966), thanks to @borisdayma
- Progress and Recorder callbacks serialize their data, resulting in large Learner export file sizes (#2981)
- `TextDataLoaders.from_df()` returns `TypeError: 'float' object is not iterable` (#2978)
- "only one element tensors can be converted to Python scalars" exception in Siamese Tutorial (#2973)
- Learn.load and LRFinder not functioning properly for the optimizer states (#2892)
- remove `log_args` (#2954)
- Improve performance of `RandomSplitter` (h/t @muellerzr) (#2957)
- Exporting TabularLearner via learn.export() leads to huge file size (#2945)
- `TensorPoint` object has no attribute `img_size` (#2950)
- moved `has_children` from `nn.Module` to free function (#2931)
- Support persistent workers (#2768)
- `unet_learner` segmentation fails (#2939)
- In "Transfer learning in text" tutorial, the "dls.show_batch()" shows wrong outputs (#2910)
- `Learn.load` and `LRFinder` not functioning properly for the optimizer states (#2892)
- Documentation for `Show_Images` broken (#2876)
- URL link for documentation for `torch_core` library from the `doc()` method gives incorrect url (#2872)
- Work around broken PyTorch subclassing of some `new_*` methods (#2769)
- PyTorch 1.7 compatibility (#2917)
PyTorch 1.7 includes support for tensor subclassing, so we have replaced much of our custom subclassing code with PyTorch's. We have seen a few bugs in PyTorch's subclassing feature, however, so please file an issue if you see any code failing now which was working before.
There is one breaking change in this version of fastai, which is that custom metadata is now stored directly in tensors as standard python attributes, instead of in the special `_meta` attribute. Only advanced customization of fastai OO tensors would have used this functionality, so if you do not know what this all means, then it means you did not use it.
This version was released after 2.1.0, and adds fastcore 1.3 compatibility, whilst maintaining PyTorch 1.6 compatibility. It has no new features or bug fixes.
The next version of fastai will be 2.1. It will require PyTorch 1.7, which has significant foundational changes. It should not require any code changes except for people doing sophisticated tensor subclassing work, but nonetheless we recommend testing carefully. Therefore, we recommend pinning your fastai version to `<2.1` if you are not able to fully test your fastai code when the new version comes out.
- pin pytorch (`<1.7`) and torchvision (`<0.8`) requirements (#2915)
- Add version pin for fastcore
- Remove version pin for sentencepiece
- added support for tb projector word embeddings (#2853), thanks to @floleuerer
- Added ability to have variable length draw (#2845), thanks to @marii-moe
- add pip upgrade cell to all notebooks, to ensure colab has current fastai version (#2843)
- loss functions were moved to `loss.py` (#2843)
- new callback event: `after_create` (#2842)
  - This event runs after a `Learner` is constructed. It's useful for initial setup which isn't needed for every `fit`, but just once for each `Learner` (such as setting initial defaults).
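  As a sketch, a callback hooking the new event might look like this (the body is purely illustrative):

  ```python
  from fastai.callback.core import Callback

  class DefaultsSetter(Callback):
      "Illustrative callback: runs once, right after the Learner is constructed"
      def after_create(self):
          # one-time setup on the freshly built Learner, e.g. seeding a default
          self.learn.my_default = 0.1
  ```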
- Modified XResNet to support Conv1d / Conv3d (#2744), thanks to @floleuerer
  - Supports different input dimensions, kernel sizes and stride (added parameters ndim, ks, stride). Tested with fastai_audio and fastai time series with promising results.
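  For example, a 1D variant might be built roughly as below; the keyword names follow the description above, and the `c_in`/`n_out` values are placeholders:

  ```python
  from fastai.vision.all import *

  # Sketch: an XResNet over 1D inputs (e.g. audio or time series)
  # using the new ndim/ks parameters
  model_1d = xresnet18(ndim=1, ks=9, c_in=1, n_out=10)
  ```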
- Undo breaking num_workers fix (#2804)
  - Some users found the recent addition of `num_workers` to inference functions was causing problems, particularly on Windows. This PR reverts that change, until we find a more reliable way to handle `num_workers` for inference.
- learn.tta() fails on a learner imported with load_learner() (#2764)
- learn.summary() crashes out on 2nd transfer learning (#2735)
- Undo breaking `num_workers` fix (#2804)
- Fix `cont_cat_split` for multi-label classification (#2759)
- fastbook error: "index 3 is out of bounds for dimension 0 with size 3" (#2792)
- update for fastcore 1.0.5 (#2775)
- "Remove pandas min version requirement" (#2765)
- Modify XResNet to support Conv1d / Conv3d (#2744)
  - Also support different input dimensions, kernel sizes and stride (added parameters ndim, ks, stride).
- Add support for multidimensional arrays for RNNDropout (#2737)
- MCDropoutCallback to enable Monte Carlo Dropout in fastai (#2733)
  - A new callback to enable Monte Carlo Dropout in fastai in the `get_preds` method. Monte Carlo Dropout is simply enabling dropout during inference. Calling get_preds multiple times and stacking them yields a distribution of predictions that you can use to evaluate your prediction uncertainty.
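  A sketch of the intended usage (the number of samples and the `learn` object are placeholders):

  ```python
  import torch
  from fastai.callback.preds import MCDropoutCallback

  # Sketch: run get_preds repeatedly with dropout left enabled, then stack
  # the samples to estimate per-prediction uncertainty
  preds = torch.stack([learn.get_preds(cbs=[MCDropoutCallback()])[0]
                       for _ in range(10)])          # learn assumed
  mean, std = preds.mean(0), preds.std(0)
  ```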
- adjustable workers in `get_preds` (#2721)
- Initial release of v2