- Class `RegressorUpdater`. See examples.
- `get_best_model` for `Lazy*` classes (see updated docs)
- Bring `LazyMTS` back
- Add Exponential Smoothing, ARIMA and Theta models to `ClassicalMTS` and `Lazy*MTS`
- Add `RandomForest` and `XGBoost` to `Lazy*Classifier` and `Lazy*Regressor` as baselines
- Add `MedianVotingRegressor`: uses the median of predictions from an ensemble of regressors
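The median-of-predictions idea can be sketched as follows. This is a minimal illustration with scikit-learn regressors, not nnetsauce's actual `MedianVotingRegressor` implementation; the class name `MedianVote` and its setup are mine:

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.tree import DecisionTreeRegressor

class MedianVote:
    """Toy ensemble: predict the elementwise median of base regressors."""
    def __init__(self, estimators):
        self.estimators = estimators
    def fit(self, X, y):
        for est in self.estimators:
            est.fit(X, y)
        return self
    def predict(self, X):
        # stack predictions column-wise, then take the median per row
        preds = np.column_stack([est.predict(X) for est in self.estimators])
        return np.median(preds, axis=1)

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))
y = X @ np.array([1.0, -2.0, 0.5]) + rng.normal(scale=0.1, size=100)
ens = MedianVote([LinearRegression(), DecisionTreeRegressor(random_state=0)]).fit(X, y)
print(ens.predict(X[:5]).shape)  # (5,)
```

Compared to mean averaging, the median is less sensitive to a single wildly wrong base regressor.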
- Update `LazyDeepMTS`: no more `LazyMTS` class; use `LazyDeepMTS` with `n_layers=1` instead
- Specify the forecasting horizon in `LazyDeepMTS` (see updated docs and examples/lazy_mts_horizon.py)
- New class `ClassicalMTS` for classical models (for now VAR and VECM, adapted from statsmodels) in multivariate time series forecasting
- `partial_fit` for `CustomClassifier` and `CustomRegressor`
- Copula simulation for time series residuals in classes `MTS` and `DeepMTS`, based on copulas of in-sample residuals:
  - `vine-tll` (default), `vine-bb1`, `vine-bb6`, `vine-bb7`, `vine-bb8`, `vine-clayton`, `vine-frank`, `vine-gaussian`, `vine-gumbel`, `vine-indep`, `vine-joe`, `vine-student`
  - `scp-vine-tll` (default), `scp-vine-bb1`, `scp-vine-bb6`, `scp-vine-bb7`, `scp-vine-bb8`, `scp-vine-clayton`, `scp-vine-frank`, `scp-vine-gaussian`, `scp-vine-gumbel`, `scp-vine-indep`, `scp-vine-joe`, `scp-vine-student`
  - `scp2-vine-tll`, `scp2-vine-bb1`, `scp2-vine-bb6`, `scp2-vine-bb7`, `scp2-vine-bb8`, `scp2-vine-clayton`, `scp2-vine-frank`, `scp2-vine-gaussian`, `scp2-vine-gumbel`, `scp2-vine-indep`, `scp2-vine-joe`, `scp2-vine-student`
- `cross_val_score`: time series cross-validation for `MTS` and `DeepMTS`
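Time series cross-validation differs from ordinary k-fold in that folds must respect temporal order (train on the past, test on the future). nnetsauce's `cross_val_score` API may differ; the sketch below only illustrates the idea with scikit-learn's `TimeSeriesSplit` on a lag-based regression:

```python
import numpy as np
from sklearn.model_selection import TimeSeriesSplit
from sklearn.linear_model import Ridge
from sklearn.metrics import mean_squared_error

# Toy series turned into a lagged regression problem
rng = np.random.default_rng(42)
series = np.cumsum(rng.normal(size=120))
lags = 3
# Each row of X holds the `lags` previous values; y is the next value
X = np.column_stack([series[i:len(series) - lags + i] for i in range(lags)])
y = series[lags:]

scores = []
for train_idx, test_idx in TimeSeriesSplit(n_splits=5).split(X):
    # train indices always precede test indices: no look-ahead leakage
    model = Ridge().fit(X[train_idx], y[train_idx])
    scores.append(mean_squared_error(y[test_idx], model.predict(X[test_idx])))
print(len(scores))  # 5
```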
Technical:

- Do not scale sparse matrices before training
- Add `MaxAbsScaler`
- Implement new types of predictive simulation intervals (`type_pi`s): independent bootstrap, block bootstrap, and 2 variants of split conformal prediction in class `MTS` (see updated docs)
- Gaussian prediction intervals with `type_pi == "gaussian"` in class `MTS`
- Implement the Winkler score in `LazyMTS` and `LazyDeepMTS` for probabilistic forecasts
- Use conformalized `Estimator`s in `MTS` (see examples/mts_conformal_not_sims.py)
- Include `block_size` for block bootstrapping methods in `*MTS` classes
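For reference, the Winkler (interval) score rewards narrow prediction intervals and penalizes observations that fall outside them by 2/alpha times the exceedance. The helper below is a generic sketch of the standard formula, not nnetsauce's implementation:

```python
import numpy as np

def winkler_score(y, lower, upper, alpha=0.05):
    """Winkler score averaged over observations: interval width plus a
    2/alpha penalty proportional to how far y falls outside [lower, upper]."""
    y, lower, upper = map(np.asarray, (y, lower, upper))
    width = upper - lower
    penalty = (2.0 / alpha) * (np.maximum(lower - y, 0.0) + np.maximum(y - upper, 0.0))
    return float(np.mean(width + penalty))

print(winkler_score([1.0], [0.0], [2.0]))        # inside the interval: 2.0
print(winkler_score([3.0], [0.0], [2.0], 0.1))   # 1 unit outside: 2 + 20*1 = 22.0
```

Lower is better; a forecaster cannot game the score by issuing arbitrarily wide intervals.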
Technical:
- Import `all_estimators` from `sklearn.utils`
- Use both `sparse` and `sparse_output` in `OneHotEncoder` (for compatibility with older versions of scikit-learn)
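scikit-learn renamed `OneHotEncoder`'s `sparse` keyword to `sparse_output` in v1.2 and later removed the old name, so supporting both requires a small shim. One common pattern (the helper name is mine, not nnetsauce's):

```python
from sklearn.preprocessing import OneHotEncoder

def make_onehot_encoder(**kwargs):
    """Dense-output OneHotEncoder across scikit-learn versions:
    try the new `sparse_output` keyword first, fall back to `sparse`."""
    try:
        return OneHotEncoder(sparse_output=False, **kwargs)
    except TypeError:  # scikit-learn < 1.2 does not know `sparse_output`
        return OneHotEncoder(sparse=False, **kwargs)

enc = make_onehot_encoder(handle_unknown="ignore")
print(enc.fit_transform([["a"], ["b"], ["a"]]).shape)  # (3, 2)
```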
- Bayesian `CustomRegressor`
- Conformalized `CustomRegressor` (`splitconformal` and `localconformal` for now); see this example, this example, and this notebook
- `self.n_classes_ = len(np.unique(y))` (for compatibility with sklearn)
- Preprocessing for all `LazyDeep*`
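The `splitconformal` variant follows the standard split conformal recipe: fit on one half of the training data, then use the absolute residuals on the held-out calibration half to set a finite-sample-valid interval half-width. A generic sketch (not nnetsauce's `CustomRegressor` API; function name and setup are mine):

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LinearRegression

def split_conformal_interval(model, X_train, y_train, X_new, alpha=0.1, seed=0):
    """Return (lower, point, upper) predictions at roughly 1 - alpha coverage."""
    X_fit, X_cal, y_fit, y_cal = train_test_split(
        X_train, y_train, test_size=0.5, random_state=seed)
    model.fit(X_fit, y_fit)
    resid = np.abs(y_cal - model.predict(X_cal))
    n = len(resid)
    # finite-sample-corrected quantile of calibration residuals
    q = np.quantile(resid, min(1.0, np.ceil((n + 1) * (1 - alpha)) / n))
    pred = model.predict(X_new)
    return pred - q, pred, pred + q

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 2))
y = X[:, 0] + rng.normal(scale=0.2, size=200)
lo, mid, hi = split_conformal_interval(LinearRegression(), X, y, X[:5])
print(np.all(lo <= hi))  # True
```

The coverage guarantee only requires exchangeability of the calibration and test points, not a correctly specified model.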
- Attribute `estimators` (a list of `Estimator`s as strings) for `LazyClassifier`, `LazyRegressor`, `LazyDeepClassifier`, `LazyDeepRegressor`, `LazyMTS`, and `LazyDeepMTS`
- New documentation for the package, using `pdoc` (not `pdoc3`)
- Remove external regressors `xreg` at inference time for `MTS` and `DeepMTS`
- New class `Downloader`: queries the R universe API for datasets (see https://thierrymoudiki.github.io/blog/2023/12/25/python/r/misc/mlsauce/runiverse-api2 for a similar example in `mlsauce`)
- Add custom metrics to `Lazy*`
- Rename deep regressors and classifiers to `Deep*` in `Lazy*`
- Add attribute `sort_by` to `Lazy*` -- sorts the data frame output by a given metric
- Add attribute `classes_` to classifiers (for consistency with sklearn)
- Subsample response by using the number of rows, not only a percentage (see https://thierrymoudiki.github.io/blog/2024/01/22/python/nnetsauce-subsampling)
- Improve consistency with scikit-learn v1.2 for `OneHotEncoder`
- add robust scaler
- relatively faster scaling in preprocessing
- Regression-based classifiers (see https://www.researchgate.net/publication/377227280_Regression-based_machine_learning_classifiers)
- `DeepMTS` (multivariate time series forecasting with deep quasi-random layers): see https://thierrymoudiki.github.io/blog/2024/01/15/python/quasirandomizednn/forecasting/DeepMTS
- AutoML for `MTS` (multivariate time series forecasting): see https://thierrymoudiki.github.io/blog/2023/10/29/python/quasirandomizednn/MTS-LazyPredict
- AutoML for `DeepMTS` (multivariate time series forecasting): see https://github.com/Techtonique/nnetsauce/blob/master/nnetsauce/demo/thierrymoudiki_20240106_LazyDeepMTS.ipynb
- Spaghetti plots for `MTS` and `DeepMTS` (multivariate time series forecasting): see https://thierrymoudiki.github.io/blog/2024/01/15/python/quasirandomizednn/forecasting/DeepMTS
- Subsample continuous and discrete responses
- Actually implement deep `Estimator`s in `/deep` (in addition to `/lazypredict`)
- Include new multi-output regression-based classifiers (see https://thierrymoudiki.github.io/blog/2021/09/26/python/quasirandomizednn/classification-using-regression for more details)
- Use proper names for `Estimator`s in `/lazypredict` and `/deep`
- Expose `SubSampler` (stratified subsampling) to the external API
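The regression-based classification idea mentioned above can be sketched in a few lines: one-hot encode the labels, fit a single multi-output regressor on the indicator matrix, and classify with the argmax of the predicted scores. This is only an illustration of the principle with scikit-learn's `Ridge`, not nnetsauce's implementation:

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.datasets import load_iris

X, y = load_iris(return_X_y=True)
Y = np.eye(3)[y]                 # one-hot indicator targets, shape (n, 3)
reg = Ridge().fit(X, Y)          # Ridge handles multi-output natively
y_hat = np.argmax(reg.predict(X), axis=1)  # predicted class = largest score
print(round((y_hat == y).mean(), 2))       # in-sample accuracy, well above chance
```

Any multi-output regressor can be plugged in, which is what makes the construction useful as a generic classifier wrapper.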
- lazy predict for classification and regression (see https://thierrymoudiki.github.io/blog/2023/10/22/python/quasirandomizednn/nnetsauce-lazy-predict-preview)
- lazy predict for multivariate time series (see https://thierrymoudiki.github.io/blog/2023/10/29/python/quasirandomizednn/MTS-LazyPredict)
- lazy predict for deep classifiers and regressors (see this example for classification and this example for regression)
- update and align as much as possible with R version
- colored graphics for class MTS
- Fix error in nodes' simulation (base.py)
- Use residuals and KDE for predictive simulations
- `plot` method for `MTS` objects
- Begin residuals simulation
- Avoid division by zero in scaling
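The usual guard for this scaling bug is to replace a zero standard deviation with 1 before dividing, so constant columns map to zeros instead of NaN/inf. A minimal sketch (names are mine, not nnetsauce's internals):

```python
import numpy as np

def safe_standardize(X):
    """Column-wise standardization that guards against zero variance."""
    X = np.asarray(X, dtype=float)
    mean = X.mean(axis=0)
    std = X.std(axis=0)
    std = np.where(std == 0.0, 1.0, std)  # constant columns: divide by 1
    return (X - mean) / std

X = np.array([[1.0, 5.0], [2.0, 5.0], [3.0, 5.0]])  # second column is constant
out = safe_standardize(X)
print(np.isfinite(out).all())  # True; constant column becomes all zeros
```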
- Fewer dependencies in setup
- Implement RandomBagRegressor
- Use of a DataFrame in MTS
- rename attributes with underscore
- add more examples to documentation
- Fix numbers' simulations
- Remove memoize from Simulator
- Loosen the range of Python package versions
- Add Poisson and Laplace regressions to GLMRegressor
- Remove smoothing weights from MTS
- Use C++ for simulation
- Fix R Engine problem
- RandomBag classifier cythonized
- Documentation with MkDocs
- Cython-ready
- Contains refactored code for the `Base` class and for many other utilities
- Makes use of randtoolbox for faster, more scalable generation of quasi-random numbers
- Contains a (work in progress) implementation of most algorithms on GPUs, using JAX. Most of nnetsauce's GPU-related changes currently target potentially time-consuming operations such as matrix multiplications and matrix inversions
- (Work in progress) documentation in `/docs`
- `MultitaskClassifier`
- Rename `Mtask` to `Multitask`
- Rename `Ridge2ClassifierMtask` to `Ridge2MultitaskClassifier`
- Use "return_std" only in predict for MTS object
- Fix for potential error "Sample weights must be 1D array or scalar"
- One-hot encoding not cached (caused errors in the multitask ridge2 classifier)
- Rename ridge to ridge2 (two shrinkage parameters, versus one for ridge)
- Implement ridge2 (regressor and classifier)
- Upper bound on Adaboost error
- Test Time series split
- Add AdaBoost classifier
- Add RandomBag classifier (bagging)
- Add multinomial logit Ridge classifier
- Remove dependency on package `sobol_seq` (not used)
- Initial version