Releases: Karel-van-de-Plassche/QLKNN-develop
Generation 4 QLKNN: Integrated Modelling testing
This update includes improvements to the filtering of the dataset, as well as to the training process itself. The networks trained with this pipeline were found to be of sufficient quality to be included in integrated modelling frameworks. Gen4 QLKNN-10D is currently being tested in JETTO and RAPTOR.
Major changes:
- Increased the filter version number to 10 in 70c5c29, after fixing a bug in filtering: points should only be thrown away if septot is violated on the heat fluxes, since the particle fluxes can be negative and can thus cancel each other out, legitimately giving a separate flux larger than the total flux. A rough sketch of the check follows after this list.
- Added models to work with the new 'rotdiv' networks: networks trained on an extra dataset that includes rotation. These networks can then be combined with the original (9D) networks to form the canonical QLKNN-10D (see the sketch after this list).
- Added a new cost function that punishes 'popback', i.e. unstable flux predictions by the network in the stable region (illustrated after this list). For more information, see the EPS 2018 and TTF 2018 posters.
- Added model classes to load and run Keras models.
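A minimal sketch of the fixed septot check, assuming a pandas DataFrame with one column per flux; the column names, the `max_ratio` parameter, and the function itself are illustrative, not the repository's actual API:

```python
import pandas as pd

# Hypothetical column names; the real dataset uses QuaLiKiz flux labels.
HEAT_SEPARATE = ['efe_ITG', 'efe_TEM', 'efe_ETG']
HEAT_TOTAL = 'efe_total'

def septot_filter(df: pd.DataFrame, max_ratio: float = 1.5) -> pd.DataFrame:
    """Keep points whose separate heat fluxes do not exceed the total flux.

    The check is applied to heat fluxes only: particle fluxes can be
    negative and cancel each other out, so a separate particle flux can
    legitimately be larger in magnitude than the total flux.
    """
    ok = pd.Series(True, index=df.index)
    for col in HEAT_SEPARATE:
        ok &= df[col].abs() <= max_ratio * df[HEAT_TOTAL].abs()
    return df[ok]
```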
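The rotdiv combination can be pictured roughly as below. This assumes the rotdiv network predicts the ratio of the flux with rotation to the flux without, making the 10D prediction a multiplicative correction on the 9D one; the function names, input layout, and the multiplicative form itself are assumptions here:

```python
import numpy as np

def qlknn_10d_flux(x10: np.ndarray, net_9d, rotdiv_net) -> np.ndarray:
    """Combine a rotationless 9D network with a 'rotdiv' network.

    Sketch under the assumption that the rotdiv network predicts the ratio
    flux(with rotation) / flux(without rotation), so the 10D prediction is
    a multiplicative correction on top of the 9D prediction.
    """
    x9 = x10[:, :9]                  # hypothetical layout: rotation input last
    flux_norot = net_9d.predict(x9)  # flux without rotation
    rot_ratio = rotdiv_net.predict(x10)
    return flux_norot * rot_ratio
```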
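The popback penalty can be sketched as an extra loss term that punishes nonzero predicted flux where the training target is stable (zero flux). The exact form used is described in the posters; the version below, including `lambda_popback`, is only illustrative:

```python
import tensorflow as tf

def loss_with_popback(y_true, y_pred, lambda_popback=1.0):
    """MSE plus a penalty on 'popback': predicting unstable (nonzero) flux
    where QuaLiKiz says the plasma is stable (target flux is zero)."""
    mse = tf.reduce_mean(tf.square(y_true - y_pred))
    # Mask of stable points: target flux is exactly zero in the dataset.
    stable = tf.cast(tf.equal(y_true, 0.0), y_pred.dtype)
    # Penalize positive predicted flux on stable points only.
    popback = tf.reduce_mean(stable * tf.square(tf.nn.relu(y_pred)))
    return mse + lambda_popback * popback
```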
Misc:
- Pandas was causing some slowdown in training; replaced it with pure NumPy where needed
- Optimized RAM usage during training; further optimization is possible
- Simplified the NNDB structure for easier querying
- Split dataset-specific and general dataset handling into separate scripts
- Hypercube folding and conversion to pandas can now be done out-of-core (OOC; see the sketch after this list); filtering is not yet out-of-core
- Added multi-epoch TensorFlow performance tracing
- Added scripts for simple hyperparameter scans, using native Python instead of the Luigi framework + NNDB
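As referenced above, a minimal sketch of what out-of-core conversion can look like, using dask-backed xarray and chunk-wise appends; the file names, the `Ati` chunking dimension, and the HDF5 output are illustrative, not the pipeline's actual implementation:

```python
import xarray as xr

# Open the QuaLiKiz hypercube lazily, chunked along one input dimension,
# so the full dataset never has to fit in RAM at once (requires dask).
ds = xr.open_dataset('hypercube.nc', chunks={'Ati': 1})

# Process one slice of the hypercube at a time: convert each slice to a
# pandas DataFrame and append it to on-disk storage.
for i in range(ds.sizes['Ati']):
    chunk = ds.isel(Ati=i).load()  # only this slice is read into memory
    df = chunk.to_dataframe()
    df.to_hdf('folded.h5', key='data', mode='a', append=True, format='table')
```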
Generation 3 QLKNN
New dataset filter with J. Citrin's stamp of approval: gen3 filter8. Data IO is now quicker, by saving each variable separately in the netCDF4 file (sketched below). Added the 'victor rule', an analytical ExB shearing scaling rule. Networks are implemented in RAPTOR, but there are still issues with popback and rotation.
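The per-variable IO can be pictured roughly as follows (a sketch with xarray; the file naming is illustrative):

```python
import xarray as xr

ds = xr.open_dataset('qlk_dataset.nc')

# Save each variable to its own netCDF4 file, so a single variable can be
# read back later without touching the rest of the dataset.
for name, var in ds.data_vars.items():
    var.to_netcdf(f'qlk_dataset_{name}.nc', format='NETCDF4')
```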
Misc Changes:
- QLKNN-develop is now installable as a pip package
- Combonets/divsum nets can now be auto-generated from the NNDB
- Particle-based variables (pf, df, vt, vc) are now quick-sliceable
- Recursive statistics can now be retrieved from the NNDB
- Training pipeline can now run on Lisa
Generation 2 QLKNN
A cleanup of the gen 1.5 networks. The Luigi pipeline now supports training on many parallel nodes on a supercomputer, in this case Marconi. The filtering pipeline has been updated to filter7. Quantitative measures of goodness have been added (thresh mismatch, wobble, popback), although it is not yet known which values are 'good enough'. These measures of goodness can be determined for all networks (7D, 9D, divsum) and are saved in the NNDB. Two of them are sketched below.
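As a rough illustration, two of these measures could be computed along the lines below; the exact definitions used in the NNDB may differ:

```python
import numpy as np

def popback_fraction(target: np.ndarray, pred: np.ndarray,
                     tol: float = 1e-3) -> float:
    """Fraction of stable points (target flux == 0) where the network
    nevertheless predicts a significant flux."""
    stable = target == 0
    return float(np.mean(np.abs(pred[stable]) > tol)) if stable.any() else 0.0

def wobble(pred: np.ndarray) -> float:
    """Roughness of a prediction along a 1D input scan, measured here as
    the mean absolute second difference."""
    return float(np.mean(np.abs(np.diff(pred, n=2))))
```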
Some misc changes:
- The quickslicer has had many stability upgrades.
- The NNDB interface now uses peewee 3.x
Generation 1.5 QLKNN
This release marks the generation 1.5 QLKNN. Most of the training script and NNDB is similar to the 2017 master's thesis by @Karel-van-de-Plassche. However, there have been many bugfixes, and a distributed training pipeline based on Luigi has been added. Also, most of the framework for so-called divsum networks has been set up: networks that predict combined targets of the form flux_ions/flux_electrons and flux_ions + flux_electrons, as sketched below.
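To make the divsum idea concrete: the individual fluxes can be recovered algebraically from the two combined targets (a small sketch; variable names are illustrative):

```python
def fluxes_from_divsum(div: float, total: float):
    """Recover individual fluxes from divsum network outputs.

    div   = flux_ions / flux_electrons
    total = flux_ions + flux_electrons
    Assumes div != -1, i.e. the fluxes do not cancel exactly.
    """
    flux_electrons = total / (1.0 + div)
    flux_ions = total - flux_electrons
    return flux_ions, flux_electrons
```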
The networks produced by this code have been extensively validated by eye in the corresponding threshold dimension. Measures of goodness have been defined, but have not yet been sufficiently checked.
This code is optimized for CPUs. Running on GPUs is supported (and faster) but not optimized.