Documenter 1.0 upgrade + LanguageTool #176

Merged
merged 2 commits on Sep 24, 2023
3 changes: 2 additions & 1 deletion .JuliaFormatter.toml
@@ -1 +1,2 @@
-style = "sciml"
+style = "sciml"
+format_markdown = true
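For context, a hedged sketch of what this option does (not part of the diff): with `format_markdown = true`, a normal JuliaFormatter pass also formats the Julia code blocks embedded in the repository's Markdown files.

```julia
# Sketch, not part of this PR: format_markdown = true makes JuliaFormatter
# rewrite Julia code blocks inside Markdown files during a normal pass.
using JuliaFormatter

format(".")  # reads .JuliaFormatter.toml from the formatted directory
```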
4 changes: 4 additions & 0 deletions .github/workflows/CI.yml
@@ -3,9 +3,13 @@ on:
   pull_request:
     branches:
       - master
+    paths-ignore:
+      - 'docs/**'
   push:
     branches:
       - master
+    paths-ignore:
+      - 'docs/**'
 jobs:
   test:
     runs-on: ubuntu-latest
1 change: 1 addition & 0 deletions .gitignore
@@ -4,3 +4,4 @@
 .DS_Store
 /Manifest.toml
 /dev/
+docs/build
2 changes: 1 addition & 1 deletion docs/Project.toml
@@ -15,7 +15,7 @@ StatsBase = "2913bbd2-ae8a-5f71-8c99-4fb6c76f3a91"
 CUDA = "3, 4, 5"
 CellularAutomata = "0.0.2"
 DifferentialEquations = "7"
-Documenter = "0.27"
+Documenter = "1"
 OrdinaryDiffEq = "6"
 Plots = "1"
 PredefinedDynamicalSystems = "1"
13 changes: 3 additions & 10 deletions docs/make.jl
@@ -8,17 +8,10 @@ ENV["GKSwstype"] = "100"
 include("pages.jl")
 
 makedocs(modules = [ReservoirComputing],
-    clean = true, doctest = false,
+    clean = true, doctest = false, linkcheck = true,
     sitename = "ReservoirComputing.jl",
-    strict = [
-        :doctest,
-        :linkcheck,
-        :parse_error,
-        :example_block,
-        # Other available options are
-        # :autodocs_block, :cross_references, :docs_block, :eval_block, :example_block, :footnote, :meta_block, :missing_docs, :setup_block
-    ],
-    format = Documenter.HTML(analytics = "UA-90474609-3",
+    warnonly = [:missing_docs],
+    format = Documenter.HTML(
         assets = ["assets/favicon.ico"],
         canonical = "https://docs.sciml.ai/ReservoirComputing/stable/"),
     pages = pages)
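Documenter 1.0 removed the `strict` keyword: every check now errors by default, and `warnonly` demotes the listed checks to warnings. A minimal sketch of the migrated call (the `pages` value here is a placeholder, not this repo's actual page tree):

```julia
# Minimal Documenter 1.x sketch mirroring the migration above: checks are
# strict by default, and `warnonly` demotes selected checks to warnings.
using Documenter, ReservoirComputing

makedocs(modules = [ReservoirComputing],
    clean = true, doctest = false, linkcheck = true,
    sitename = "ReservoirComputing.jl",
    warnonly = [:missing_docs],     # missing docstrings warn instead of erroring
    format = Documenter.HTML(),
    pages = ["Home" => "index.md"]) # placeholder page tree for illustration
```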
4 changes: 2 additions & 2 deletions docs/src/api/esn.md
@@ -3,13 +3,13 @@
ESN
```

-In addition to all the components that can be explored in the documentation a couple need a separate introduction. The ```variation``` arguments can be
+In addition to all the components that can be explored in the documentation, a couple components need a separate introduction. The ```variation``` arguments can be
```@docs
Default
Hybrid
```

-These arguments detail more deep variation of the underlying model and they need a separate call. For the moment the most complex is the ```Hybrid``` call, but this can and will change in the future.
+These arguments detail a deeper variation of the underlying model, and they need a separate call. For the moment, the most complex is the ```Hybrid``` call, but this can and will change in the future.
All ESN models can be trained using the following call:
```@docs
train
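As a hedged usage sketch of that call (the reservoir size and ridge coefficient are illustrative, and `input_data`/`target_data` are assumed to be feature-by-time matrices):

```julia
# Hedged usage sketch of the documented `train` call; hyperparameters
# are illustrative, not recommendations.
using ReservoirComputing

# input_data and target_data are assumed feature-by-time matrices
esn = ESN(input_data; reservoir = RandSparseReservoir(100))
output_layer = train(esn, target_data, StandardRidge(0.0))
```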
2 changes: 1 addition & 1 deletion docs/src/api/esn_drivers.md
@@ -4,7 +4,7 @@
MRNN
GRU
```
-The ```GRU``` driver also provides the user the choice of the possible variant:
+The ```GRU``` driver also provides the user with the choice of the possible variants:
```@docs
FullyGated
Minimal
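A hedged sketch of selecting a variant (the `variant` keyword of `GRU` and the `reservoir_driver` keyword of `ESN` are assumptions about the v0.9 API; verify against the docstrings):

```julia
# Hedged sketch: `variant` and `reservoir_driver` are assumed keywords;
# check the GRU and ESN docstrings before relying on them.
driver = GRU(variant = FullyGated())
esn = ESN(input_data; reservoir = RandSparseReservoir(100),
    reservoir_driver = driver)
```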
8 changes: 4 additions & 4 deletions docs/src/api/esn_layers.md
@@ -9,7 +9,7 @@
MinimumLayer
NullLayer
```
-The sign in the ```MinimumLayer``` are chosen based on the following methods:
+The signs in the ```MinimumLayer``` are chosen based on the following methods:
```@docs
BernoulliSample
IrrationalSample
@@ -18,7 +18,7 @@ To derive the matrix one can call the following function:
```@docs
create_layer
```
-To create new input layers it suffice to define a new struct containing the needed parameters of the new input layer. This struct wiil need to be an ```AbstractLayer```, so the ```create_layer``` function can be dispatched over it. The workflow should follow this snippet:
+To create new input layers, it suffices to define a new struct containing the needed parameters of the new input layer. This struct will need to be an ```AbstractLayer```, so the ```create_layer``` function can be dispatched over it. The workflow should follow this snippet:
```julia
#creation of the new struct for the layer
struct MyNewLayer <: AbstractLayer
@@ -42,12 +42,12 @@ end
NullReservoir
```

-Like for the input layers, to actually build the matrix of the reservoir one can call the following function:
+Like for the input layers, to actually build the matrix of the reservoir, one can call the following function:
```@docs
create_reservoir
```

-To create a new reservoir the procedure is imilar to the one for the input layers. First the definition of the new struct of type ```AbstractReservoir``` with the reservoir parameters is needed. Then the dispatch over the ```create_reservoir``` function makes the model actually build the reservoir matrix. An example of the workflow is given in the following snippet:
+To create a new reservoir, the procedure is similar to the one for the input layers. First, the definition of the new struct of type ```AbstractReservoir``` with the reservoir parameters is needed. Then the dispatch over the ```create_reservoir``` function makes the model actually build the reservoir matrix. An example of the workflow is given in the following snippet:
```julia
#creation of the new struct for the reservoir
struct MyNewReservoir <: AbstractReservoir
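Since both snippets are collapsed in this diff view, here is one hedged completion of the workflow; the dispatch pattern follows the docs, while the matrix construction inside each method is purely illustrative:

```julia
# Hedged completion of the collapsed snippets; only the dispatch pattern
# is from the docs, the matrix construction below is illustrative.
using LinearAlgebra
import ReservoirComputing: AbstractLayer, AbstractReservoir,
                           create_layer, create_reservoir

struct MyNewLayer <: AbstractLayer
    scaling::Float64
end

function create_layer(input_layer::MyNewLayer, res_size, in_size)
    # illustrative: uniform weights in [-scaling, scaling]
    return input_layer.scaling .* (2 .* rand(res_size, in_size) .- 1)
end

struct MyNewReservoir <: AbstractReservoir
    radius::Float64
end

function create_reservoir(reservoir::MyNewReservoir, res_size)
    # illustrative: random matrix rescaled to the requested spectral radius
    W = randn(res_size, res_size)
    return reservoir.radius .* W ./ maximum(abs.(eigvals(W)))
end
```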
2 changes: 1 addition & 1 deletion docs/src/api/reca.md
@@ -8,4 +8,4 @@ The input encodings are the equivalent of the input matrices of the ESNs. These
RandomMapping
```

-The training and prediction follow the same workflow of the ESN. It is important to note that at the moment we were not able to find any paper using these models with a ```Generative``` approach for the prediction, so full support is given only to the ```Predictive``` method.
+The training and prediction follow the same workflow as the ESN. It is important to note that currently we were unable to find any papers using these models with a ```Generative``` approach for the prediction, so full support is given only to the ```Predictive``` method.
4 changes: 2 additions & 2 deletions docs/src/api/training.md
@@ -7,7 +7,7 @@
```

## Gaussian Regression
-Currently (v0.9) unavailable.
+Currently, v0.9 is unavailable.

## Support Vector Regression
-Support vector Regression is possible using a direct call to [LIBSVM](https://github.com/JuliaML/LIBSVM.jl) regression methods. Instead of a wrapper please refer to the use of ```LIBSVM.AbstractSVR``` in the original library.
+Support Vector Regression is possible using a direct call to [LIBSVM](https://github.com/JuliaML/LIBSVM.jl) regression methods. Instead of a wrapper, please refer to the use of ```LIBSVM.AbstractSVR``` in the original library.
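A hedged sketch of that direct call, assuming `EpsilonSVR` from LIBSVM.jl (a subtype of `LIBSVM.AbstractSVR`) and an already constructed `esn`:

```julia
# Hedged sketch: the LIBSVM regressor is passed straight to `train`;
# EpsilonSVR is assumed here, and esn/target_data come from earlier steps.
using ReservoirComputing, LIBSVM

output_layer = train(esn, target_data, LIBSVM.EpsilonSVR())
```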
12 changes: 6 additions & 6 deletions docs/src/esn_tutorials/change_layers.md
@@ -1,13 +1,13 @@
# Using Different Layers
-A great deal of efforts in the ESNs field are devoted to finding an ideal construction for the reservoir matrices. With a simple interface using ReservoirComputing.jl is possible to leverage the currently implemented matrix constructions methods for both the reservoir and the input layer. In this page it is showcased how it is possible to change both of these layers.
+A great deal of effort in the ESNs field is devoted to finding the ideal construction for the reservoir matrices. With a simple interface using ReservoirComputing.jl it is possible to leverage the currently implemented matrix construction methods for both the reservoir and the input layer. On this page, it is showcased how it is possible to change both of these layers.

The `input_init` keyword argument provided with the `ESN` constructor allows for changing the input layer. The layers provided in ReservoirComputing.jl are the following:
- ```WeightedLayer(scaling)```
- ```DenseLayer(scaling)```
- ```SparseLayer(scaling, sparsity)```
- ```MinimumLayer(weight, sampling)```
- ```InformedLayer(model_in_size; scaling=0.1, gamma=0.5)```
-In addition the user can define a custom layer following this workflow:
+In addition, the user can define a custom layer following this workflow:
```julia
#creation of the new struct for the layer
struct MyNewLayer <: AbstractLayer
@@ -39,10 +39,10 @@ function create_reservoir(reservoir::AbstractReservoir, res_size)
end
```

-## Example of minimally complex ESN
-Using [^1] and [^2] as references this section will provide an example on how to change both the input layer and the reservoir for ESNs. The full script for this example can be found [here](https://github.com/MartinuzziFrancesco/reservoir-computing-examples/blob/main/change_layers/layers.jl). This example was run on Julia v1.7.2.
+## Example of a minimally complex ESN
+Using [^1] and [^2] as references, this section will provide an example of how to change both the input layer and the reservoir for ESNs. The full script for this example can be found [here](https://github.com/MartinuzziFrancesco/reservoir-computing-examples/blob/main/change_layers/layers.jl). This example was run on Julia v1.7.2.

-The task for this example will be the one step ahead prediction of the Henon map. To obtain the data one can leverage the package [DynamicalSystems.jl](https://juliadynamics.github.io/DynamicalSystems.jl/dev/). The data is scaled to be between -1 and 1.
+The task for this example will be the one step ahead prediction of the Henon map. To obtain the data, one can leverage the package [DynamicalSystems.jl](https://juliadynamics.github.io/DynamicalSystems.jl/dev/). The data is scaled to be between -1 and 1.
```@example mesn
using PredefinedDynamicalSystems
train_len = 3000
@@ -79,7 +79,7 @@ for i=1:length(reservoirs)
println(msd(testing_target, output))
end
```
-As it is possible to see, changing layers in ESN models is straightforward. Be sure to check the API documentation for a full list of reservoir and layers.
+As it is possible to see, changing layers in ESN models is straightforward. Be sure to check the API documentation for a full list of reservoirs and layers.


## Bibliography
16 changes: 8 additions & 8 deletions docs/src/esn_tutorials/deep_esn.md
@@ -1,11 +1,11 @@
# Deep Echo State Networks

-Deep Echo State Network architectures started to gain some traction recently. In this guide we illustrate how it is possible to use ReservoirComputing.jl to build a deep ESN.
+Deep Echo State Network architectures started to gain some traction recently. In this guide, we illustrate how it is possible to use ReservoirComputing.jl to build a deep ESN.

-The network implemented in this library is taken from [^1]. It works by stacking reservoirs on top of each other, feeding the output on one in the next. The states are obtained by merging all the inner states of the stacked reservoirs. For a more in depth explanation refer to the paper linked above. The full script for this example can be found [here](https://github.com/MartinuzziFrancesco/reservoir-computing-examples/blob/main/deep-esn/deepesn.jl). This example was run on Julia v1.7.2.
+The network implemented in this library is taken from [^1]. It works by stacking reservoirs on top of each other, feeding the output from one into the next. The states are obtained by merging all the inner states of the stacked reservoirs. For a more in-depth explanation, refer to the paper linked above. The full script for this example can be found [here](https://github.com/MartinuzziFrancesco/reservoir-computing-examples/blob/main/deep-esn/deepesn.jl). This example was run on Julia v1.7.2.

## Lorenz Example
-For this example we are going to reuse the Lorenz data used in the [Lorenz System Forecasting](@ref) example.
+For this example, we are going to reuse the Lorenz data used in the [Lorenz System Forecasting](@ref) example.
```@example deep_lorenz
using OrdinaryDiffEq

@@ -31,7 +31,7 @@ target_data = data[:, shift+1:shift+train_len]
test_data = data[:,shift+train_len+1:shift+train_len+predict_len]
```

-Again, it is *important* to notice that the data needs to be formatted in a matrix with the features as rows and time steps as columns like it is done in this example. This is needed even if the time series consists of single values.
+Again, it is *important* to notice that the data needs to be formatted in a matrix, with the features as rows and time steps as columns, as in this example. This is needed even if the time series consists of single values.

The construction of the ESN is also really similar. The only difference is that the reservoir can be fed as an array of reservoirs.
```@example deep_lorenz
@@ -50,11 +50,11 @@ esn = ESN(input_data;
states_type = StandardStates())
```

-As it is possible to see, different sizes can be chosen for the different reservoirs. The input layer and bias can also be given as vectors, but of course they have to be of the same size of the reservoirs vector. If they are not passed as a vector, the value passed is going to be used for all the layers in the deep ESN.
+As it is possible to see, different sizes can be chosen for the different reservoirs. The input layer and bias can also be given as vectors, but of course, they have to be of the same size of the reservoirs vector. If they are not passed as a vector, the value passed will be used for all the layers in the deep ESN.
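A hedged sketch of that vector form (constructor names from this library's v0.9 API; the sizes are illustrative):

```julia
# Hedged sketch: input_layer and bias as vectors must match the length of
# the reservoir vector; sizes here are illustrative.
esn = ESN(input_data;
    reservoir = [RandSparseReservoir(99, radius = 1.2, sparsity = 6 / 99),
        RandSparseReservoir(80, radius = 1.2, sparsity = 6 / 80)],
    input_layer = [DenseLayer(), DenseLayer()],
    bias = [NullLayer(), NullLayer()])
```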

-In addition to using the provided functions for the construction of the layers the user can also choose to build their own matrix, or array of matrices, and feed that into the `ESN` in the same way.
+In addition to using the provided functions for the construction of the layers, the user can also choose to build their own matrix, or array of matrices, and feed that into the `ESN` in the same way.

-The training and prediction follows the usual framework:
+The training and prediction follow the usual framework:
```@example deep_lorenz
training_method = StandardRidge(0.0)
output_layer = train(esn, target_data, training_method)
@@ -83,7 +83,7 @@ plot(p1, p2, p3, plot_title = "Lorenz System Coordinates",
legendfontsize=12, titlefontsize=20)
```

-Note that there is a known bug at the moment with using `WeightedLayer` as the input layer with the deep ESN. We are in the process of investigating and solving it. The leak coefficient for the reservoirs has to always be the same with the current implementation. This is also something we are actively looking into expanding.
+Note that there is a known bug at the moment with using `WeightedLayer` as the input layer with the deep ESN. We are in the process of investigating and solving it. The leak coefficient for the reservoirs has to always be the same in the current implementation. This is also something we are actively looking into expanding.

## Documentation
[^1]: Gallicchio, Claudio, and Alessio Micheli. "_Deep echo state network (deepesn): A brief survey._" arXiv preprint arXiv:1712.04323 (2017).