[WIP] Documentation updates #182

Merged · 6 commits · Dec 18, 2023
2 changes: 1 addition & 1 deletion Project.toml

@@ -1,7 +1,7 @@
name = "ReservoirComputing"
uuid = "7c2d2b1e-3dd4-11ea-355a-8f6a8116e294"
authors = ["Francesco Martinuzzi"]
-version = "0.9.4"
+version = "0.9.5"

[deps]
Adapt = "79e6a3ab-5dfb-504d-930d-738a2a938a0e"
54 changes: 32 additions & 22 deletions README.md

@@ -2,13 +2,13 @@

[![Join the chat at https://julialang.zulipchat.com #sciml-bridged](https://img.shields.io/static/v1?label=Zulip&message=chat&color=9558b2&labelColor=389826)](https://julialang.zulipchat.com/#narrow/stream/279055-sciml-bridged)
[![Global Docs](https://img.shields.io/badge/docs-SciML-blue.svg)](https://docs.sciml.ai/ReservoirComputing/stable/)
-[![arXiv](https://img.shields.io/badge/arXiv-2204.05117-00b300.svg)](https://arxiv.org/abs/2204.05117)
+[![arXiv](https://img.shields.io/badge/arXiv-2204.05117-00b300.svg)](https://arxiv.org/abs/2204.05117)

[![codecov](https://codecov.io/gh/SciML/ReservoirComputing.jl/branch/master/graph/badge.svg)](https://codecov.io/gh/SciML/ReservoirComputing.jl)
[![Build Status](https://github.com/SciML/ReservoirComputing.jl/workflows/CI/badge.svg)](https://github.com/SciML/ReservoirComputing.jl/actions?query=workflow%3ACI)
[![Build status](https://badge.buildkite.com/db8f91b89a10ad79bbd1d9fdb1340e6f6602a1c0ed9496d4d0.svg)](https://buildkite.com/julialang/reservoircomputing-dot-jl)

-[![ColPrac: Contributor's Guide on Collaborative Practices for Community Packages](https://img.shields.io/badge/ColPrac-Contributor's%20Guide-blueviolet)](https://github.com/SciML/ColPrac)
+[![ColPrac: Contributor's Guide on Collaborative Practices for Community Packages](https://img.shields.io/badge/ColPrac-Contributor%27s%20Guide-blueviolet)](https://github.com/SciML/ColPrac)
[![SciML Code Style](https://img.shields.io/static/v1?label=code%20style&message=SciML&color=9558b2&labelColor=389826)](https://github.com/SciML/SciMLStyle)

![rc_full_logo_large_white_cropped](https://user-images.githubusercontent.com/10376688/144242116-8243f58a-5ac6-4e0e-88d5-3409f00e20b4.png)
@@ -23,58 +23,68 @@ To illustrate the workflow of this library we will showcase how it is possible t
using ReservoirComputing, OrdinaryDiffEq

#lorenz system parameters
-u0 = [1.0,0.0,0.0]
-tspan = (0.0,200.0)
-p = [10.0,28.0,8/3]
+u0 = [1.0, 0.0, 0.0]
+tspan = (0.0, 200.0)
+p = [10.0, 28.0, 8 / 3]

#define lorenz system
-function lorenz(du,u,p,t)
-    du[1] = p[1]*(u[2]-u[1])
-    du[2] = u[1]*(p[2]-u[3]) - u[2]
-    du[3] = u[1]*u[2] - p[3]*u[3]
+function lorenz(du, u, p, t)
+    du[1] = p[1] * (u[2] - u[1])
+    du[2] = u[1] * (p[2] - u[3]) - u[2]
+    du[3] = u[1] * u[2] - p[3] * u[3]
end
#solve and take data
-prob = ODEProblem(lorenz, u0, tspan, p)
-data = solve(prob, ABM54(), dt=0.02)
+prob = ODEProblem(lorenz, u0, tspan, p)
+data = solve(prob, ABM54(), dt = 0.02)

shift = 300
train_len = 5000
predict_len = 1250

#one step ahead for generative prediction
-input_data = data[:, shift:shift+train_len-1]
-target_data = data[:, shift+1:shift+train_len]
+input_data = data[:, shift:(shift + train_len - 1)]
+target_data = data[:, (shift + 1):(shift + train_len)]

-test = data[:,shift+train_len:shift+train_len+predict_len-1]
+test = data[:, (shift + train_len):(shift + train_len + predict_len - 1)]
```

Now that we have the data, we can initialize the ESN with the chosen parameters. Since this is a quick example, we will change as few parameters as possible. For more detailed examples and explanations of the functions, please refer to the documentation.

```julia
res_size = 300
-esn = ESN(input_data;
-    reservoir = RandSparseReservoir(res_size, radius=1.2, sparsity=6/res_size),
-    input_layer = WeightedLayer(),
-    nla_type = NLAT2())
+esn = ESN(input_data;
+    reservoir = RandSparseReservoir(res_size, radius = 1.2, sparsity = 6 / res_size),
+    input_layer = WeightedLayer(),
+    nla_type = NLAT2())
```

The echo state network can now be trained and tested. If no training method is specified, ordinary least squares regression is used by default. The full range of training methods is detailed in the documentation.

```julia
output_layer = train(esn, target_data)
output = esn(Generative(predict_len), output_layer)
```
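A different linear method can also be passed as the third argument to `train`. As a minimal sketch (assuming `StandardRidge` takes the regularization coefficient as its argument, as in the training API page touched by this PR):

```julia
#a sketch: ridge regression instead of the OLS default;
#StandardRidge(coefficient) is the assumed constructor
ridge_layer = train(esn, target_data, StandardRidge(1e-6))
ridge_output = esn(Generative(predict_len), ridge_layer)
```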

The prediction is returned as a matrix (`output` in the code above) containing the predicted trajectories. The results can now be easily plotted (for the actual script used to obtain this plot, please refer to the documentation):

```julia
using Plots
-plot(transpose(output),layout=(3,1), label="predicted")
-plot!(transpose(test),layout=(3,1), label="actual")
+plot(transpose(output), layout = (3, 1), label = "predicted")
+plot!(transpose(test), layout = (3, 1), label = "actual")
```

![lorenz_basic](https://user-images.githubusercontent.com/10376688/166227371-8bffa318-5c49-401f-9c64-9c71980cb3f7.png)

One can also visualize the phase space of the predicted attractor and compare it with the actual one:

```julia
-plot(transpose(output)[:,1], transpose(output)[:,2], transpose(output)[:,3], label="predicted")
-plot!(transpose(test)[:,1], transpose(test)[:,2], transpose(test)[:,3], label="actual")
+plot(transpose(output)[:, 1],
+    transpose(output)[:, 2],
+    transpose(output)[:, 3],
+    label = "predicted")
+plot!(transpose(test)[:, 1], transpose(test)[:, 2], transpose(test)[:, 3], label = "actual")
```

![lorenz_attractor](https://user-images.githubusercontent.com/10376688/81470281-5a34b580-91ea-11ea-9eea-d2b266da19f4.png)

## Citing
15 changes: 7 additions & 8 deletions docs/make.jl

@@ -8,13 +8,12 @@ ENV["GKSwstype"] = "100"
include("pages.jl")

makedocs(modules = [ReservoirComputing],
-    sitename = "ReservoirComputing.jl",
-    clean = true, doctest = false, linkcheck = true,
-    warnonly = [:missing_docs],
-    format = Documenter.HTML(
-        assets = ["assets/favicon.ico"],
-        canonical = "https://docs.sciml.ai/ReservoirComputing/stable/"),
-    pages = pages)
+    sitename = "ReservoirComputing.jl",
+    clean = true, doctest = false, linkcheck = true,
+    warnonly = [:missing_docs],
+    format = Documenter.HTML(assets = ["assets/favicon.ico"],
+        canonical = "https://docs.sciml.ai/ReservoirComputing/stable/"),
+    pages = pages)

deploydocs(repo = "github.com/SciML/ReservoirComputing.jl.git";
-    push_preview = true)
+    push_preview = true)
28 changes: 14 additions & 14 deletions docs/pages.jl

@@ -1,21 +1,21 @@
pages = [
"ReservoirComputing.jl" => "index.md",
"General Settings" => Any["Changing Training Algorithms" => "general/different_training.md",
"Altering States" => "general/states_variation.md",
"Generative vs Predictive" => "general/predictive_generative.md"],
"Altering States" => "general/states_variation.md",
"Generative vs Predictive" => "general/predictive_generative.md"],
"Echo State Network Tutorials" => Any["Lorenz System Forecasting" => "esn_tutorials/lorenz_basic.md",
#"Mackey-Glass Forecasting on GPU" => "esn_tutorials/mackeyglass_basic.md",
"Using Different Layers" => "esn_tutorials/change_layers.md",
"Using Different Reservoir Drivers" => "esn_tutorials/different_drivers.md",
#"Using Different Training Methods" => "esn_tutorials/different_training.md",
"Deep Echo State Networks" => "esn_tutorials/deep_esn.md",
"Hybrid Echo State Networks" => "esn_tutorials/hybrid.md"],
#"Mackey-Glass Forecasting on GPU" => "esn_tutorials/mackeyglass_basic.md",
"Using Different Layers" => "esn_tutorials/change_layers.md",
"Using Different Reservoir Drivers" => "esn_tutorials/different_drivers.md",
#"Using Different Training Methods" => "esn_tutorials/different_training.md",
"Deep Echo State Networks" => "esn_tutorials/deep_esn.md",
"Hybrid Echo State Networks" => "esn_tutorials/hybrid.md"],
"Reservoir Computing with Cellular Automata" => "reca_tutorials/reca.md",
"API Documentation" => Any["Training Algorithms" => "api/training.md",
"States Modifications" => "api/states.md",
"Prediction Types" => "api/predict.md",
"Echo State Networks" => "api/esn.md",
"ESN Layers" => "api/esn_layers.md",
"ESN Drivers" => "api/esn_drivers.md",
"ReCA" => "api/reca.md"],
"States Modifications" => "api/states.md",
"Prediction Types" => "api/predict.md",
"Echo State Networks" => "api/esn.md",
"ESN Layers" => "api/esn_layers.md",
"ESN Drivers" => "api/esn_drivers.md",
"ReCA" => "api/reca.md"],
]
18 changes: 15 additions & 3 deletions docs/src/api/esn.md

@@ -1,16 +1,28 @@
# Echo State Networks

+The core component of an ESN is the `ESN` type. It represents the entire Echo State Network and includes parameters for configuring the reservoir, input scaling, and output weights. Here's the documentation for the `ESN` type:

```@docs
ESN
```

-In addition to all the components that can be explored in the documentation, a couple components need a separate introduction. The ```variation``` arguments can be
+## Variations
+
+In addition to the standard `ESN` model, there are variations that allow for deeper customization of the underlying model. Currently, there are two available variations: `Default` and `Hybrid`. These variations provide different ways to configure the ESN. Here's the documentation for the variations:

```@docs
Default
Hybrid
```

-These arguments detail a deeper variation of the underlying model, and they need a separate call. For the moment, the most complex is the ```Hybrid``` call, but this can and will change in the future.
-All ESN models can be trained using the following call:
+The `Hybrid` variation is the most complex option and offers additional customization. Note that more variations may be added in the future to provide even greater flexibility.
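As a minimal sketch of how a variation is passed to the model (this is not part of the PR; it assumes the `Hybrid(prior_model, u0, tspan, datasize)` constructor documented above and the `variation` keyword of `ESN`, with `prior_model_data_generator`, `u0`, `tspan_train`, and `train_len` as hypothetical placeholders):

```julia
#hypothetical prior model: any function returning the approximate
#model's trajectory over the training span
hybrid = Hybrid(prior_model_data_generator, u0, tspan_train, train_len)
esn = ESN(input_data;
    reservoir = RandSparseReservoir(300),
    variation = hybrid)
```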

+## Training

+To train an ESN model, you can use the `train` function. It takes the ESN model, the training data, and optional parameters as input, and returns the trained output layer. Here's the documentation for the `train` function:

```@docs
train
```

+With these components and variations, you can configure and train ESN models for various time series and sequential data prediction tasks.
7 changes: 5 additions & 2 deletions docs/src/api/esn_drivers.md

@@ -1,13 +1,16 @@
# ESN Drivers

```@docs
RNN
MRNN
GRU
```
-The ```GRU``` driver also provides the user with the choice of the possible variants:
+The `GRU` driver also provides the user with the choice of the possible variants:

```@docs
FullyGated
Minimal
```
-Please refer to the original papers for more detail about these architectures.
+Please refer to the original papers for more detail about these architectures.
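As a hedged sketch of selecting a variant (assuming `GRU` exposes a `variant` keyword and `ESN` accepts the driver through its `reservoir_driver` keyword; `input_data` is a placeholder for the training input):

```julia
#drive the reservoir with a fully gated GRU instead of the default RNN
esn = ESN(input_data;
    reservoir = RandSparseReservoir(300),
    reservoir_driver = GRU(variant = FullyGated()))
```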
16 changes: 13 additions & 3 deletions docs/src/api/esn_layers.md

@@ -1,6 +1,7 @@
# ESN Layers

## Input Layers

```@docs
WeightedLayer
DenseLayer
@@ -9,16 +10,22 @@
MinimumLayer
NullLayer
```
-The signs in the ```MinimumLayer``` are chosen based on the following methods:
+The signs in the `MinimumLayer` are chosen based on the following methods:

```@docs
BernoulliSample
IrrationalSample
```

To derive the matrix one can call the following function:

```@docs
create_layer
```
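A usage sketch (assuming the `create_layer(layer, res_size, in_size)` argument order that the dispatch snippet below also follows; the sizes are illustrative):

```julia
#materialize the input matrix of a DenseLayer for a reservoir of
#300 nodes fed by 3 input features
input_matrix = create_layer(DenseLayer(), 300, 3)
```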
-To create new input layers, it suffices to define a new struct containing the needed parameters of the new input layer. This struct will need to be an ```AbstractLayer```, so the ```create_layer``` function can be dispatched over it. The workflow should follow this snippet:
+To create new input layers, it suffices to define a new struct containing the needed parameters of the new input layer. This struct will need to be an `AbstractLayer`, so the `create_layer` function can be dispatched over it. The workflow should follow this snippet:

```julia
#creation of the new struct for the layer
struct MyNewLayer <: AbstractLayer
    #the layer params go here
end

#dispatch over the function to actually build the layer matrix
function create_layer(input_layer::MyNewLayer, res_size, in_size)
    #the new algorithm to build the input layer goes here
end
```

## Reservoirs

```@docs
RandSparseReservoir
PseudoSVDReservoir
@@ -43,11 +51,13 @@ end
```

Like for the input layers, to actually build the matrix of the reservoir, one can call the following function:

```@docs
create_reservoir
```
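A usage sketch (assuming `create_reservoir(reservoir, res_size)` mirrors the input-layer workflow; the keyword values are illustrative):

```julia
#materialize a 300×300 sparse reservoir matrix
reservoir_matrix = create_reservoir(RandSparseReservoir(300, radius = 1.2, sparsity = 0.02), 300)
```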

-To create a new reservoir, the procedure is similar to the one for the input layers. First, the definition of the new struct of type ```AbstractReservoir``` with the reservoir parameters is needed. Then the dispatch over the ```create_reservoir``` function makes the model actually build the reservoir matrix. An example of the workflow is given in the following snippet:
+To create a new reservoir, the procedure is similar to the one for the input layers. First, the definition of the new struct of type `AbstractReservoir` with the reservoir parameters is needed. Then the dispatch over the `create_reservoir` function makes the model actually build the reservoir matrix. An example of the workflow is given in the following snippet:

```julia
#creation of the new struct for the reservoir
struct MyNewReservoir <: AbstractReservoir
    #the reservoir params go here
end

#dispatch over the function to build the reservoir matrix
function create_reservoir(reservoir::MyNewReservoir, res_size)
    #the new algorithm to build the reservoir matrix goes here
end
```
1 change: 1 addition & 0 deletions docs/src/api/predict.md

@@ -1,4 +1,5 @@
# Prediction Types

```@docs
Generative
Predictive
```
4 changes: 3 additions & 1 deletion docs/src/api/reca.md

@@ -1,11 +1,13 @@
# Reservoir Computing with Cellular Automata

```@docs
RECA
```

The input encodings are the equivalent of the input matrices of the ESNs. These are the available encodings:

```@docs
RandomMapping
```

-The training and prediction follow the same workflow as the ESN. It is important to note that currently we were unable to find any papers using these models with a ```Generative``` approach for the prediction, so full support is given only to the ```Predictive``` method.
+The training and prediction follow the same workflow as the ESN. It is important to note that we have so far been unable to find any papers using these models with a `Generative` approach for the prediction, so full support is given only to the `Predictive` method.
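As a hedged sketch of that workflow (not part of this PR; it assumes the `RECA(train_data, automaton; generations, input_encoding)` constructor, `RandomMapping(permutations, expansion_size)`, and the `DCA` elementary-automaton type from CellularAutomata.jl, with the data variables as placeholders):

```julia
using ReservoirComputing, CellularAutomata

#rule-90 elementary cellular automaton as the reservoir substrate
reca = RECA(input_data, DCA(90);
    generations = 16,
    input_encoding = RandomMapping(16, 40))

output_layer = train(reca, target_data)
prediction = reca(Predictive(test_input), output_layer)
```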
2 changes: 2 additions & 0 deletions docs/src/api/states.md

@@ -1,6 +1,7 @@
# States Modifications

## Padding and Extension

```@docs
StandardStates
ExtendedStates
@@ -9,6 +10,7 @@
```

## Non Linear Transformations

```@docs
NLADefault
NLAT1
NLAT2
NLAT3
```
5 changes: 4 additions & 1 deletion docs/src/api/training.md

@@ -1,13 +1,16 @@
# Training Algorithms

## Linear Models

```@docs
StandardRidge
LinearModel
```

## Gaussian Regression

Gaussian regression is currently unavailable in v0.9.

## Support Vector Regression
-Support Vector Regression is possible using a direct call to [LIBSVM](https://github.com/JuliaML/LIBSVM.jl) regression methods. Instead of a wrapper, please refer to the use of ```LIBSVM.AbstractSVR``` in the original library.
+Support Vector Regression is possible using a direct call to [LIBSVM](https://github.com/JuliaML/LIBSVM.jl) regression methods. Instead of a wrapper, please refer to the use of `LIBSVM.AbstractSVR` in the original library.
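A hedged sketch of such a direct call (assuming `train` accepts any `LIBSVM.AbstractSVR` instance as the training method; `esn` and `target_data` are placeholders from the ESN workflow):

```julia
using ReservoirComputing, LIBSVM

#EpsilonSVR() is an AbstractSVR with default hyperparameters
svr = LIBSVM.EpsilonSVR()
output_layer = train(esn, target_data, svr)
```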