some docs work
MartinuzziFrancesco committed Dec 11, 2023
1 parent d35d844 commit e184098
Showing 6 changed files with 466 additions and 221 deletions.
93 changes: 18 additions & 75 deletions docs/src/index.md
@@ -1,103 +1,46 @@
# ReservoirComputing.jl

ReservoirComputing.jl is a versatile and user-friendly Julia package for the implementation of Reservoir Computing models such as Echo State Networks (ESNs). Reservoir Computing (RC) is an umbrella term for a family of models including ESNs and Liquid State Machines (LSMs). Central to RC is the expansion of the input data into a higher-dimensional space, with regression then used to train the model; in this respect, Reservoir Computers bear some resemblance to kernel methods. ReservoirComputing.jl offers a modular design, ensuring both ease of use for newcomers and flexibility for advanced users.

!!! info "Introductory material"
    This library assumes some basic knowledge of Reservoir Computing. For a good introduction, we suggest the following papers: the first two are the seminal papers about ESNs and LSMs, the others are in-depth review papers that should cover all the needed information. For the majority of the algorithms implemented in this library, the documentation cites the original work introducing them. If you are ever in doubt about a method or a function, just type `? function` in the Julia REPL to read the relevant notes.

    * Jaeger, Herbert: The “echo state” approach to analyzing and training recurrent neural networks - with an erratum note.
    * Maass W, Natschläger T, Markram H: Real-time computing without stable states: a new framework for neural computation based on perturbations.
    * Lukoševičius, Mantas: A practical guide to applying echo state networks. In: Neural Networks: Tricks of the Trade.
    * Lukoševičius, Mantas, and Herbert Jaeger: Reservoir computing approaches to recurrent neural network training.

!!! info "Performance tip"
    For faster computations on the CPU, it is suggested to add `using MKL` to the script. For brevity, this import will not be shown in every example in the documentation.

## Installation
To install ReservoirComputing.jl, ensure you have Julia version 1.6 or higher. Then follow these steps:

1. Open the Julia command line.
2. Enter the Pkg REPL mode by pressing `]`.
3. Type `add ReservoirComputing` and press Enter.

Equivalently, from the Julia REPL:

```julia
using Pkg
Pkg.add("ReservoirComputing")
```

For a more customized installation, or to contribute to the package, consider cloning the repository. Note that `Pkg.clone` has been removed from recent Julia versions; `Pkg.develop` is its replacement:

```julia
using Pkg
Pkg.develop(url = "https://github.com/SciML/ReservoirComputing.jl.git")
```

or `dev` the package from the Pkg REPL.

## Features Overview

This library provides multiple ways of training the chosen RC model. More specifically, the available algorithms are:
- `StandardRidge`: a naive implementation of Ridge Regression. The default choice for training.
- `LinearModel`: a wrapper around [MLJLinearModels](https://juliaai.github.io/MLJLinearModels.jl/stable/).
- `LIBSVM.AbstractSVR`: a direct call of [LIBSVM](https://github.com/JuliaML/LIBSVM.jl) regression methods.
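
As a minimal sketch (data sizes and the regularization value are illustrative), training with an explicit method looks like this:

```julia
using ReservoirComputing

train_data = rand(3, 500)   # 3 features, 500 time steps
target_data = rand(3, 500)

esn = ESN(train_data)

# default training method, equivalent to train(esn, target_data, StandardRidge(0.0))
output_layer = train(esn, target_data)

# explicit regularization coefficient
output_layer = train(esn, target_data, StandardRidge(1.0))
```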

Also provided are two different ways of making predictions using RC:
- `Generative`: the model uses its own prediction from the previous step to continue the prediction. It only needs the number of steps as input.
- `Predictive`: standard Machine Learning type of prediction. Given the features, the RC model will return the label/prediction.
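
Continuing the sketch above, and assuming `esn` and `output_layer` come from the previous `train` call, the two modes are used as follows:

```julia
# Generative: feed each prediction back as the next input, for 100 steps
generated = esn(Generative(100), output_layer)

# Predictive: return a prediction for a given feature matrix (illustrative data)
features = rand(3, 50)
predicted = esn(Predictive(features), output_layer)
```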

It is possible to modify the states obtained from the RC model in the training and prediction steps using the following:
- `StandardStates`: default choice, no changes will be made to the states.
- `ExtendedStates`: the states are extended through vertical concatenation with the input data.
- `PaddedStates`: the states are padded through vertical concatenation with the chosen padding value.
- `PaddedExtendedStates`: a combination of the first two: the states are first extended and then padded.
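
The states type is chosen at construction time; a brief sketch, assuming `PaddedStates` accepts its padding value as a keyword argument:

```julia
esn = ESN(train_data; states_type = ExtendedStates())
esn = ESN(train_data; states_type = PaddedStates(padding = 1.0))
```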

In addition, another modification is possible through the choice of non-linear algorithms:
- `NLADefault`: default choice, no changes will be made to the states.
- `NLAT1`
- `NLAT2`
- `NLAT3`
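
The non-linear algorithm is selected the same way, at construction time:

```julia
esn = ESN(train_data; nla_type = NLAT2())
```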

### Echo State Networks
For ESNs, the following input layers are implemented:
- `WeightedLayer`: weighted layer matrix with weights sampled from a uniform distribution.
- `DenseLayer`: dense layer matrix with weights sampled from a uniform distribution.
- `SparseLayer`: sparse layer matrix with weights sampled from a uniform distribution.
- `MinimumLayer`: matrix with constant weights, with the sign of each weight decided by one of the following:
  - `BernoulliSample`
  - `IrrationalSample`
- `InformedLayer`: special kind of weighted layer matrix for Hybrid ESNs.
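
An input layer is likewise passed to the `ESN` constructor; for example:

```julia
esn = ESN(train_data; input_layer = SparseLayer())
```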

The package also contains multiple implementations of Reservoirs:
- `RandSparseReservoir`: random sparse matrix with scaling of the spectral radius.
- `PseudoSVDReservoir`: pseudo-SVD construction of a random sparse matrix.
- `DelayLineReservoir`: minimal matrix with chosen weights.
- `DelayLineBackwardReservoir`: minimal matrix with chosen weights.
- `SimpleCycleReservoir`: minimal matrix with chosen weights.
- `CycleJumpsReservoir`: minimal matrix with chosen weights.
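
A hedged sketch, assuming the keyword names `radius` and `sparsity` for `RandSparseReservoir` (the values are illustrative):

```julia
esn = ESN(train_data;
    reservoir = RandSparseReservoir(200, radius = 1.2, sparsity = 0.1))
```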

In addition, multiple ways of driving the reservoir states are also provided:
- `RNN`: standard Recurrent Neural Network driver.
- `MRNN`: Multiple RNN driver; it consists of a linear combination of RNNs.
- `GRU`: Gated Recurrent Unit driver, with all the possible GRU variants available:
  - `FullyGated`
  - `Minimal`

A hybrid version of the model is also available through `Hybrid`.
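
For instance, a leaky RNN driver might be configured as follows (the keyword names are assumptions based on the `RNN` driver's defaults):

```julia
esn = ESN(train_data;
    reservoir_driver = RNN(activation_function = tanh, leaky_coefficient = 0.9))
```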

### Reservoir Computing with Cellular Automata
The package also provides an implementation of Reservoir Computing models based on one-dimensional Cellular Automata through the `RECA` call. For the moment, the only input encoding available (an input encoding plays a role similar to that of the input matrix for ESNs) is a random mapping, called through `RandomMapping`.

All the training methods described above can be used, as can all the modifications to the states. Both prediction methods are also possible in theory, although in the literature only `Predictive` tasks have been explored.
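
A hedged sketch of a RECA model (the rule and mapping parameters are illustrative; `DCA` is provided by CellularAutomata.jl):

```julia
using ReservoirComputing, CellularAutomata

input_data = Float64.(rand(Bool, 2, 200))  # illustrative binary input, 2 features

ca = DCA(90)  # elementary cellular automaton, rule 90
reca = RECA(input_data, ca;
    generations = 16,
    input_encoding = RandomMapping(16, 40))
```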

## Contributing

- Please refer to the
[SciML ColPrac: Contributor's Guide on Collaborative Practices for Community Packages](https://github.com/SciML/ColPrac/blob/master/README.md)
for guidance on PRs, issues, and other matters relating to contributing to SciML.

- See the [SciML Style Guide](https://github.com/SciML/SciMLStyle) for common coding practices and other style decisions.
- There are a few community forums:

+ The #diffeq-bridged and #sciml-bridged channels in the
[Julia Slack](https://julialang.org/slack/)
+ The #diffeq-bridged and #sciml-bridged channels in the
[Julia Zulip](https://julialang.zulipchat.com/#narrow/stream/279055-sciml-bridged)
+ On the [Julia Discourse forums](https://discourse.julialang.org)
+ See also [SciML Community page](https://sciml.ai/community/)

Contributions to ReservoirComputing.jl are highly encouraged and appreciated. Whether it's through implementing new RC model variations, enhancing documentation, adding examples, or any improvement, your contribution is valuable. We welcome posts of relevant papers or ideas in the issues section. For deeper insights into the library's functionality, the API section in the documentation is a great resource. For any queries not suited for issues, please reach out to the lead developers via Slack or email.

## Citing

If you use ReservoirComputing.jl in your work, we kindly ask you to cite it. Here is the BibTeX entry for your convenience:

```bibtex
@article{JMLR:v23:22-0611,
  author  = {Francesco Martinuzzi and Chris Rackauckas and Anas Abdelrehim and Miguel D. Mahecha and Karin Mora},
  title   = {ReservoirComputing.jl: An Efficient and Modular Library for Reservoir Computing Models},
  journal = {Journal of Machine Learning Research},
  year    = {2022},
  volume  = {23},
  number  = {288},
  pages   = {1--8},
  url     = {http://jmlr.org/papers/v23/22-0611.html}
}
```
113 changes: 82 additions & 31 deletions src/esn/echostatenetwork.jl
@@ -16,7 +16,9 @@ end
"""
    Default()

The `Default` struct specifies the use of the standard model in Echo State Networks (ESNs).
It requires no parameters and is used when no specific variations or customizations of the
ESN model are needed. This struct is ideal for straightforward applications where the
default ESN settings are sufficient.
"""
struct Default <: AbstractVariation end
struct Hybrid{T, K, O, I, S, D} <: AbstractVariation
@@ -31,11 +33,24 @@ end
"""
    Hybrid(prior_model, u0, tspan, datasize)

Constructs a `Hybrid` variation of Echo State Networks (ESNs), integrating a knowledge-based
model (`prior_model`) with ESNs for advanced training and prediction in chaotic systems.
This entails a different training and prediction procedure than the `Default` ESN.

# Parameters
- `prior_model`: A knowledge-based model function for integration with ESNs.
- `u0`: Initial conditions for the model.
- `tspan`: Time span as a tuple, indicating the duration for model operation.
- `datasize`: The size of the data to be processed.
# Returns
- A `Hybrid` struct instance representing the combined ESN and knowledge-based model.
This method is effective for chaotic processes as highlighted in [^Pathak].
Reference:
[^Pathak]: Jaideep Pathak et al.
"Hybrid Forecasting of Chaotic Processes:
Using Machine Learning in Conjunction with a Knowledge-Based Model" (2018).
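
# Example (illustrative)

The following sketch assumes `prior_model_fn` is a user-supplied knowledge-based model
function and `train_data` is an existing data matrix:

```julia
u0 = [1.0, 0.0, 0.0]
tspan = (0.0, 100.0)
datasize = 1000

hybrid = Hybrid(prior_model_fn, u0, tspan, datasize)
esn = ESN(train_data; variation = hybrid)
```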
"""
function Hybrid(prior_model, u0, tspan, datasize)
trange = collect(range(tspan[1], tspan[2], length = datasize))
@@ -47,28 +62,33 @@ function Hybrid(prior_model, u0, tspan, datasize)
end

"""
    ESN(train_data; kwargs...) -> ESN

Creates an Echo State Network (ESN) using the specified parameters and training data,
suitable for various machine learning tasks. The returned struct has its states already
harvested and is ready to be trained.

# Parameters

- `train_data`: Matrix of training data (columns as time steps, rows as features).
- `variation`: Variation of ESN (default: `Default()`).
- `input_layer`: Input layer of ESN (default: `DenseLayer()`).
- `reservoir`: Reservoir of the ESN (default: `RandSparseReservoir(100)`).
- `bias`: Bias vector for each time step (default: `NullLayer()`).
- `reservoir_driver`: Mechanism for evolving reservoir states (default: `RNN()`).
- `nla_type`: Non-linear activation type (default: `NLADefault()`).
- `states_type`: Format for storing states (default: `StandardStates()`).
- `washout`: Initial time steps to discard (default: `0`).
- `matrix_type`: Type of matrices used internally (default: type of `train_data`).

# Returns

- An initialized ESN instance with the specified parameters.

After training, the same struct is used for prediction through the call

    (esn::ESN)(prediction::AbstractPrediction,
        output_layer::AbstractOutputLayer;
        initial_conditions = output_layer.last_value,
        last_state = esn.states[:, end])

which takes a prediction type and the output layer obtained from training. The
`initial_conditions` and `last_state` keyword arguments can be left at their defaults
unless there is a specific reason to change them. All the components are detailed in the
API documentation.

# Examples

```julia
using ReservoirComputing

train_data = rand(10, 100)  # 10 features, 100 time steps

esn = ESN(train_data, reservoir = RandSparseReservoir(200), washout = 10)
```
"""
function ESN(train_data;
variation = Default(),
@@ -187,11 +207,42 @@

#training dispatch on esn
"""
    train(esn::AbstractEchoStateNetwork, target_data, training_method = StandardRidge(0.0))

Trains an Echo State Network (ESN) using the provided target data and a specified training method.

# Parameters

- `esn::AbstractEchoStateNetwork`: The ESN instance to be trained.
- `target_data`: Supervised training data for the ESN.
- `training_method`: The method for training the ESN (default: `StandardRidge(0.0)`).

# Returns

- An `OutputLayer` object containing the trained output weights, to be fed to the ESN call
  for prediction. Its exact structure depends on the `training_method`.

# Examples

```julia
using ReservoirComputing

# Initialize an ESN instance and target data
train_data = rand(10, 100)  # 10 features, 100 time steps
esn = ESN(train_data, reservoir = RandSparseReservoir(200))
target_data = rand(size(train_data, 2))

# Train the ESN using the default training method
output_layer = train(esn, target_data)

# Train the ESN using a custom regularization coefficient
output_layer = train(esn, target_data, StandardRidge(1.0))
```

# Notes

- When using a `Hybrid` variation, the function extends the state matrix with data from the
  physical model included in the `variation`.
- The training is handled by a lower-level `_train` function, which takes the new state matrix
  and performs the actual training using the specified `training_method`.
"""
function train(esn::AbstractEchoStateNetwork,
target_data,
    training_method = StandardRidge(0.0))
