Forward

rancell(inp, [state, c_state])

Examples
#result with default initialization of internal states
result = rancell(inp)
#result with internal states provided
result_state = rancell(inp, (state, c_state))
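A runnable sketch of the calls above, assuming RANCell follows the (input_size => hidden_size) constructor used by the other cells on this page, with example sizes 3 => 5 and Float32 arrays:

using RecurrentLayers

rancell = RANCell(3 => 5)          # hypothetical sizes: in = 3, out = 5
inp = rand(Float32, 3)             # input vector of size in
state = zeros(Float32, 5)          # hidden state of size out
c_state = zeros(Float32, 5)        # candidate state of size out
result = rancell(inp)                           # default zero internal states
result_state = rancell(inp, (state, c_state))  # internal states provided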
RecurrentLayers.IndRNNCell — Type

IndRNNCell((input_size => hidden_size)::Pair, σ=relu;
    init_kernel = glorot_uniform,
    init_recurrent_kernel = glorot_uniform,
    bias = true)

Independently recurrent cell. See IndRNN for a layer that processes entire sequences.

Arguments

- input_size => hidden_size: input and inner dimension of the layer
- σ: activation function. Default is relu
- init_kernel: initializer for the input to hidden weights
- init_recurrent_kernel: initializer for the hidden to hidden weights
- bias: include a bias or not. Default is true

Equations

\[\mathbf{h}_{t} = \sigma(\mathbf{W} \mathbf{x}_t + \mathbf{u} \odot \mathbf{h}_{t-1} + \mathbf{b})\]

Forward

rnncell(inp, [state])
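Examples

A minimal sketch, assuming example sizes 3 => 5, a batch of 8, and Float32 arrays; tanh is passed explicitly to show the activation argument:

using RecurrentLayers

indrnncell = IndRNNCell(3 => 5, tanh)
inp = rand(Float32, 3, 8)          # input of size in x batch_size
state = zeros(Float32, 5, 8)       # hidden state of size out x batch_size
result_state = indrnncell(inp, state)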
RecurrentLayers.LightRUCell — Type

LightRUCell((input_size => hidden_size)::Pair;
    init_kernel = glorot_uniform,
    init_recurrent_kernel = glorot_uniform,
    bias = true)

Light recurrent unit. See LightRU for a layer that processes entire sequences.

Arguments

- input_size => hidden_size: input and inner dimension of the layer
- init_kernel: initializer for the input to hidden weights
- init_recurrent_kernel: initializer for the hidden to hidden weights
- bias: include a bias or not. Default is true

Equations

\[\begin{aligned}
\tilde{h}_t &= \tanh(W_h x_t), \\
f_t &= \delta(W_f x_t + U_f h_{t-1} + b_f), \\
h_t &= (1 - f_t) \odot h_{t-1} + f_t \odot \tilde{h}_t.
\end{aligned}\]

Forward

rnncell(inp, [state])
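Examples

A minimal sketch, assuming example sizes 3 => 5; a single Float32 vector is used here, but an in x batch_size matrix works the same way:

using RecurrentLayers

lightrucell = LightRUCell(3 => 5)
inp = rand(Float32, 3)
result = lightrucell(inp)          # default zero state
state = zeros(Float32, 5)
result_state = lightrucell(inp, state)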
RecurrentLayers.LiGRUCell — Type

LiGRUCell((input_size => hidden_size)::Pair;
    init_kernel = glorot_uniform,
    init_recurrent_kernel = glorot_uniform,
    bias = true)

Light gated recurrent unit. The implementation does not include the batch normalization described in the original paper. See LiGRU for a layer that processes entire sequences.

Arguments

- input_size => hidden_size: input and inner dimension of the layer
- init_kernel: initializer for the input to hidden weights
- init_recurrent_kernel: initializer for the hidden to hidden weights
- bias: include a bias or not. Default is true

Equations

\[\begin{aligned}
z_t &= \sigma(W_z x_t + U_z h_{t-1}), \\
\tilde{h}_t &= \text{ReLU}(W_h x_t + U_h h_{t-1}), \\
h_t &= z_t \odot h_{t-1} + (1 - z_t) \odot \tilde{h}_t
\end{aligned}\]

Forward

rnncell(inp, [state])
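Examples

A minimal sketch, assuming example sizes 3 => 5 and a batch of 4:

using RecurrentLayers

ligrucell = LiGRUCell(3 => 5)
inp = rand(Float32, 3, 4)
state = zeros(Float32, 5, 4)
result_state = ligrucell(inp, state)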
RecurrentLayers.MGUCell — Type

MGUCell((input_size => hidden_size)::Pair;
    init_kernel = glorot_uniform,
    init_recurrent_kernel = glorot_uniform,
    bias = true)

Minimal gated unit. See MGU for a layer that processes entire sequences.

Arguments

- input_size => hidden_size: input and inner dimension of the layer
- init_kernel: initializer for the input to hidden weights
- init_recurrent_kernel: initializer for the hidden to hidden weights
- bias: include a bias or not. Default is true

Equations

\[\begin{aligned}
f_t &= \sigma(W_f x_t + U_f h_{t-1} + b_f), \\
\tilde{h}_t &= \tanh(W_h x_t + U_h (f_t \odot h_{t-1}) + b_h), \\
h_t &= (1 - f_t) \odot h_{t-1} + f_t \odot \tilde{h}_t
\end{aligned}\]

Forward

rnncell(inp, [state])
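Examples

A minimal sketch, assuming example sizes 3 => 5; a non-default initializer is passed to show the keyword arguments, assuming Flux supplies it as it does the glorot_uniform default:

using RecurrentLayers
using Flux: kaiming_uniform

mgucell = MGUCell(3 => 5; init_kernel = kaiming_uniform)
inp = rand(Float32, 3)
result = mgucell(inp)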
RecurrentLayers.NASCell — Type

NASCell((input_size => hidden_size);
    init_kernel = glorot_uniform,
    init_recurrent_kernel = glorot_uniform,
    bias = true)

Neural Architecture Search unit. See NAS for a layer that processes entire sequences.

Arguments

- input_size => hidden_size: input and inner dimension of the layer
- init_kernel: initializer for the input to hidden weights
- init_recurrent_kernel: initializer for the hidden to hidden weights
- bias: include a bias or not. Default is true

Equations

The final combination steps of the cell (the preceding gate equations are omitted here):

\[\begin{aligned}
c_{\text{new}} &= l_1 \cdot l_2 \\
l_5 &= \tanh(l_3 + l_4) \\
h_{\text{new}} &= \tanh(c_{\text{new}} \cdot l_5)
\end{aligned}\]

Forward

rnncell(inp, [state])
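Examples

A minimal sketch, assuming example sizes 3 => 5 and the forward call shown above:

using RecurrentLayers

nascell = NASCell(3 => 5)
inp = rand(Float32, 3)
result = nascell(inp)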
RecurrentLayers.RHNCell — Type

RHNCell((input_size => hidden_size), depth=3;
    couple_carry::Bool = true,
    cell_kwargs...)

Recurrent highway network. See RHNCellUnit for the unit component of this layer. See RHN for a layer that processes entire sequences.

Arguments

- input_size => hidden_size: input and inner dimension of the layer
- depth: depth of the recurrence. Default is 3
- couple_carry: couples the carry gate and the transform gate. Default true
- init_kernel: initializer for the input to hidden weights
- bias: include a bias or not. Default is true

Equations

\[\begin{aligned}
s_{\ell}^{[t]} &= h_{\ell}^{[t]} \odot t_{\ell}^{[t]} + s_{\ell-1}^{[t]} \odot c_{\ell}^{[t]}, \\
h_{\ell}^{[t]} &= \tanh(W_h x^{[t]}\mathbb{I}_{\ell = 1} + U_{h_{\ell}} s_{\ell-1}^{[t]} + b_{h_{\ell}}), \\
t_{\ell}^{[t]} &= \sigma(W_t x^{[t]}\mathbb{I}_{\ell = 1} + U_{t_{\ell}} s_{\ell-1}^{[t]} + b_{t_{\ell}}), \\
c_{\ell}^{[t]} &= \sigma(W_c x^{[t]}\mathbb{I}_{\ell = 1} + U_{c_{\ell}} s_{\ell-1}^{[t]} + b_{c_{\ell}})
\end{aligned}\]

Forward

rnncell(inp, [state])
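Examples

A minimal sketch, assuming example sizes 3 => 5 with an explicit recurrence depth:

using RecurrentLayers

rhncell = RHNCell(3 => 5, 2)       # depth = 2 instead of the default 3
inp = rand(Float32, 3)
result = rhncell(inp)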
RecurrentLayers.RHNCellUnit — Type

RHNCellUnit((input_size => hidden_size)::Pair;
    init_kernel = glorot_uniform,
    bias = true)
RecurrentLayers.MUT1Cell — Type

MUT1Cell((input_size => hidden_size);
    init_kernel = glorot_uniform,
    init_recurrent_kernel = glorot_uniform,
    bias = true)

Mutated unit 1 cell. See MUT1 for a layer that processes entire sequences.

Arguments

- input_size => hidden_size: input and inner dimension of the layer
- init_kernel: initializer for the input to hidden weights
- init_recurrent_kernel: initializer for the hidden to hidden weights
- bias: include a bias or not. Default is true

Equations

\[\begin{aligned}
z &= \sigma(W_z x_t + b_z), \\
r &= \sigma(W_r x_t + U_r h_t + b_r), \\
h_{t+1} &= \tanh(U_h (r \odot h_t) + \tanh(W_h x_t) + b_h) \odot z \\
&\quad + h_t \odot (1 - z).
\end{aligned}\]

Forward

rnncell(inp, [state])
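Examples

A minimal sketch, assuming example sizes 3 => 5; MUT2Cell and MUT3Cell below share the same constructor and forward call:

using RecurrentLayers

mutcell = MUT1Cell(3 => 5)
inp = rand(Float32, 3)
state = zeros(Float32, 5)
result_state = mutcell(inp, state)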
RecurrentLayers.MUT2Cell — Type

MUT2Cell((input_size => hidden_size);
    init_kernel = glorot_uniform,
    init_recurrent_kernel = glorot_uniform,
    bias = true)

Mutated unit 2 cell. See MUT2 for a layer that processes entire sequences.

Arguments

- input_size => hidden_size: input and inner dimension of the layer
- init_kernel: initializer for the input to hidden weights
- init_recurrent_kernel: initializer for the hidden to hidden weights
- bias: include a bias or not. Default is true

Equations

\[\begin{aligned}
z &= \sigma(W_z x_t + U_z h_t + b_z), \\
r &= \sigma(x_t + U_r h_t + b_r), \\
h_{t+1} &= \tanh(U_h (r \odot h_t) + W_h x_t + b_h) \odot z \\
&\quad + h_t \odot (1 - z).
\end{aligned}\]

Forward

rnncell(inp, [state])
RecurrentLayers.MUT3Cell — Type

MUT3Cell((input_size => hidden_size);
    init_kernel = glorot_uniform,
    init_recurrent_kernel = glorot_uniform,
    bias = true)

Mutated unit 3 cell. See MUT3 for a layer that processes entire sequences.

Arguments

- input_size => hidden_size: input and inner dimension of the layer
- init_kernel: initializer for the input to hidden weights
- init_recurrent_kernel: initializer for the hidden to hidden weights
- bias: include a bias or not. Default is true

Equations

\[\begin{aligned}
z &= \sigma(W_z x_t + U_z \tanh(h_t) + b_z), \\
r &= \sigma(W_r x_t + U_r h_t + b_r), \\
h_{t+1} &= \tanh(U_h (r \odot h_t) + W_h x_t + b_h) \odot z \\
&\quad + h_t \odot (1 - z).
\end{aligned}\]

Forward

rnncell(inp, [state])
RecurrentLayers.SCRNCell — Type

SCRNCell((input_size => hidden_size)::Pair;
    init_kernel = glorot_uniform,
    init_recurrent_kernel = glorot_uniform,
    bias = true,
    alpha = 0.0)

Structurally constrained recurrent cell. See SCRN for a layer that processes entire sequences.

Arguments

- input_size => hidden_size: input and inner dimension of the layer
- init_kernel: initializer for the input to hidden weights
- init_recurrent_kernel: initializer for the hidden to hidden weights
- bias: include a bias or not. Default is true
- alpha: structural constraint. Default is 0.0

Equations

\[\begin{aligned}
s_t &= (1 - \alpha) W_s x_t + \alpha s_{t-1}, \\
h_t &= \sigma(W_h s_t + U_h h_{t-1} + b_h), \\
y_t &= f(U_y h_t + W_y s_t)
\end{aligned}\]

Forward

rnncell(inp, [state, c_state])
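Examples

A minimal sketch, assuming example sizes 3 => 5; the two internal states are passed as a tuple, as in the RANCell example above:

using RecurrentLayers

scrncell = SCRNCell(3 => 5)
inp = rand(Float32, 3)
state = zeros(Float32, 5)
c_state = zeros(Float32, 5)
result_state = scrncell(inp, (state, c_state))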
RecurrentLayers.PeepholeLSTMCell — Type

PeepholeLSTMCell((input_size => hidden_size)::Pair;
    init_kernel = glorot_uniform,
    init_recurrent_kernel = glorot_uniform,
    bias = true)

Peephole long short term memory cell. See PeepholeLSTM for a layer that processes entire sequences.

Arguments

- input_size => hidden_size: input and inner dimension of the layer
- init_kernel: initializer for the input to hidden weights
- init_recurrent_kernel: initializer for the hidden to hidden weights
- bias: include a bias or not. Default is true

Equations

\[\begin{aligned}
f_t &= \sigma_g(W_f x_t + U_f c_{t-1} + b_f), \\
i_t &= \sigma_g(W_i x_t + U_i c_{t-1} + b_i), \\
o_t &= \sigma_g(W_o x_t + U_o c_{t-1} + b_o), \\
c_t &= f_t \odot c_{t-1} + i_t \odot \sigma_c(W_c x_t + b_c), \\
h_t &= o_t \odot \sigma_h(c_t).
\end{aligned}\]

Forward

lstmcell(x, [h, c])

The forward pass takes the following arguments:

- x: Input to the cell, which can be a vector of size in or a matrix of size in x batch_size.
- h: The hidden state vector of the cell, sized out, or a matrix of size out x batch_size.
- c: The candidate state, sized out, or a matrix of size out x batch_size.

If not provided, both h and c default to vectors of zeros.
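Examples

A minimal sketch, assuming example sizes 3 => 5 and a batch of 2; h and c are passed as a tuple, as in the RANCell example above:

using RecurrentLayers

lstmcell = PeepholeLSTMCell(3 => 5)
x = rand(Float32, 3, 2)
h = zeros(Float32, 5, 2)
c = zeros(Float32, 5, 2)
result_state = lstmcell(x, (h, c))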