mcvta changed the title from "Problem running the best model after hyperparameter optimization" to "Problem running the best model (Feed-Forward Neural Network) after hyperparameter optimization" on Feb 2, 2022.
Hi everyone,
I'm running the Feed-Forward Neural Network (FNN) with R (4.1.2) and TensorFlow (2.7.0) available from https://github.com/MoritzFeigl/wateRtemp, using the test dataset from the same source. After the optimization process, when the model tries to run the best model, I get the following error:
** Starting FNN computation for catchment test_catchment ***
Mean and standard deviation used for feature scaling are saved under test_catchment/FNN/standard_FNN/scaling_values.csv
Using existing scores as initial grid for the Bayesian Optimization
Bayesian Hyperparameter Optimization:
40 iterations were already computed
Run the best performing model as ensemble:
2022-02-02 13:06:51.059425: W tensorflow/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'cudart64_110.dll'; dlerror: cudart64_110.dll not found
2022-02-02 13:06:51.060834: I tensorflow/stream_executor/cuda/cudart_stub.cc:29] Ignore above cudart dlerror if you do not have a GPU set up on your machine.
Loaded Tensorflow version 2.7.0
Error in py_call_impl(callable, dots$args, dots$keywords) :
TypeError: Exception encountered when calling layer "alpha_dropout" (type AlphaDropout).
'>' not supported between instances of 'dict' and 'float'
Call arguments received:
• inputs=tf.Tensor(shape=(None, 42), dtype=float32)
• training=None
In addition: Warning message:
In if (dropout_layers) { :
the condition has length > 1 and only the first element will be used
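For what it's worth, the Python-side TypeError arises when something other than a scalar float reaches the rate comparison inside the dropout layer. Here is a minimal, hypothetical Python sketch of the failing comparison (the `rate` dict is an assumption for illustration, not the actual wateRtemp internals):

```python
# In Python 3, an ordering comparison between a dict and a float raises
# TypeError -- the same message AlphaDropout reports when its rate
# argument arrives as a dict instead of a plain number.
rate = {"dropout": 2.22044604925031e-16}  # hypothetical mis-shaped hyperparameter

try:
    rate > 0.0
except TypeError as exc:
    print(exc)  # '>' not supported between instances of 'dict' and 'float'
```

This points at the dropout hyperparameter being handed to the layer in the wrong shape (a dict/list rather than a single float), which also matches the R warning about a condition of length > 1.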
This is the code that I'm using to run the model:
These are the parameters for the best model:
layers = 3
units = 200
max_epoc = 100
early_stopping_patience = 5
batch_size = 60
dropout = 2.22044604925031E-16
ensemble = 1
Could this problem be related to the very small dropout value, dropout = 2.22044604925031E-16?
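One observation: 2.22044604925031E-16 matches double-precision machine epsilon (R's `.Machine$double.eps`) to the printed digits, which suggests the optimizer landed on the lower bound of the dropout search range rather than on an arbitrary tiny value. A quick Python check:

```python
import sys

# The dropout value from the log is IEEE-754 double machine epsilon:
# the smallest x such that 1.0 + x != 1.0.
eps = sys.float_info.epsilon
print(eps)                   # 2.220446049250313e-16
print(1.0 + eps != 1.0)      # True
print(1.0 + eps / 2 == 1.0)  # True: below epsilon, the addition is lost
```

A dropout rate that close to zero is effectively "no dropout", so the failure is unlikely to be about the magnitude itself and more likely about how the value is passed to the layer.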
Thank you