Tuner's Oracle parameter "run_times" differs from user input "executions_per_trial" #1022
Comments
Hi @jsaladich,
The current code returns (I added comments):
{
"trial_id": "00",
"metrics": { # <--- note that this key is duplicated, is it a bug?
"metrics": { # <--- Im including loss only, but all look the same.
"loss": {
"direction": "min",
"observations": [ #<-- before it was a single object
{
"value": [
2.3045783042907715
],
"step": 0
},
# (...)
{
"value": [
2.3027408123016357
],
"step": 2
}
]
},
},
# (...)
}
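Reading that structure back out of `trial.json` is straightforward; here is a minimal sketch using the standard library, with the JSON above embedded inline (comment annotations removed) instead of read from disk:

```python
import json

# Cleaned-up version of the trial JSON shown above (annotations removed);
# in practice this would be loaded from a trial.json file.
trial_json = """
{
  "trial_id": "00",
  "metrics": {
    "metrics": {
      "loss": {
        "direction": "min",
        "observations": [
          {"value": [2.3045783042907715], "step": 0},
          {"value": [2.3027408123016357], "step": 2}
        ]
      }
    }
  }
}
"""

trial = json.loads(trial_json)

# Note the doubly nested "metrics" key, as pointed out above.
loss = trial["metrics"]["metrics"]["loss"]

# Map each step to its (single-element) value list's first entry.
per_step = {obs["step"]: obs["value"][0] for obs in loss["observations"]}
print(per_step)
```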
Hi @ghsanti, thanks a lot for your exhaustive response, and sorry for the delay in replying. Before answering you I need to check: the JSON you just posted shows the loss metric per step (i.e. epochs). That is great, but as far as I remember (I haven't used KT much recently) the KT engine selects the best step. My concern is about the
Of course, having full traceability (i.e. metrics per each step and per each execution) would be best. Please let me know if we are on the same page. Thanks a lot!
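The "best step" selection mentioned here can be sketched as follows. This is a simplified stand-in for KerasTuner's internal logic, not its actual implementation, using the observation structure from the JSON posted earlier:

```python
# Observations copied from the trial JSON shown earlier in the thread.
observations = [
    {"value": [2.3045783042907715], "step": 0},
    {"value": [2.3027408123016357], "step": 2},
]
direction = "min"  # taken from the "direction" field of the loss metric

# Pick the observation with the best value according to the direction.
choose = min if direction == "min" else max
best = choose(observations, key=lambda obs: obs["value"][0])
print(best["step"])  # the step at which the best loss was observed
```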
Working on a PR (see below); feel free to comment.
Hi @ghsanti, amazing job, and sorry for not following up (but believe me, I read you).
Thanks! No rush @jsaladich; do it whenever you can, and if you can't, it's fine as well.
@ghsanti I would never miss such an opportunity!!
Hi @ghsanti, sorry for the delay; I just ran some dummy KT optimizations.
Assuming the user asked for
In
Finally, I understand that
Let me know if my explanation is understandable! Thanks a lot for your time and patience!
Hi, the changes are in my fork only; they won't be wanted here because I removed all backwards compatibility. They may still want to support it here (but I don't see anyone replying). The fork targets keras>=3.5 and tf>=2.17. (Note that it's not finished, but it may work for simple projects.) Here I included sample outputs. (I think some of those you mentioned are fixed.)
Yes, I believe that will help any user of KT a lot. P.S.: Quick question, shouldn't the
You are welcome 🤗
That's a valid point; currently it's logged that way for simplicity, i.e. it just keeps one best-overall value (until I get the rest working reliably). I'll take a closer look at it during the next week, once I fix some failing tests. Feel free to open a discussion or issue in my fork as well, for any other changes.
Hi KerasTuner team!
Describe the bug
I ran an experiment with keras_tuner.BayesianOptimization in which executions_per_trial=3. When I check the file ./oracle.json, I realize that the field run_times is always equal to 1. Moreover, the files ./../trial.json of each trial only return 1 best score and a single value in metric.
Expected behavior
I would expect two things to behave differently:
The oracle.json file should return each trial with run_times=3 if the user requested executions_per_trial=3 in the configuration.
The trial.json file should return a list of length len(executions_per_trial) containing the scores/metrics for each execution per trial, so the user can better analyze the algorithm.
Am I missing something, or is this how it works?
Thanks!
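The mismatch being described can be illustrated with a small stdlib-only check. Note that the oracle JSON below is a hypothetical mock built from the fields named in this report (`run_times`), not real KerasTuner output, and the value 3 is the `executions_per_trial` setting from the description above:

```python
import json

executions_per_trial = 3  # what the user requested in the tuner config

# Hypothetical mock of the relevant part of ./oracle.json; per this
# report, run_times stays at 1 regardless of executions_per_trial.
oracle = json.loads('{"run_times": 1}')

# The expected behavior described above: run_times should equal
# the requested executions_per_trial.
matches = oracle["run_times"] == executions_per_trial
print("run_times matches executions_per_trial:", matches)
```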