Repeated args when iterating over the optimizer #191

FelipeMLopez opened this issue Oct 17, 2022 · 0 comments

Hi,
I'm running into a problem when I iterate over the optimizer. Here is the relevant chunk of my code:

      repeat = True
      pre_eval = []
      while repeat:

          # Fresh optimizer, seeded with the evaluations gathered so far
          search = GlobalOptimizer(
              search_func,
              lower_bounds=loaded_lb,
              upper_bounds=loaded_ub,
              evaluations=pre_eval,
              maximize=True,
              epsilon=.01
          )

          num_function_calls = 50
          search.run(num_function_calls)

          pos_eval = search.evaluations

          # Keep only the evaluations whose value exceeds 1.0
          pos_eval_x = [dicc for dicc, val in pos_eval if val > 1.0]

          # Stop once we have more than 20 good evaluations; otherwise
          # carry the full evaluation history into the next iteration
          if len(pos_eval_x) > 20:
              repeat = False
          else:
              pre_eval = search.evaluations
              print(f'\n\n\nREPEAT! {n_opt} {len(pos_eval)} {len(pos_eval_x)}\n\n\n')

          del search
          gc.collect()

Between the first and the second iteration I don't see any problem, but from the second iteration onwards the optimizer always uses exactly the same arguments. E.g.:

Iteration #1
#1.1 args = [ 1, 2, 3 ]
#1.2 args = [ 3, 12, 5 ]
...
#1.50 args = [ 40, 1, 7 ]

Iteration #2
#2.1 args = [ 3, 7, 1 ]
#2.2 args = [ 1, 4, 6 ]
...
#2.50 args = [ 9, 5, 81 ]

Iteration #3
#3.1 args = [ 3, 7, 1 ]
#3.2 args = [ 1, 4, 6 ]
...
#3.50 args = [ 9, 5, 81 ]

Iteration #4
#4.1 args = [ 3, 7, 1 ]
#4.2 args = [ 1, 4, 6 ]
...
#4.50 args = [ 9, 5, 81 ]

I've managed to avoid the problem by passing a new random_state on each iteration:

      search = GlobalOptimizer(
          search_func,
          lower_bounds=loaded_lb,
          upper_bounds=loaded_ub,
          evaluations=pre_eval,
          maximize=True,
          # drawing a fresh seed here avoids the repeated arguments
          random_state=np.random.randint(1, 1000),
          epsilon=.01
      )

If I'm updating evaluations= with all the evaluations performed so far, why does the optimizer keep proposing the same arguments?
I tried to look at the code myself, but maybe you can give me a quicker answer.
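
My guess (just an assumption, I haven't traced the library's internals) is that a fixed default random_state makes every fresh instance draw the same pseudo-random sequence, regardless of the evaluations passed in. A minimal NumPy-only sketch of that effect, not tied to GlobalOptimizer itself:

      import numpy as np

      # Two generators built with the same fixed seed...
      rng_a = np.random.RandomState(42)
      rng_b = np.random.RandomState(42)

      # ...propose exactly the same "candidate arguments", no matter what
      # other state (e.g. previous evaluations) the surrounding object holds.
      candidates_a = rng_a.randint(0, 100, size=(3, 3))
      candidates_b = rng_b.randint(0, 100, size=(3, 3))

      assert (candidates_a == candidates_b).all()

If something like that happens inside GlobalOptimizer, it would explain why only a fresh random_state changes the sampled arguments.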

Thanks very much for the help!
