EHVI & NEHVI break with more than 7 objectives #2387
This reproduces. Thanks for reporting. Weird bug!
One thing to note is that HV-based acquisition functions generally don't scale well to problems with many objectives. 2-3 objectives are generally fine; with 4+ you'll likely see a substantial slowdown and/or memory explosion because of how complex the box decompositions of the Pareto set become. In a case with 8 objectives such as yours, you'll likely want to either express some of the objectives as constraints instead (if that's possible), drop them from the optimization, or use a different acquisition function such as qParEGO (which scales better but is less sample efficient).
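For reference, a minimal sketch of the ParEGO-style alternative (following the random-weight Chebyshev scalarization pattern from BoTorch's multi-objective tutorial; `train_Y` is a stand-in for your observed objective values, not part of the original report):

```python
import torch
from botorch.acquisition.objective import GenericMCObjective
from botorch.utils.multi_objective.scalarization import get_chebyshev_scalarization
from botorch.utils.sampling import sample_simplex

num_objectives = 8
train_Y = torch.rand(20, num_objectives)  # stand-in for observed objective values

# Draw a random weight vector on the simplex and scalarize the objectives,
# turning the 8-objective problem into a single-objective one.
weights = sample_simplex(num_objectives).squeeze()
scalarization = get_chebyshev_scalarization(weights=weights, Y=train_Y)
objective = GenericMCObjective(scalarization)
# `objective` can then be passed to e.g. qNoisyExpectedImprovement, which
# avoids the box decompositions used by EHVI/NEHVI entirely.
```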
I started looking into this, and the bug seems to stem from the hypervolume computations starting to use zero cells once m > 7, because this check for Pareto dominance always evaluates to …
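(For context, an illustration of what a vectorized Pareto-dominance test looks like in general; this is not the exact check referenced above:)

```python
import torch

# Weak-dominance test under maximization: point i is dominated if some
# other point is at least as good in every objective and strictly better
# in at least one.
def is_dominated(Y: torch.Tensor) -> torch.Tensor:
    # Y has shape (n, m); entry [i, j] below compares point j against point i.
    weakly_better = (Y.unsqueeze(0) >= Y.unsqueeze(1)).all(dim=-1)   # (n, n)
    strictly_better = (Y.unsqueeze(0) > Y.unsqueeze(1)).any(dim=-1)  # (n, n)
    return (weakly_better & strictly_better).any(dim=1)              # (n,)
```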
cc @sdaulton |
Hi, I wanted to ask if there are any updates on this issue. Cheers!
I'm afraid I haven't made any progress on this, but it remains a bug we want to understand.
@esantorella, @Balandat, my sense is that we may want to validate against too many objectives in (n)EHVI and simply not allow this behavior, since users are likely best served by a) converting some of their objectives into constraints, or b) using ParEGO if they do indeed have this many objectives. Is that right?
That is correct @lena-kashtelyan, people should not be using EHVI-based methods for 7+ objectives. I am not sure what the default approximation values are for the approximate HV computation, but if we are sufficiently aggressive (zeta = 1e-3) at higher dimensions (say M = 4 or M = 5), then it could be reasonably fast relative to ParEGO (see p. 29 of https://arxiv.org/pdf/2006.05078). I would recommend making sure that we kick into a more aggressive approximation at higher dimensionalities, and for anything 6 or higher default to ParEGO and throw a warning.
@schmoelder, can you tell us a little more about your use case? MOO tends to be less useful and less sample efficient when you have this many objectives, since the area of the frontier increases exponentially with the number of objectives, and ultimately people are interested in just a few "good" tradeoffs. Many people have legitimate reasons for wanting to optimize this many objectives, and we've developed methods for using preference-based feedback to do the search more efficiently than multi-objective Bayesian optimization (paper: https://arxiv.org/pdf/2203.11382, code: https://botorch.org/tutorials/bope).
We include Ax as one of multiple optimizers in a process optimization tool (CADET-Process). Our users can set up their process models / optimization problems and combine them with any of the provided optimizers, since these are internally translated to the individual APIs of the different libraries. One of our users reported the issue at hand, and @ronald-jaepel was just relaying it with a stripped-down MRE. We also wouldn't recommend using EHVI with this many objectives, but I don't know much about their specific use case. And as maintainer of CADET-Process, I mostly care about the stability of the tool, not so much about the "why". To avoid the crash, I could add an internal check to catch this before any optimization is started, but having a fix upstream would be preferable. Please let us know if we can help with anything.
Ideally we should have Ax select EHVI for 2 objectives, EHVI with approximate HV for 3, maybe 4 (would defer to Sam on this), and then go to ParEGO for more. If it's possible to throw a warning that doing >= 5-objective optimization is silly, and recommend constraints or BOPE, that could also be nice.
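(A hypothetical sketch of that dispatch rule, for illustration only; this is not Ax's actual generation-strategy logic:)

```python
import warnings

# Choose the MOO method from the number of objectives, per the
# proposal above.
def choose_moo_method(num_objectives: int) -> str:
    if num_objectives <= 2:
        return "EHVI"
    if num_objectives <= 4:
        return "EHVI with approximate hypervolume"
    warnings.warn(
        "Optimizing >= 5 objectives directly is rarely useful; consider "
        "converting some objectives to constraints or using BOPE."
    )
    return "ParEGO"
```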
Hi Eytan, I understand your reasoning, but the bug in Ax is independent of the question of whether it is actually useful to have that many objectives, right? There seems to be some underlying structural issue in the code; simply "catching" the user configuration to avoid the crash does not really fix it. But I'm not a maintainer of Ax, and I understand that resources are always limited and this is not a super pressing issue. This is just my personal opinion, and I will respect any decision on the matter.
The bug here is that we use this heuristic in Ax to determine the approximation level that should be used in the approximate box decomposition when there are many objectives: https://github.com/pytorch/botorch/blob/5012fe8a39b434e1b0f3d3a968eb17b3dd0c9e27/botorch/acquisition/multi_objective/utils.py#L63 If there are 8 or more objectives, this breaks. I will update this to raise a warning and clamp the maximum alpha value to be < 1.
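(A sketch of the failure mode, assuming the heuristic returns alpha = 10 ** (-8 + num_objectives) above 4 objectives; this is a paraphrase, not the exact BoTorch source at the linked commit:)

```python
import warnings

# Alpha grows by 10x per objective, hitting 1.0 at m = 8. An alpha of 1
# tells the approximate box decomposition to discard every cell, so the
# hypervolume collapses to zero and downstream computations break.
def default_partitioning_alpha(num_objectives: int) -> float:
    if num_objectives <= 4:
        return 0.0  # exact decomposition is still tractable
    alpha = 10 ** (-8 + num_objectives)  # 1e-3 at m=5, ..., 1.0 at m=8
    if alpha >= 1.0:
        warnings.warn(
            f"Clamping approximation level alpha={alpha} to below 1; "
            "consider ParEGO-style methods for this many objectives."
        )
        alpha = 0.1  # placeholder clamp; the actual fix may choose differently
    return alpha
```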
Hello Ax Team,
When running EHVI or NEHVI with more than 7 objectives, we get an error during the evaluation of the objective function.
Here's an MRE:
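(The original MRE code block did not survive extraction; the following is a hypothetical sketch of such a reproducer, assuming Ax's Service API with 8 objectives so that the default generation strategy eventually selects a hypervolume-based MOO model:)

```python
import numpy as np
from ax.service.ax_client import AxClient, ObjectiveProperties

ax_client = AxClient()
ax_client.create_experiment(
    name="many_objectives_mre",
    parameters=[
        {"name": f"x{i}", "type": "range", "bounds": [0.0, 1.0]}
        for i in range(4)
    ],
    # 8 objectives: one more than the m > 7 threshold at which the
    # box-decomposition heuristic breaks down.
    objectives={
        f"f{i}": ObjectiveProperties(minimize=True) for i in range(8)
    },
)

for _ in range(20):
    params, trial_index = ax_client.get_next_trial()
    x = np.array(list(params.values()))
    # Arbitrary smooth test objectives; the crash occurs once the
    # generation strategy switches from Sobol to the MOO model.
    ax_client.complete_trial(
        trial_index=trial_index,
        raw_data={f"f{i}": float((x - i / 8.0).sum() ** 2) for i in range(8)},
    )
```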
and here's the full traceback: