LightGBM integration doesn't work properly if you are trying to maximize a custom metric #141

Open
volker48 opened this issue Jul 19, 2024 · 0 comments
Labels
bug Something isn't working

Expected behavior

If the eval metric has `is_higher_better` set to `True`, then the objective should be maximized.
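
For context, a custom eval metric tells LightGBM this through the third element of the tuple it returns; a minimal sketch (`my_metric` is a placeholder scoring function, not part of this report):

```python
def my_custom_eval(preds, eval_data):
    # LightGBM custom metric contract: return (name, value, is_higher_better)
    value = my_metric(preds, eval_data.get_label())
    return "my_metric", value, True  # True => this metric should be maximized
```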

Environment

  • Optuna version:
  • Optuna Integration version:
  • Python version:
  • OS:
  • (Optional) Other libraries and their versions:

Error messages, stack traces, or logs

```python
def higher_is_better(self) -> bool:
    metric_name = self.lgbm_params.get("metric", "binary_logloss")
    return metric_name in ("auc", "auc_mu", "ndcg", "map", "average_precision")
```

This check is incorrect if someone is using a custom evaluation metric: the custom metric's name is not in the hard-coded list, so `higher_is_better` returns `False` and the tuner minimizes the objective even though the metric reports `is_higher_better=True`.

I only found out about this bug after wasting several hours tuning, when the best parameters that were returned were obviously nonsense.

I then tried explicitly creating a study with `direction="maximize"` and at least hit a warning message.
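
A minimal sketch of the kind of fix I would expect, assuming the tuner keeps a reference to a user-supplied study (the `self._study` attribute is hypothetical and only illustrates the idea; it is not the current implementation):

```python
import optuna

def higher_is_better(self) -> bool:
    # Hypothetical: if the caller passed a study, trust its direction instead
    # of guessing from a hard-coded list of built-in metric names.
    if getattr(self, "_study", None) is not None:
        return self._study.direction == optuna.study.StudyDirection.MAXIMIZE
    metric_name = self.lgbm_params.get("metric", "binary_logloss")
    return metric_name in ("auc", "auc_mu", "ndcg", "map", "average_precision")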



Steps to reproduce

1. Set up a study with direction set to maximize
2. Create a custom `feval` callback function
3. In the params, explicitly set the metric to something other than `("auc", "auc_mu", "ndcg", "map", "average_precision")`
4. In the call to `optuna_integration.lightgbm.train`, pass the function created in step 2 as `feval`
```python
import lightgbm as lgb
import optuna
import optuna_integration.lightgbm as opt_lgb

def score_cb(preds, eval_data):
    # Custom metric: the third element (True) marks it as higher-is-better
    score = calculate_score(preds, eval_data.label)
    return 'score', score, True

lgb_study = optuna.create_study(direction="maximize", study_name="LightGBM Auto Tune")

params = {
    "objective": "regression",
    "metric": "correlation",
    "boosting_type": "gbdt",
}

model = opt_lgb.train(
    params,
    dtrain,
    study=lgb_study,
    num_boost_round=2000,
    valid_sets=[dtrain, dval],
    valid_names=["training", "validation"],
    feval=score_cb,
    callbacks=[opt_lgb.early_stopping(stopping_rounds=30), lgb.log_evaluation(1)],
)
```

Additional context (optional)

No response
