# Update ray-tune.md (#943)
## Description

Update ray-tune

## Ticket

Does this PR fix an existing issue? If yes, provide a link to the ticket
here: https://wandb.atlassian.net/browse/GROWTH2-358

## Checklist

Check if your PR fulfills the following requirements. Put an `X` in the
boxes that apply.

- [X] Files I edited were previewed on my local development server with
`yarn start`. My changes did not break the local preview.
- [X] Build (`yarn docusaurus build`) was run locally and succeeded without errors or warnings.
- [X] I merged the latest changes from `main` into my feature branch
before submitting this PR.

---------

Co-authored-by: Matt Linville <[email protected]>
ash0ts and mdlinville authored Jan 9, 2025
1 parent b629680 commit 569e0a4
Showing 1 changed file with 35 additions and 54 deletions.

`docs/guides/integrations/other/ray-tune.md`
title: Ray Tune

W&B integrates with [Ray](https://github.com/ray-project/ray) by offering two lightweight integrations.

- The `WandbLoggerCallback` automatically logs metrics reported to Tune to the Wandb API.
- The `setup_wandb()` function, which can be used with the function API, automatically initializes the Wandb API with Tune's training information. You can use the Wandb API as usual, such as by calling `wandb.log()` to log your training process.

## WandbLoggerCallback

The content of the wandb config entry is passed to `wandb.init()` as keyword arguments.
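For instance, in recent Ray versions keyword arguments that the callback does not consume itself are forwarded to `wandb.init()`. A minimal sketch; the `group` and `entity` values are hypothetical placeholders:

```python
from ray.air.integrations.wandb import WandbLoggerCallback

# Keyword arguments the callback does not use itself are passed
# through to wandb.init() when each trial starts.
callback = WandbLoggerCallback(
    project="<your-project>",  # consumed by the callback itself
    group="tune-sweep",        # run grouping, passed on to wandb.init()
    entity="<your-team>",      # forwarded to wandb.init()
)
```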

### Parameters

`project (str)`: Name of the Wandb project. Mandatory.

`api_key_file (str)`: Path to a file containing the Wandb API key.

`api_key (str)`: Wandb API key. Alternative to setting `api_key_file`.

`excludes (list)`: List of metrics to exclude from the log.

`log_config (bool)`: Whether to log the `config` parameter of the results dictionary. Defaults to False.

`upload_checkpoints (bool)`: If True, model checkpoints are uploaded as artifacts. Defaults to False.
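As an illustration, a callback that skips some auto-reported Tune metrics and uploads checkpoints could be configured as follows; this is a sketch, and the excluded metric names are illustrative:

```python
from ray.air.integrations.wandb import WandbLoggerCallback

# Sketch: drop two bookkeeping metrics from the W&B logs and
# upload trial checkpoints to W&B as artifacts.
callback = WandbLoggerCallback(
    project="<your-project>",
    excludes=["time_this_iter_s", "timesteps_total"],  # not sent to W&B
    upload_checkpoints=True,  # checkpoints stored as W&B artifacts
)
```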

### Example

```python
from ray import tune, train
from ray.air.integrations.wandb import WandbLoggerCallback


def train_fc(config):
    for i in range(10):
        train.report({"mean_accuracy": (i + config["alpha"]) / 10})


tuner = tune.Tuner(
    train_fc,
    param_space={
        "alpha": tune.grid_search([0.1, 0.2, 0.3]),
        "beta": tune.uniform(0.5, 1.0),
    },
    run_config=train.RunConfig(
        callbacks=[
            WandbLoggerCallback(
                project="<your-project>", api_key="<your-api-key>", log_config=True
            )
        ]
    ),
)

results = tuner.fit()
```
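The returned `ResultGrid` can then be queried for the best trial; a short sketch, assuming the `mean_accuracy` metric reported above:

```python
# Select the trial whose last reported mean_accuracy was highest.
best_result = results.get_best_result(metric="mean_accuracy", mode="max")
print(best_result.config)   # hyperparameters of the best trial
print(best_result.metrics)  # its final reported metrics
```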

## setup_wandb

```python
from ray.air.integrations.wandb import setup_wandb
```

This utility function helps initialize Wandb for use with Ray Tune. For basic usage, call `setup_wandb()` in your training function:

```python
from ray.air.integrations.wandb import setup_wandb


def train_fn(config):
    # Initialize wandb
    wandb = setup_wandb(config)

    # Use the Wandb API as usual; the metric below is a placeholder.
    wandb.log({"loss": 0.5})
```


Wandb's `group`, `run_id`, and `run_name` are selected automatically by Tune, but you can override them by setting the corresponding configuration values.

Please see the [`init()` reference](/ref/python/init/) for all other valid configuration settings.
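For example, explicit values in the `wandb` entry of the search space take precedence over Tune's automatic choices. A sketch with placeholder values, assuming the `group` and `name` keys map to the corresponding `wandb.init()` settings:

```python
from ray import tune

param_space = {
    "a": tune.choice([1, 2, 3]),
    # Tune picks group, run_id, and run_name automatically;
    # values set here override them.
    "wandb": {
        "project": "Optimization_Project",
        "group": "my-sweep",          # overrides the auto-selected group
        "name": "trial-custom-name",  # overrides the auto-selected run name
    },
}
```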

### Example

```python
from ray import tune
from ray.air.integrations.wandb import setup_wandb


def train_fn(config):
    # Initialize wandb
    wandb = setup_wandb(config)

    for i in range(10):
        loss = config["a"] + config["b"]
        wandb.log({"loss": loss})
        tune.report(loss=loss)


tuner = tune.Tuner(
    train_fn,
    param_space={
        # define search space here
        "a": tune.choice([1, 2, 3]),
        "b": tune.choice([4, 5, 6]),
        # wandb configuration
        "wandb": {"project": "Optimization_Project", "api_key_file": "/path/to/file"},
    },
)
results = tuner.fit()
```

## Example Code
