From 4f9a3de4d19147126be15e5cc6fda95c9fd91587 Mon Sep 17 00:00:00 2001
From: johndmulhausen
Date: Tue, 14 Jan 2025 13:14:47 -0500
Subject: [PATCH 1/5] Fix malformed links

---
 content/guides/_index.md | 34 +--
 content/guides/core/_index.md | 10 +-
 content/guides/core/artifacts/_index.md | 22 +-
 .../core/artifacts/artifacts-walkthrough.md | 12 +-
 .../core/artifacts/construct-an-artifact.md | 16 +-
 .../create-a-new-artifact-version.md | 2 +-
 .../artifacts/download-and-use-an-artifact.md | 8 +-
 .../explore-and-traverse-an-artifact-graph.md | 6 +-
 .../artifacts/manage-data/delete-artifacts.md | 6 +-
 .../core/artifacts/manage-data/storage.md | 4 +-
 .../guides/core/artifacts/manage-data/ttl.md | 14 +-
 .../core/artifacts/update-an-artifact.md | 8 +-
 content/guides/core/reports/_index.md | 6 +-
 .../core/reports/cross-project-reports.md | 2 +-
 content/guides/core/reports/edit-a-report.md | 2 +-
 .../guides/core/reports/reports-gallery.md | 2 +-
 content/guides/core/tables/_index.md | 10 +-
 content/guides/core/tables/tables-download.md | 4 +-
 .../guides/core/tables/tables-walkthrough.md | 8 +-
 content/guides/hosting/_index.md | 8 +-
 .../hosting/data-security/data-encryption.md | 2 +-
 .../hosting/data-security/ip-allowlisting.md | 4 +-
 .../hosting/data-security/presigned-urls.md | 4 +-
 .../data-security/private-connectivity.md | 4 +-
 .../data-security/secure-storage-connector.md | 48 ++--
 content/guides/hosting/env-vars.md | 2 +-
 .../hosting-options/dedicated_cloud/_index.md | 20 +-
 .../hosting/hosting-options/saas_cloud.md | 4 +-
 .../hosting-options/self-managed/_index.md | 6 +-
 .../self-managed/bare-metal.md | 4 +-
 .../install-on-public-cloud/aws-tf.md | 2 +-
 .../install-on-public-cloud/azure-tf.md | 2 +-
 .../install-on-public-cloud/gcp-tf.md | 2 +-
 .../install-on-public-cloud/ref-arch.md | 4 +-
 .../kubernetes-operator/_index.md | 6 +-
 .../self-managed/server-upgrade-process.md | 2 +-
 content/guides/hosting/iam/_index.md | 8 +-
 .../hosting/iam/access-management/_index.md | 2 +-
 .../access-management/manage-organization.md | 6 +-
 .../guides/hosting/iam/advanced_env_vars.md | 2 +-
 .../iam/authentication/identity_federation.md | 2 +-
 .../iam/authentication/service-accounts.md | 4 +-
 .../guides/hosting/iam/authentication/sso.md | 8 +-
 content/guides/hosting/iam/automate_iam.md | 2 +-
 content/guides/hosting/iam/scim.md | 20 +-
 .../hosting/monitoring-usage/audit-logging.md | 10 +-
 .../hosting/monitoring-usage/org_dashboard.md | 2 +-
 .../monitoring-usage/prometheus-logging.md | 2 +-
 content/guides/hosting/privacy-settings.md | 6 +-
 .../guides/hosting/server-release-process.md | 2 +-
 content/guides/integrations/_index.md | 2 +-
 .../integrations/add-wandb-to-any-library.md | 4 +-
 .../guides/integrations/farama-gymnasium.md | 4 +-
 content/guides/integrations/fastai/_index.md | 2 +-
 content/guides/integrations/fastai/v1.md | 2 +-
 content/guides/integrations/huggingface.md | 14 +-
 content/guides/integrations/hydra.md | 4 +-
 content/guides/integrations/keras.md | 2 +-
 .../integrations/kubeflow-pipelines-kfp.md | 2 +-
 content/guides/integrations/langchain.md | 2 +-
 content/guides/integrations/lightgbm.md | 2 +-
 content/guides/integrations/lightning.md | 4 +-
 content/guides/integrations/metaflow.md | 2 +-
 content/guides/integrations/openai-api.md | 6 +-
 .../guides/integrations/openai-fine-tuning.md | 4 +-
 content/guides/integrations/openai-gym.md | 4 +-
 content/guides/integrations/prodigy.md | 2 +-
 content/guides/integrations/pytorch.md | 12 +-
 content/guides/integrations/scikit.md | 2 +-
 content/guides/integrations/spacy.md | 4 +-
 content/guides/integrations/torchtune.md | 4 +-
 content/guides/integrations/ultralytics.md | 8 +-
 content/guides/integrations/xgboost.md | 2 +-
 content/guides/integrations/yolov5.md | 6 +-
 content/guides/models/_index.md | 8 +-
 .../models/app/features/cascade-settings.md | 6 +-
 .../app/features/custom-charts/_index.md | 2 +-
 .../app/features/custom-charts/walkthrough.md | 2 +-
 .../models/app/features/panels/_index.md | 4 +-
 .../guides/models/app/features/panels/code.md | 2 +-
 .../features/panels/line-plot/reference.md | 2 +-
 .../app/features/panels/line-plot/sampling.md | 2 +-
 .../features/panels/parallel-coordinates.md | 2 +-
 .../guides/models/app/settings-page/_index.md | 4 +-
 .../models/app/settings-page/storage.md | 6 +-
 .../models/app/settings-page/team-settings.md | 4 +-
 .../guides/models/app/settings-page/teams.md | 8 +-
 .../models/app/settings-page/user-settings.md | 4 +-
 .../automations/model-registry-automations.md | 6 +-
 .../automations/project-scoped-automations.md | 6 +-
 content/guides/models/registry/_index.md | 24 +-
 .../models/registry/download_use_artifact.md | 2 +-
 .../guides/models/registry/link_version.md | 8 +-
 .../models/registry/model_registry/_index.md | 18 +-
 .../registry/model_registry/consume-models.md | 4 +-
 .../model_registry/link-model-version.md | 2 +-
 .../model_registry/log-model-to-experiment.md | 2 +-
 .../model-management-concepts.md | 2 +-
 .../registry/model_registry/walkthrough.md | 8 +-
 .../models/registry/model_registry_eol.md | 8 +-
 .../guides/models/registry/registry_types.md | 2 +-
 content/guides/models/sweeps/_index.md | 18 +-
 .../models/sweeps/add-w-and-b-to-your-code.md | 24 +-
 .../define-sweep-configuration/_index.md | 8 +-
 .../sweep-config-keys.md | 10 +-
 .../guides/models/sweeps/existing-project.md | 2 +-
 .../guides/models/sweeps/initialize-sweeps.md | 6 +-
 .../guides/models/sweeps/local-controller.md | 8 +-
 .../models/sweeps/parallelize-agents.md | 8 +-
 .../sweeps/pause-resume-and-cancel-sweeps.md | 4 +-
 .../models/sweeps/start-sweep-agents.md | 10 +-
 .../models/sweeps/troubleshoot-sweeps.md | 4 +-
 .../guides/models/sweeps/useful-resources.md | 2 +-
 .../models/sweeps/visualize-sweep-results.md | 10 +-
 content/guides/models/sweeps/walkthrough.md | 14 +-
 content/guides/models/track/_index.md | 16 +-
 content/guides/models/track/config.md | 4 +-
 .../models/track/environment-variables.md | 6 +-
 content/guides/models/track/jupyter.md | 4 +-
 content/guides/models/track/launch.md | 20 +-
 content/guides/models/track/limits.md | 8 +-
 content/guides/models/track/log/_index.md | 16 +-
 .../models/track/log/distributed-training.md | 6 +-
 content/guides/models/track/log/log-models.md | 20 +-
 .../guides/models/track/log/log-summary.md | 2 +-
 content/guides/models/track/log/log-tables.md | 4 +-
 content/guides/models/track/log/media.md | 12 +-
 content/guides/models/track/log/plots.md | 6 +-
 .../models/track/log/working-with-csv.md | 18 +-
 content/guides/models/track/project-page.md | 10 +-
 .../guides/models/track/public-api-guide.md | 8 +-
 content/guides/models/track/runs/_index.md | 22 +-
 content/guides/models/track/runs/alert.md | 2 +-
 .../guides/models/track/runs/filter-runs.md | 2 +-
 content/guides/models/track/runs/forking.md | 4 +-
 content/guides/models/track/runs/grouping.md | 2 +-
 content/guides/models/track/runs/resuming.md | 4 +-
 content/guides/models/track/runs/tags.md | 2 +-
 content/guides/models/track/workspaces.md | 4 +-
 content/guides/quickstart.md | 16 +-
 .../add-job-to-queue.md | 6 +-
 .../create-launch-job.md | 2 +-
 .../create-and-deploy-jobs/job-inputs.md | 6 +-
 content/launch/integration-guides/dagster.md | 22 +-
 .../launch/integration-guides/minikube_gpu.md | 2 +-
 content/launch/integration-guides/nim.md | 2 +-
 content/launch/integration-guides/volcano.md | 4 +-
 ...lelization_limit_resources_consumed_job.md | 2 +-
 .../launcherror_permission_denied.md | 2 +-
 .../restrict_access_modify_example.md | 2 +-
 content/launch/launch-terminology.md | 12 +-
 content/launch/set-up-launch/_index.md | 12 +-
 .../set-up-launch/setup-agent-advanced.md | 2 +-
 .../set-up-launch/setup-launch-docker.md | 4 +-
 .../set-up-launch/setup-launch-kubernetes.md | 4 +-
 .../set-up-launch/setup-launch-sagemaker.md | 2 +-
 content/launch/set-up-launch/setup-vertex.md | 2 +-
 content/launch/sweeps-on-launch.md | 10 +-
 content/launch/walkthrough.md | 10 +-
 content/ref/_index.md | 10 +-
 content/ref/python/_index.md | 22 +-
 content/ref/python/data-types/_index.md | 26 +-
 .../ref/python/integrations/keras/README.md | 8 +-
 .../ref/python/integrations/keras/_index.md | 8 +-
 content/ref/python/launch-library/README.md | 6 +-
 content/ref/python/launch-library/_index.md | 6 +-
 content/ref/python/public-api/_index.md | 22 +-
 content/ref/python/wandb_workspaces/_index.md | 4 +-
 content/ref/query-panel/_index.md | 48 ++--
 content/ref/query-panel/artifact-type.md | 36 +--
 content/ref/query-panel/artifact-version.md | 120 ++++-----
 content/ref/query-panel/artifact.md | 36 +--
 content/ref/query-panel/entity.md | 24 +-
 content/ref/query-panel/float.md | 230 ++++++++--------
 content/ref/query-panel/int.md | 230 ++++++++--------
 content/ref/query-panel/number.md | 230 ++++++++--------
 content/ref/query-panel/project.md | 88 +++----
 content/ref/query-panel/run.md | 142 +++++-----
 content/ref/query-panel/string.md | 248 +++++++++---------
 content/ref/query-panel/user.md | 12 +-
 content/support/_index.md | 8 +-
 content/support/access_artifacts.md | 2 +-
 .../support/admin_local_instance_manage.md | 2 +-
 content/support/best_log_models_runs_sweep.md | 2 +-
 ...ctices_organize_hyperparameter_searches.md | 2 +-
 content/support/deal_network_issues.md | 2 +-
 content/support/difference_wandbinit_modes.md | 2 +-
 content/support/group_runs_tags.md | 2 +-
 content/support/index_academic.md | 4 +-
 content/support/index_administrator.md | 44 ++--
 content/support/index_alerts.md | 4 +-
 content/support/index_anonymous.md | 4 +-
 content/support/index_artifacts.md | 24 +-
 content/support/index_aws.md | 4 +-
 content/support/index_billing.md | 8 +-
 content/support/index_charts.md | 8 +-
 content/support/index_connectivity.md | 8 +-
 .../index_crashing and hanging runs.md | 8 +-
 .../support/index_environment variables.md | 18 +-
 content/support/index_experiments.md | 64 ++---
 content/support/index_hyperparameter.md | 6 +-
 content/support/index_logs.md | 8 +-
 content/support/index_metrics.md | 18 +-
 content/support/index_notebooks.md | 6 +-
 content/support/index_outage.md | 4 +-
 content/support/index_privacy.md | 4 +-
 content/support/index_projects.md | 6 +-
 content/support/index_python.md | 12 +-
 content/support/index_reports.md | 30 +--
 content/support/index_resuming.md | 2 +-
 content/support/index_runs.md | 26 +-
 content/support/index_security.md | 12 +-
 content/support/index_storage.md | 4 +-
 content/support/index_sweeps.md | 30 +--
 content/support/index_tables.md | 4 +-
 content/support/index_team management.md | 22 +-
 content/support/index_tensorboard.md | 4 +-
 content/support/index_user management.md | 22 +-
content/support/index_workspaces.md | 12 +- content/support/index_wysiwyg.md | 10 +- .../log_additional_metrics_run_completes.md | 4 +- ...multiprocessing_eg_distributed_training.md | 4 +- .../plot_multiple_lines_plot_legend.md | 2 +- ...matically_access_humanreadable_run_name.md | 2 +- content/support/rerun_grid_search.md | 2 +- .../retention_expiration_policy_artifact.md | 2 +- ...ate_crashed_ui_running_machine_get_data.md | 2 +- content/support/service_account_useful.md | 2 +- ...riting_terminal_jupyter_notebook_output.md | 2 +- content/support/sweeps_sagemaker.md | 2 +- content/support/team_find_more_information.md | 2 +- .../integration-tutorials/pytorch.md | 2 +- .../integration-tutorials/tensorflow.md | 2 +- .../tensorflow_sweeps.md | 2 +- content/tutorials/tables.md | 2 +- 235 files changed, 1565 insertions(+), 1565 deletions(-) diff --git a/content/guides/_index.md b/content/guides/_index.md index a5971cf0f..b3ccdb879 100644 --- a/content/guides/_index.md +++ b/content/guides/_index.md @@ -20,34 +20,34 @@ Weights & Biases (W&B) is the AI developer platform, with tools for training mod {{< img src="/images/general/architecture.png" alt="" >}} -W&B consists of three major components: [Models](/guides/models.md), [Weave](https://wandb.github.io/weave/), and [Core](/guides/core.md): +W&B consists of three major components: [Models](/guides/models/), [Weave](https://wandb.github.io/weave/), and [Core](/guides/core/): -**[W&B Models](/guides/models.md)** is a set of lightweight, interoperable tools for machine learning practitioners training and fine-tuning models. -- [Experiments](/guides/track/intro.md): Machine learning experiment tracking -- [Sweeps](/guides/sweeps/intro.md): Hyperparameter tuning and model optimization -- [Registry](/guides/registry/intro.md): Publish and share your ML models and datasets +**[W&B Models](/guides/models/)** is a set of lightweight, interoperable tools for machine learning practitioners training and fine-tuning models. +- [Experiments](/guides/track/intro/): Machine learning experiment tracking +- [Sweeps](/guides/sweeps/intro/): Hyperparameter tuning and model optimization +- [Registry](/guides/registry/intro/): Publish and share your ML models and datasets **[W&B Weave](https://wandb.github.io/weave/)** is a lightweight toolkit for tracking and evaluating LLM applications. -**[W&B Core](/guides/core.md)** is set of powerful building blocks for tracking and visualizing data and models, and communicating results. -- [Artifacts](/guides/artifacts/intro.md): Version assets and track lineage -- [Tables](/guides/tables/intro.md): Visualize and query tabular data -- [Reports](/guides/reports/intro.md): Document and collaborate on your discoveries +**[W&B Core](/guides/core/)** is set of powerful building blocks for tracking and visualizing data and models, and communicating results. +- [Artifacts](/guides/artifacts/intro/): Version assets and track lineage +- [Tables](/guides/tables/intro/): Visualize and query tabular data +- [Reports](/guides/reports/intro/): Document and collaborate on your discoveries ## How does W&B work? Read the following sections in this order if you are a first-time user of W&B and you are interested in training, tracking, and visualizing machine learning models and experiments: -1. Learn about [runs](./runs/intro.md), W&B's basic unit of computation. -2. Create and track machine learning experiments with [Experiments](./track/intro.md). -3. 
Discover W&B's flexible and lightweight building block for dataset and model versioning with [Artifacts](./artifacts/intro.md). -4. Automate hyperparameter search and explore the space of possible models with [Sweeps](./sweeps/intro.md). -5. Manage the model lifecycle from training to production with [Model Registry](./model_registry/intro.md). -6. Visualize predictions across model versions with our [Data Visualization](./tables/intro.md) guide. -7. Organize runs, embed and automate visualizations, describe your findings, and share updates with collaborators with [Reports](./reports/intro.md). +1. Learn about [runs](./runs/intro/), W&B's basic unit of computation. +2. Create and track machine learning experiments with [Experiments](./track/intro/). +3. Discover W&B's flexible and lightweight building block for dataset and model versioning with [Artifacts](./artifacts/intro/). +4. Automate hyperparameter search and explore the space of possible models with [Sweeps](./sweeps/intro/). +5. Manage the model lifecycle from training to production with [Model Registry](./model_registry/intro/). +6. Visualize predictions across model versions with our [Data Visualization](./tables/intro/) guide. +7. Organize runs, embed and automate visualizations, describe your findings, and share updates with collaborators with [Reports](./reports/intro/). ## Are you a first-time user of W&B? -Try the [quickstart](../quickstart.md) to learn how to install W&B and how to add W&B to your code. \ No newline at end of file +Try the [quickstart](../quickstart/) to learn how to install W&B and how to add W&B to your code. \ No newline at end of file diff --git a/content/guides/core/_index.md b/content/guides/core/_index.md index 97b731459..f29b6caeb 100644 --- a/content/guides/core/_index.md +++ b/content/guides/core/_index.md @@ -7,13 +7,13 @@ weight: 5 no_list: true --- -W&B Core is the foundational framework supporting [W&B Models](./models.md) and [W&B Weave](./weave_platform.md), and is itself supported by the [W&B Platform](./hosting/_index.md). +W&B Core is the foundational framework supporting [W&B Models](./models/) and [W&B Weave](./weave_platform/), and is itself supported by the [W&B Platform](./hosting/). {{< img src="/images/general/core.png" alt="" >}} W&B Core provides capabilities across the entire ML lifecycle. With W&B Core, you can: -- [Version and manage ML](./artifacts/_index.md) pipelines with full lineage tracing for easy auditing and reproducibility. -- Explore and evaluate data and metrics using [interactive, configurable visualizations](./tables/_index.md). -- [Document and share](./reports/_index_.md) insights across the entire organization by generating live reports in digestible, visual formats that are easily understood by non-technical stakeholders. -- [Query and create visualizations of your data](../guides/app/features/panels/query-panel/_index.md) that serve your custom needs. +- [Version and manage ML](./artifacts/) pipelines with full lineage tracing for easy auditing and reproducibility. +- Explore and evaluate data and metrics using [interactive, configurable visualizations](./tables/). +- [Document and share](./reports/) insights across the entire organization by generating live reports in digestible, visual formats that are easily understood by non-technical stakeholders. +- [Query and create visualizations of your data](../guides/app/features/panels/query-panel/) that serve your custom needs. 
diff --git a/content/guides/core/artifacts/_index.md b/content/guides/core/artifacts/_index.md index 3ce6e140e..f57931856 100644 --- a/content/guides/core/artifacts/_index.md +++ b/content/guides/core/artifacts/_index.md @@ -15,10 +15,10 @@ weight: 1 {{< cta-button productLink="https://wandb.ai/wandb/arttest/artifacts/model/iv3_trained/5334ab69740f9dda4fed/lineage" colabLink="https://colab.research.google.com/github/wandb/examples/blob/master/colabs/wandb-artifacts/Artifact_fundamentals.ipynb" >}} -Use W&B Artifacts to track and version data as the inputs and outputs of your [W&B Runs](../runs/intro.md). For example, a model training run might take in a dataset as input and produce a trained model as output. You can log hyperparameters, metadatra, and metrics to a run, and you can use an artifact to log, track, and version the dataset used to train the model as input and another artifact for the resulting model checkpoints as output. +Use W&B Artifacts to track and version data as the inputs and outputs of your [W&B Runs](../runs/intro/). For example, a model training run might take in a dataset as input and produce a trained model as output. You can log hyperparameters, metadatra, and metrics to a run, and you can use an artifact to log, track, and version the dataset used to train the model as input and another artifact for the resulting model checkpoints as output. ## Use cases -You can use artifacts throughout your entire ML workflow as inputs and outputs of [runs](../runs/intro.md). You can use datasets, models, or even other artifacts as inputs for processing. +You can use artifacts throughout your entire ML workflow as inputs and outputs of [runs](../runs/intro/). You can use datasets, models, or even other artifacts as inputs for processing. {{< img src="/images/artifacts/artifacts_landing_page2.png" >}} @@ -26,7 +26,7 @@ You can use artifacts throughout your entire ML workflow as inputs and outputs o |------------------------|-----------------------------|------------------------------| | Model Training | Dataset (training and validation data) | Trained Model | | Dataset Pre-Processing | Dataset (raw data) | Dataset (pre-processed data) | -| Model Evaluation | Model + Dataset (test data) | [W&B Table](../tables/intro.md) | +| Model Evaluation | Model + Dataset (test data) | [W&B Table](../tables/intro/) | | Model Optimization | Model | Optimized Model | @@ -37,8 +37,8 @@ The proceeding code snippets are meant to be run in order. ## Create an artifact Create an artifact with four lines of code: -1. Create a [W&B run](../runs/intro.md). -2. Create an artifact object with the [`wandb.Artifact`](../../ref/python/artifact.md) API. +1. Create a [W&B run](../runs/intro/). +2. Create an artifact object with the [`wandb.Artifact`](../../ref/python/artifact/) API. 3. Add one or more files, such as a model file or dataset, to your artifact object. 4. Log your artifact to W&B. @@ -56,7 +56,7 @@ artifact.save() ``` {{% alert %}} -See the [track external files](./track-external-files.md) page for information on how to add references to files or directories stored in external object storage, like an Amazon S3 bucket. +See the [track external files](./track-external-files/) page for information on how to add references to files or directories stored in external object storage, like an Amazon S3 bucket. 
{{% /alert %}} ## Download an artifact @@ -76,12 +76,12 @@ datadir = artifact.download() #downloads the full "my_data" artifact to the defa ``` {{% alert %}} -You can pass a custom path into the `root` [parameter](../../ref/python/artifact.md) to download an artifact to a specific directory. For alternate ways to download artifacts and to see additional parameters, see the guide on [downloading and using artifacts](./download-and-use-an-artifact.md). +You can pass a custom path into the `root` [parameter](../../ref/python/artifact/) to download an artifact to a specific directory. For alternate ways to download artifacts and to see additional parameters, see the guide on [downloading and using artifacts](./download-and-use-an-artifact/). {{% /alert %}} ## Next steps -* Learn how to [version](./create-a-new-artifact-version.md), [update](./update-an-artifact.md), or [delete](./delete-artifacts.md) artifacts. -* Learn how to trigger downstream workflows in response to changes to your artifacts with [artifact automation](./project-scoped-automations.md). -* Learn about the [model registry](../model_registry/intro.md), a space that houses trained models. -* Explore the [Python SDK](../../ref/python/artifact.md) and [CLI](../../ref/cli/wandb-artifact/README.md) reference guides. \ No newline at end of file +* Learn how to [version](./create-a-new-artifact-version/), [update](./update-an-artifact/), or [delete](./delete-artifacts/) artifacts. +* Learn how to trigger downstream workflows in response to changes to your artifacts with [artifact automation](./project-scoped-automations/). +* Learn about the [model registry](../model_registry/intro/), a space that houses trained models. +* Explore the [Python SDK](../../ref/python/artifact/) and [CLI](../../ref/cli/wandb-artifact/README/) reference guides. \ No newline at end of file diff --git a/content/guides/core/artifacts/artifacts-walkthrough.md b/content/guides/core/artifacts/artifacts-walkthrough.md index 79dd261e7..fad453afb 100644 --- a/content/guides/core/artifacts/artifacts-walkthrough.md +++ b/content/guides/core/artifacts/artifacts-walkthrough.md @@ -5,7 +5,7 @@ description: >- displayed_sidebar: default title: "Tutorial: Create, track, and use a dataset artifact" --- -This walkthrough demonstrates how to create, track, and use a dataset artifact from [W&B Runs](../runs/intro.md). +This walkthrough demonstrates how to create, track, and use a dataset artifact from [W&B Runs](../runs/intro/). ## 1. Log into W&B @@ -19,7 +19,7 @@ wandb.login() ## 2. Initialize a run -Use the [`wandb.init()`](../../ref/python/init.md) API to generate a background process to sync and log data as a W&B Run. Provide a project name and a job type: +Use the [`wandb.init()`](../../ref/python/init/) API to generate a background process to sync and log data as a W&B Run. Provide a project name and a job type: ```python # Create a W&B Run. Here we specify 'dataset' as the job type since this example @@ -29,7 +29,7 @@ run = wandb.init(project="artifacts-example", job_type="upload-dataset") ## 3. Create an artifact object -Create an artifact object with the [`wandb.Artifact()`](../../ref/python/artifact.md) API. Provide a name for the artifact and a description of the file type for the `name` and `type` parameters, respectively. +Create an artifact object with the [`wandb.Artifact()`](../../ref/python/artifact/) API. Provide a name for the artifact and a description of the file type for the `name` and `type` parameters, respectively. 
For example, the following code snippet demonstrates how to create an artifact called `‘bicycle-dataset’` with a `‘dataset’` label: @@ -37,7 +37,7 @@ For example, the following code snippet demonstrates how to create an artifact c artifact = wandb.Artifact(name="bicycle-dataset", type="dataset") ``` -For more information about how to construct an artifact, see [Construct artifacts](./construct-an-artifact.md). +For more information about how to construct an artifact, see [Construct artifacts](./construct-an-artifact/). ## Add the dataset to the artifact @@ -60,7 +60,7 @@ Use the W&B run objects `log_artifact()` method to both save your artifact versi run.log_artifact(artifact) ``` -A `'latest'` alias is created by default when you log an artifact. For more information about artifact aliases and versions, see [Create a custom alias](./create-a-custom-alias.md) and [Create new artifact versions](./create-a-new-artifact-version.md), respectively. +A `'latest'` alias is created by default when you log an artifact. For more information about artifact aliases and versions, see [Create a custom alias](./create-a-custom-alias/) and [Create new artifact versions](./create-a-new-artifact-version/), respectively. ## 5. Download and use the artifact @@ -82,4 +82,4 @@ artifact = run.use_artifact("bicycle-dataset:latest") artifact_dir = artifact.download() ``` -Alternatively, you can use the Public API (`wandb.Api`) to export (or update data) data already saved in a W&B outside of a Run. See [Track external files](./track-external-files.md) for more information. +Alternatively, you can use the Public API (`wandb.Api`) to export (or update data) data already saved in a W&B outside of a Run. See [Track external files](./track-external-files/) for more information. diff --git a/content/guides/core/artifacts/construct-an-artifact.md b/content/guides/core/artifacts/construct-an-artifact.md index 21213f29b..2197d63c7 100644 --- a/content/guides/core/artifacts/construct-an-artifact.md +++ b/content/guides/core/artifacts/construct-an-artifact.md @@ -9,17 +9,17 @@ title: Create an artifact weight: 2 --- -Use the W&B Python SDK to construct artifacts from [W&B Runs](../../ref/python/run.md). You can add [files, directories, URIs, and files from parallel runs to artifacts](#add-files-to-an-artifact). After you add a file to an artifact, save the artifact to the W&B Server or [your own private server](../hosting/hosting-options/self-managed.md). +Use the W&B Python SDK to construct artifacts from [W&B Runs](../../ref/python/run/). You can add [files, directories, URIs, and files from parallel runs to artifacts](#add-files-to-an-artifact). After you add a file to an artifact, save the artifact to the W&B Server or [your own private server](../hosting/hosting-options/self-managed/). -For information on how to track external files, such as files stored in Amazon S3, see the [Track external files](./track-external-files.md) page. +For information on how to track external files, such as files stored in Amazon S3, see the [Track external files](./track-external-files/) page. ## How to construct an artifact -Construct a [W&B Artifact](../../ref/python/artifact.md) in three steps: +Construct a [W&B Artifact](../../ref/python/artifact/) in three steps: ### 1. Create an artifact Python object with `wandb.Artifact()` -Initialize the [`wandb.Artifact()`](../../ref/python/artifact.md) class to create an artifact object. 
Specify the following parameters: +Initialize the [`wandb.Artifact()`](../../ref/python/artifact/) class to create an artifact object. Specify the following parameters: * **Name**: Specify a name for your artifact. The name should be unique, descriptive, and easy to remember. Use an artifacts name to both: identify the artifact in the W&B App UI and when you want to use that artifact. * **Type**: Provide a type. The type should be simple, descriptive and correspond to a single step of your machine learning pipeline. Common artifact types include `'dataset'` or `'model'`. @@ -28,7 +28,7 @@ Initialize the [`wandb.Artifact()`](../../ref/python/artifact.md) class to creat {{% alert %}} The "name" and "type" you provide is used to create a directed acyclic graph. This means you can view the lineage of an artifact on the W&B App. -See the [Explore and traverse artifact graphs](./explore-and-traverse-an-artifact-graph.md) for more information. +See the [Explore and traverse artifact graphs](./explore-and-traverse-an-artifact-graph/) for more information. {{% /alert %}} @@ -36,7 +36,7 @@ See the [Explore and traverse artifact graphs](./explore-and-traverse-an-artifac Artifacts can not have the same name, even if you specify a different type for the types parameter. In other words, you can not create an artifact named `cats` of type `dataset` and another artifact with the same name of type `model`. {{% /alert %}} -You can optionally provide a description and metadata when you initialize an artifact object. For more information on available attributes and parameters, see [`wandb.Artifact`](../../ref/python/artifact.md) Class definition in the Python SDK Reference Guide. +You can optionally provide a description and metadata when you initialize an artifact object. For more information on available attributes and parameters, see [`wandb.Artifact`](../../ref/python/artifact/) Class definition in the Python SDK Reference Guide. The proceeding example demonstrates how to create a dataset artifact: @@ -56,7 +56,7 @@ Add files, directories, external URI references (such as Amazon S3) and more wit artifact.add_file(local_path="hello_world.txt", name="optional-name") ``` -You can also add multiple files with the [`add_dir`](../../ref/python/artifact.md#add_dir) method. For more information on how to add files, see [Update an artifact](./update-an-artifact.md). +You can also add multiple files with the [`add_dir`](../../ref/python/artifact.md#add_dir) method. For more information on how to add files, see [Update an artifact](./update-an-artifact/). ### 3. Save your artifact to the W&B server @@ -69,7 +69,7 @@ run = wandb.init(project="artifacts-example", job_type="job-type") run.log_artifact(artifact) ``` -You can optionally construct an artifact outside of a W&B run. For more information, see [Track external files](./track-external-files.md). +You can optionally construct an artifact outside of a W&B run. For more information, see [Track external files](./track-external-files/). {{% alert color="secondary" %}} Calls to `log_artifact` are performed asynchronously for performant uploads. This can cause surprising behavior when logging artifacts in a loop. 
For example: diff --git a/content/guides/core/artifacts/create-a-new-artifact-version.md b/content/guides/core/artifacts/create-a-new-artifact-version.md index 241577805..6e1799938 100644 --- a/content/guides/core/artifacts/create-a-new-artifact-version.md +++ b/content/guides/core/artifacts/create-a-new-artifact-version.md @@ -9,7 +9,7 @@ title: Create an artifact version weight: 6 --- -Create a new artifact version with a single [run](../runs/intro.md) or collaboratively with distributed runs. You can optionally create a new artifact version from a previous version known as an [incremental artifact](#create-a-new-artifact-version-from-an-existing-version). +Create a new artifact version with a single [run](../runs/intro/) or collaboratively with distributed runs. You can optionally create a new artifact version from a previous version known as an [incremental artifact](#create-a-new-artifact-version-from-an-existing-version). {{% alert %}} We recommend that you create an incremental artifact when you need to apply changes to a subset of files in an artifact, where the size of the original artifact is significantly larger. diff --git a/content/guides/core/artifacts/download-and-use-an-artifact.md b/content/guides/core/artifacts/download-and-use-an-artifact.md index 435ea6089..1fb097fec 100644 --- a/content/guides/core/artifacts/download-and-use-an-artifact.md +++ b/content/guides/core/artifacts/download-and-use-an-artifact.md @@ -17,11 +17,11 @@ Team members with view-only seats cannot download artifacts. ### Download and use an artifact stored on W&B -Download and use an artifact stored in W&B either inside or outside of a W&B Run. Use the Public API ([`wandb.Api`](../../ref/python/public-api/api.md)) to export (or update data) already saved in W&B. For more information, see the W&B [Public API Reference guide](../../ref/python/public-api/README.md). +Download and use an artifact stored in W&B either inside or outside of a W&B Run. Use the Public API ([`wandb.Api`](../../ref/python/public-api/api/)) to export (or update data) already saved in W&B. For more information, see the W&B [Public API Reference guide](../../ref/python/public-api/README/). {{< tabpane text=true >}} {{% tab header="During a run" %}} -First, import the W&B Python SDK. Next, create a W&B [Run](../../ref/python/run.md): +First, import the W&B Python SDK. Next, create a W&B [Run](../../ref/python/run/): ```python import wandb @@ -54,7 +54,7 @@ This fetches only the file at the path `name`. It returns an `Entry` object with * `Entry.download`: Downloads file from the artifact at path `name` * `Entry.ref`: If `add_reference` stored the entry as a reference, returns the URI -References that have schemes that W&B knows how to handle get downloaded just like artifact files. For more information, see [Track external files](../../guides/artifacts/track-external-files.md). +References that have schemes that W&B knows how to handle get downloaded just like artifact files. For more information, see [Track external files](../../guides/artifacts/track-external-files/). {{% /tab %}} {{% tab header="Outside of a run" %}} First, import the W&B SDK. Next, create an artifact from the Public API Class. Provide the entity, project, artifact, and alias associated with that artifact: @@ -130,4 +130,4 @@ artifact.add_file("model.h5") run.use_artifact(artifact) ``` -For more information about constructing an artifact, see [Construct an artifact](../../guides/artifacts/construct-an-artifact.md). 
\ No newline at end of file +For more information about constructing an artifact, see [Construct an artifact](../../guides/artifacts/construct-an-artifact/). \ No newline at end of file diff --git a/content/guides/core/artifacts/explore-and-traverse-an-artifact-graph.md b/content/guides/core/artifacts/explore-and-traverse-an-artifact-graph.md index 65c6f9a37..98cf7b15e 100644 --- a/content/guides/core/artifacts/explore-and-traverse-an-artifact-graph.md +++ b/content/guides/core/artifacts/explore-and-traverse-an-artifact-graph.md @@ -57,7 +57,7 @@ Clicking on a node opens a preview with an overview of the node. Clicking on the {{< img src="/images/artifacts/lineage3b.gif" alt="Searching a run cluster" >}} ## Use the API to track lineage -You can also navigate a graph using the [W&B API](../../ref/python/public-api/api.md). +You can also navigate a graph using the [W&B API](../../ref/python/public-api/api/). Create an artifact. First, create a run with `wandb.init`. Then,create a new artifact or retrieve an existing one with `wandb.Artifact`. Next, add files to the artifact with `.add_file`. Finally, log the artifact to the run with `.log_artifact`. The finished code looks something like this: @@ -79,6 +79,6 @@ producer_run = artifact.logged_by() consumer_runs = artifact.used_by() ``` ## Next steps -- [Explore artifacts in more detail](../artifacts/artifacts-walkthrough.md) -- [Manage artifact storage](../artifacts/delete-artifacts.md) +- [Explore artifacts in more detail](../artifacts/artifacts-walkthrough/) +- [Manage artifact storage](../artifacts/delete-artifacts/) - [Explore an artifacts project](https://wandb.ai/wandb-smle/artifact_workflow/artifacts/raw_dataset/raw_data/v0/lineage) \ No newline at end of file diff --git a/content/guides/core/artifacts/manage-data/delete-artifacts.md b/content/guides/core/artifacts/manage-data/delete-artifacts.md index 7bb1071f4..8ee95d998 100644 --- a/content/guides/core/artifacts/manage-data/delete-artifacts.md +++ b/content/guides/core/artifacts/manage-data/delete-artifacts.md @@ -12,7 +12,7 @@ Delete artifacts interactively with the App UI or programmatically with the W&B The contents of the artifact remain as a soft-delete, or pending deletion state, until a regularly run garbage collection process reviews all artifacts marked for deletion. The garbage collection process deletes associated files from storage if the artifact and its associated files are not used by a previous or subsequent artifact versions. -The sections in this page describe how to delete specific artifact versions, how to delete an artifact collection, how to delete artifacts with and without aliases, and more. You can schedule when artifacts are deleted from W&B with TTL policies. For more information, see [Manage data retention with Artifact TTL policy](./ttl.md). +The sections in this page describe how to delete specific artifact versions, how to delete an artifact collection, how to delete artifacts with and without aliases, and more. You can schedule when artifacts are deleted from W&B with TTL policies. For more information, see [Manage data retention with Artifact TTL policy](./ttl/). {{% alert %}} Artifacts that are scheduled for deletion with a TTL policy, deleted with the W&B SDK, or deleted with the W&B App UI are first soft-deleted. Artifacts that are soft deleted undergo garbage collection before they are hard-deleted. 
@@ -129,9 +129,9 @@ The `X` indicates you must satisfy the requirement: | | Environment variable | Enable versioning | | -----------------------------------------------| ------------------------| ----------------- | | Shared cloud | | | -| Shared cloud with [secure storage connector](../hosting/data-security/secure-storage-connector.md)| | X | +| Shared cloud with [secure storage connector](../hosting/data-security/secure-storage-connector/)| | X | | Dedicated cloud | | | -| Dedicated cloud with [secure storage connector](../hosting/data-security/secure-storage-connector.md)| | X | +| Dedicated cloud with [secure storage connector](../hosting/data-security/secure-storage-connector/)| | X | | Customer-managed cloud | X | X | | Customer managed on-prem | X | X | diff --git a/content/guides/core/artifacts/manage-data/storage.md b/content/guides/core/artifacts/manage-data/storage.md index 107c27b01..5e139a120 100644 --- a/content/guides/core/artifacts/manage-data/storage.md +++ b/content/guides/core/artifacts/manage-data/storage.md @@ -9,7 +9,7 @@ title: Manage artifact storage and memory allocation W&B stores artifact files in a private Google Cloud Storage bucket located in the United States by default. All files are encrypted at rest and in transit. -For sensitive files, we recommend you set up [Private Hosting](../hosting/intro.md) or use [reference artifacts](./track-external-files.md). +For sensitive files, we recommend you set up [Private Hosting](../hosting/intro/) or use [reference artifacts](./track-external-files/). During training, W&B locally saves logs, artifacts, and configuration files in the following local directories: @@ -26,7 +26,7 @@ Depending on the machine on `wandb` is initialized on, these default folders may ### Clean up local artifact cache -W&B caches artifact files to speed up downloads across versions that share files in common. Over time this cache directory can become large. Run the [`wandb artifact cache cleanup`](../../ref/cli/wandb-artifact/wandb-artifact-cache/README.md) command to prune the cache and to remove any files that have not been used recently. +W&B caches artifact files to speed up downloads across versions that share files in common. Over time this cache directory can become large. Run the [`wandb artifact cache cleanup`](../../ref/cli/wandb-artifact/wandb-artifact-cache/README/) command to prune the cache and to remove any files that have not been used recently. The proceeding code snippet demonstrates how to limit the size of the cache to 1GB. Copy and paste the code snippet into your terminal: diff --git a/content/guides/core/artifacts/manage-data/ttl.md b/content/guides/core/artifacts/manage-data/ttl.md index d06f4f568..cbaec82e0 100644 --- a/content/guides/core/artifacts/manage-data/ttl.md +++ b/content/guides/core/artifacts/manage-data/ttl.md @@ -10,7 +10,7 @@ title: Manage artifact data retention {{< cta-button colabLink="https://colab.research.google.com/github/wandb/examples/blob/kas-artifacts-ttl-colab/colabs/wandb-artifacts/WandB_Artifacts_Time_to_live_TTL_Walkthrough.ipynb" >}} -Schedule when artifacts are deleted from W&B with W&B Artifact time-to-live (TTL) policy. When you delete an artifact, W&B marks that artifact as a *soft-delete*. In other words, the artifact is marked for deletion but files are not immediately deleted from storage. For more information on how W&B deletes artifacts, see the [Delete artifacts](./delete-artifacts.md) page. +Schedule when artifacts are deleted from W&B with W&B Artifact time-to-live (TTL) policy. 
When you delete an artifact, W&B marks that artifact as a *soft-delete*. In other words, the artifact is marked for deletion but files are not immediately deleted from storage. For more information on how W&B deletes artifacts, see the [Delete artifacts](./delete-artifacts/) page. Check out [this](https://www.youtube.com/watch?v=hQ9J6BoVmnc) video tutorial to learn how to manage data retention with Artifacts TTL in the W&B App. @@ -18,7 +18,7 @@ Check out [this](https://www.youtube.com/watch?v=hQ9J6BoVmnc) video tutorial to W&B deactivates the option to set a TTL policy for model artifacts linked to the Model Registry. This is to help ensure that linked models do not accidentally expire if used in production workflows. {{% /alert %}} {{% alert %}} -* Only team admins can view a [team's settings](../app/settings-page/team-settings.md) and access team level TTL settings such as (1) permitting who can set or edit a TTL policy or (2) setting a team default TTL. +* Only team admins can view a [team's settings](../app/settings-page/team-settings/) and access team level TTL settings such as (1) permitting who can set or edit a TTL policy or (2) setting a team default TTL. * If you do not see the option to set or edit a TTL policy in an artifact's details in the W&B App UI or if setting a TTL programmatically does not successfully change an artifact's TTL property, your team admin has not given you permissions to do so. {{% /alert %}} @@ -31,7 +31,7 @@ The following Artifact types indicate an auto-generated Artifact: - `job` - Any Artifact type starting with: `wandb-*` -You can check an Artifact's type on the [W&B platform](../artifacts/explore-and-traverse-an-artifact-graph.md) or programmatically: +You can check an Artifact's type on the [W&B platform](../artifacts/explore-and-traverse-an-artifact-graph/) or programmatically: ```python import wandb @@ -68,12 +68,12 @@ For all the code snippets below, replace the content wrapped in `<>` with your i Use the W&B Python SDK to define a TTL policy when you create an artifact. TTL policies are typically defined in days. {{% alert %}} -Defining a TTL policy when you create an artifact is similar to how you normally [create an artifact](./construct-an-artifact.md). With the exception that you pass in a time delta to the artifact's `ttl` attribute. +Defining a TTL policy when you create an artifact is similar to how you normally [create an artifact](./construct-an-artifact/). With the exception that you pass in a time delta to the artifact's `ttl` attribute. {{% /alert %}} The steps are as follows: -1. [Create an artifact](./construct-an-artifact.md). +1. [Create an artifact](./construct-an-artifact/). 2. [Add content to the artifact](./construct-an-artifact.md#add-files-to-an-artifact) such as files, a directory, or a reference. 3. Define a TTL time limit with the [`datetime.timedelta`](https://docs.python.org/3/library/datetime.html) data type that is part of Python's standard library. 4. [Log the artifact](./construct-an-artifact.md#3-save-your-artifact-to-the-wb-server). @@ -103,7 +103,7 @@ When you modify an artifact's TTL, the time the artifact takes to expire is stil {{< tabpane text=true >}} {{% tab header="Python SDK" %}} -1. [Fetch your artifact](./download-and-use-an-artifact.md). +1. [Fetch your artifact](./download-and-use-an-artifact/). 2. Pass in a time delta to the artifact's `ttl` attribute. 3. Update the artifact with the [`save`](../../ref/python/run.md#save) method. 
@@ -180,7 +180,7 @@ Artifacts with TTL turned off will not inherit an artifact collection's TTL. Ref {{< tabpane text=true >}} {{% tab header="Python SDK" %}} -1. [Fetch your artifact](./download-and-use-an-artifact.md). +1. [Fetch your artifact](./download-and-use-an-artifact/). 2. Set the artifact's `ttl` attribute to `None`. 3. Update the artifact with the [`save`](../../ref/python/run.md#save) method. diff --git a/content/guides/core/artifacts/update-an-artifact.md b/content/guides/core/artifacts/update-an-artifact.md index fb5e6240a..792a4b486 100644 --- a/content/guides/core/artifacts/update-an-artifact.md +++ b/content/guides/core/artifacts/update-an-artifact.md @@ -10,7 +10,7 @@ weight: 4 Pass desired values to update the `description`, `metadata`, and `alias` of an artifact. Call the `save()` method to update the artifact on the W&B servers. You can update an artifact during a W&B Run or outside of a Run. -Use the W&B Public API ([`wandb.Api`](../../ref/python/public-api/api.md)) to update an artifact outside of a run. Use the Artifact API ([`wandb.Artifact`](../../ref/python/artifact.md)) to update an artifact during a run. +Use the W&B Public API ([`wandb.Api`](../../ref/python/public-api/api/)) to update an artifact outside of a run. Use the Artifact API ([`wandb.Artifact`](../../ref/python/artifact/)) to update an artifact during a run. {{% alert color="secondary" %}} You can not update the alias of artifact linked to a model in Model Registry. @@ -19,7 +19,7 @@ You can not update the alias of artifact linked to a model in Model Registry. {{< tabpane text=true >}} {{% tab header="During a run" %}} -The proceeding code example demonstrates how to update the description of an artifact using the [`wandb.Artifact`](../../ref/python/artifact.md) API: +The proceeding code example demonstrates how to update the description of an artifact using the [`wandb.Artifact`](../../ref/python/artifact/) API: ```python import wandb @@ -62,7 +62,7 @@ artifact.aliases = ["replaced"] artifact.save() ``` -For more information, see the Weights and Biases [Artifact API](../../ref/python/artifact.md). +For more information, see the Weights and Biases [Artifact API](../../ref/python/artifact/). {{% /tab %}} {{% tab header="With collections" %}} You can also update an Artifact collection in the same way as a singular artifact: @@ -76,7 +76,7 @@ artifact.name = "" artifact.description = "" artifact.save() ``` -For more information, see the [Artifacts Collection](../../ref/python/public-api/api.md) reference. +For more information, see the [Artifacts Collection](../../ref/python/public-api/api/) reference. {{% /tab %}} {{% /tabpane %}} diff --git a/content/guides/core/reports/_index.md b/content/guides/core/reports/_index.md index b1fe9d798..e90ed2746 100644 --- a/content/guides/core/reports/_index.md +++ b/content/guides/core/reports/_index.md @@ -42,12 +42,12 @@ Create a collaborative report with a few clicks. 6. Click **Publish to project**. 7. Click the **Share** button to share your report with collaborators. -See the [Create a report](./create-a-report.md) page for more information on how to create reports interactively an programmatically with the W&B Python SDK. +See the [Create a report](./create-a-report/) page for more information on how to create reports interactively an programmatically with the W&B Python SDK. 
## How to get started Depending on your use case, explore the following resources to get started with W&B Reports: * Check out our [video demonstration](https://www.youtube.com/watch?v=2xeJIv_K_eI) to get an overview of W&B Reports. -* Explore the [Reports gallery](./reports-gallery.md) for examples of live reports. -* Try the [Programmatic Workspaces](../../tutorials/workspaces.md) tutorial to learn how to create and customize your workspace. +* Explore the [Reports gallery](./reports-gallery/) for examples of live reports. +* Try the [Programmatic Workspaces](../../tutorials/workspaces/) tutorial to learn how to create and customize your workspace. * Read curated Reports in [W&B Fully Connected](http://wandb.me/fc). \ No newline at end of file diff --git a/content/guides/core/reports/cross-project-reports.md b/content/guides/core/reports/cross-project-reports.md index 6d866ca35..fc395c7fa 100644 --- a/content/guides/core/reports/cross-project-reports.md +++ b/content/guides/core/reports/cross-project-reports.md @@ -24,7 +24,7 @@ Share a view-only link to a report that is in a private project or team project. {{< img src="/images/reports/magic-links.gif" alt="" >}} -View-only report links add a secret access token to the URL, so anyone who opens the link can view the page. Anyone can use the magic link to view the report without logging in first. For customers on [W&B Local](../hosting/intro.md) private cloud installations, these links remain behind your firewall, so only members of your team with access to your private instance _and_ access to the view-only link can view the report. +View-only report links add a secret access token to the URL, so anyone who opens the link can view the page. Anyone can use the magic link to view the report without logging in first. For customers on [W&B Local](../hosting/intro/) private cloud installations, these links remain behind your firewall, so only members of your team with access to your private instance _and_ access to the view-only link can view the report. In **view-only mode**, someone who is not logged in can see the charts and mouse over to see tooltips of values, zoom in and out on charts, and scroll through columns in the table. When in view mode, they cannot create new charts or new table queries to explore the data. View-only visitors to the report link won't be able to click a run to get to the run page. Also, the view-only visitors would not be able to see the share modal but instead would see a tooltip on hover which says: `Sharing not available for view only access`. diff --git a/content/guides/core/reports/edit-a-report.md b/content/guides/core/reports/edit-a-report.md index 1bad1e8f4..5f95089dd 100644 --- a/content/guides/core/reports/edit-a-report.md +++ b/content/guides/core/reports/edit-a-report.md @@ -17,7 +17,7 @@ _Panel grids_ are a specific type of block that hold panels and _run sets_. Run {{% alert %}} -Check out the [Programmatic workspaces tutorial](../../tutorials/workspaces.md) for a step by step example on how create and customize a saved workspace view. +Check out the [Programmatic workspaces tutorial](../../tutorials/workspaces/) for a step by step example on how create and customize a saved workspace view. 
{{% /alert %}} {{% alert %}} diff --git a/content/guides/core/reports/reports-gallery.md b/content/guides/core/reports/reports-gallery.md index 4ef109a9d..f99be2cb6 100644 --- a/content/guides/core/reports/reports-gallery.md +++ b/content/guides/core/reports/reports-gallery.md @@ -52,6 +52,6 @@ Tell the story of a project, which you and others can reference later to underst See the [Learning Dexterity End-to-End Using W&B Reports](https://bit.ly/wandb-learning-dexterity) for an example of how W&B Reports were used to explore how the OpenAI Robotics team used W&B Reports to run massive machine learning projects. - + \ No newline at end of file diff --git a/content/guides/core/tables/_index.md b/content/guides/core/tables/_index.md index 06e701ffe..0f3f8487c 100644 --- a/content/guides/core/tables/_index.md +++ b/content/guides/core/tables/_index.md @@ -32,11 +32,11 @@ A Table is a two-dimensional grid of data where each column has a single type of Log a table with a few lines of code: -- [`wandb.init()`](../../ref/python/init.md): Create a [run](../runs/intro.md) to track results. -- [`wandb.Table()`](../../ref/python/data-types/table.md): Create a new table object. +- [`wandb.init()`](../../ref/python/init/): Create a [run](../runs/intro/) to track results. +- [`wandb.Table()`](../../ref/python/data-types/table/): Create a new table object. - `columns`: Set the column names. - `data`: Set the contents of the table. -- [`run.log()`](../../ref/python/log.md): Log the table to save it to W&B. +- [`run.log()`](../../ref/python/log/): Log the table to save it to W&B. ```python showLineNumbers import wandb @@ -47,5 +47,5 @@ run.log({"Table Name": my_table}) ``` ## How to get started -* [Quickstart](./tables-walkthrough.md): Learn to log data tables, visualize data, and query data. -* [Tables Gallery](./tables-gallery.md): See example use cases for Tables. \ No newline at end of file +* [Quickstart](./tables-walkthrough/): Learn to log data tables, visualize data, and query data. +* [Tables Gallery](./tables-gallery/): See example use cases for Tables. \ No newline at end of file diff --git a/content/guides/core/tables/tables-download.md b/content/guides/core/tables/tables-download.md index 1096f9720..24b140575 100644 --- a/content/guides/core/tables/tables-download.md +++ b/content/guides/core/tables/tables-download.md @@ -45,6 +45,6 @@ df.to_csv("example.csv", encoding="utf-8") ``` # Next Steps -- Check out the [reference documentation](../artifacts/construct-an-artifact.md) on `artifacts`. -- Go through our [Tables Walktrough](../tables/tables-walkthrough.md) guide. +- Check out the [reference documentation](../artifacts/construct-an-artifact/) on `artifacts`. +- Go through our [Tables Walktrough](../tables/tables-walkthrough/) guide. - Check out the [Dataframe](https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.html) reference docs. \ No newline at end of file diff --git a/content/guides/core/tables/tables-walkthrough.md b/content/guides/core/tables/tables-walkthrough.md index 59483e54b..44bd84595 100644 --- a/content/guides/core/tables/tables-walkthrough.md +++ b/content/guides/core/tables/tables-walkthrough.md @@ -18,11 +18,11 @@ Log a table with W&B. You can either construct a new table or pass a Pandas Data {{< tabpane text=true >}} {{% tab header="Construct a table" value="construct" %}} To construct and log a new Table, you will use: -- [`wandb.init()`](../../ref/python/init.md): Create a [run](../runs/intro.md) to track results. 
-- [`wandb.Table()`](../../ref/python/data-types/table.md): Create a new table object. +- [`wandb.init()`](../../ref/python/init/): Create a [run](../runs/intro/) to track results. +- [`wandb.Table()`](../../ref/python/data-types/table/): Create a new table object. - `columns`: Set the column names. - `data`: Set the contents of each row. -- [`run.log()`](../../ref/python/log.md): Log the table to save it to W&B. +- [`run.log()`](../../ref/python/log/): Log the table to save it to W&B. Here's an example: ```python @@ -49,7 +49,7 @@ my_table = wandb.Table(dataframe=df) wandb.log({"Table Name": my_table}) ``` -For more information on supported data types, see the [`wandb.Table`](../../ref/python/data-types/table.md) in the W&B API Reference Guide. +For more information on supported data types, see the [`wandb.Table`](../../ref/python/data-types/table/) in the W&B API Reference Guide. {{% /tab %}} {{< /tabpane >}} diff --git a/content/guides/hosting/_index.md b/content/guides/hosting/_index.md index 09b197db1..5ddcab60c 100644 --- a/content/guides/hosting/_index.md +++ b/content/guides/hosting/_index.md @@ -6,7 +6,7 @@ title: W&B Platform weight: 6 no_list: true --- -W&B Platform is the foundational infrastructure, tooling and governance scaffolding which supports the W&B products like [Core](../core.md), [Models](../models.md) and [Weave](../weave_platform.md). +W&B Platform is the foundational infrastructure, tooling and governance scaffolding which supports the W&B products like [Core](../core/), [Models](../models/) and [Weave](../weave_platform/). W&B Platform is available in three different deployment options: @@ -24,17 +24,17 @@ The following sections provide an overview of each deployment type. ### W&B Multi-tenant Cloud W&B Multi-tenant Cloud is a fully managed service deployed in W&B's cloud infrastructure, where you can seamlessly access the W&B products at the desired scale, with cost-efficient options for pricing, and with continuous updates for the latest features and functionalities. W&B recommends to use the Multi-tenant Cloud for your product trial, or to manage your production AI workflows if you do not need the security of a private deployment, self-service onboarding is important, and cost efficiency is critical. -See [W&B Multi-tenant Cloud](./hosting-options/saas_cloud.md) for more information. +See [W&B Multi-tenant Cloud](./hosting-options/saas_cloud/) for more information. ### W&B Dedicated Cloud W&B Dedicated Cloud is a single-tenant, fully managed service deployed in W&B's cloud infrastructure. It is the best place to onboard W&B if your organization requires conformance to strict governance controls including data residency, have need of advanced security capabilities, and are looking to optimize their AI operating costs by not having to build & manage the required infrastructure with security, scale & performance characteristics. -See [W&B Dedicated Cloud](./hosting-options/dedicated_cloud.md) for more information. +See [W&B Dedicated Cloud](./hosting-options/dedicated_cloud/) for more information. ### W&B Customer-Managed With this option, you can deploy and manage W&B Server on your own managed infrastructure. W&B Server is a self-contained packaged mechanism to run the W&B Platform & its supported W&B products. W&B recommends this option if all your existing infrastructure is on-prem, or your organization has strict regulatory needs that are not satisfied by W&B Dedicated Cloud. 
With this option, you are fully responsible to manage the provisioning, and continuous maintenance & upgrades of the infrastructure required to support W&B Server. -See [W&B Self Managed](./hosting-options/self-managed.md) for more information. +See [W&B Self Managed](./hosting-options/self-managed/) for more information. ## Next steps diff --git a/content/guides/hosting/data-security/data-encryption.md b/content/guides/hosting/data-security/data-encryption.md index a165f228a..9d5e02494 100644 --- a/content/guides/hosting/data-security/data-encryption.md +++ b/content/guides/hosting/data-security/data-encryption.md @@ -6,7 +6,7 @@ menu: title: Data encryption in Dedicated cloud --- -W&B uses a W&B-managed cloud-native key to encrypt the W&B-managed database and object storage in every [Dedicated cloud](../hosting-options/dedicated_cloud.md), by using the customer-managed encryption key (CMEK) capability in each cloud. In this case, W&B acts as a `customer` of the cloud provider, while providing the W&B platform as a service to you. Using a W&B-managed key means that W&B has control over the keys that it uses to encrypt the data in each cloud, thus doubling down on its promise to provide a highly safe and secure platform to all of its customers. +W&B uses a W&B-managed cloud-native key to encrypt the W&B-managed database and object storage in every [Dedicated cloud](../hosting-options/dedicated_cloud/), by using the customer-managed encryption key (CMEK) capability in each cloud. In this case, W&B acts as a `customer` of the cloud provider, while providing the W&B platform as a service to you. Using a W&B-managed key means that W&B has control over the keys that it uses to encrypt the data in each cloud, thus doubling down on its promise to provide a highly safe and secure platform to all of its customers. W&B uses a `unique key` to encrypt the data in each customer instance, providing another layer of isolation between Dedicated cloud tenants. The capability is available on AWS, Azure and GCP. diff --git a/content/guides/hosting/data-security/ip-allowlisting.md b/content/guides/hosting/data-security/ip-allowlisting.md index 67b2faf3b..96bd905a6 100644 --- a/content/guides/hosting/data-security/ip-allowlisting.md +++ b/content/guides/hosting/data-security/ip-allowlisting.md @@ -7,11 +7,11 @@ title: Configure IP allowlisting for Dedicated Cloud weight: 3 --- -You can restrict access to your [Dedicated Cloud](../hosting-options/dedicated_cloud.md) instance from only an authorized list of IP addresses. This applies to the access from your AI workloads to the W&B APIs and from your user browsers to the W&B app UI as well. Once IP allowlisting has been set up for your Dedicated Cloud instance, W&B denies any requests from other unauthorized locations. Reach out to your W&B team to configure IP allowlisting for your Dedicated Cloud instance. +You can restrict access to your [Dedicated Cloud](../hosting-options/dedicated_cloud/) instance from only an authorized list of IP addresses. This applies to the access from your AI workloads to the W&B APIs and from your user browsers to the W&B app UI as well. Once IP allowlisting has been set up for your Dedicated Cloud instance, W&B denies any requests from other unauthorized locations. Reach out to your W&B team to configure IP allowlisting for your Dedicated Cloud instance. IP allowlisting is available on Dedicated Cloud instances on AWS, GCP and Azure. -You can use IP allowlisting with [secure private connectivity](./private-connectivity.md). 
If you use IP allowlisting with secure private connectivity, W&B recommends using secure private connectivity for all traffic from your AI workloads and majority of the traffic from your user browsers if possible, while using IP allowlisting for instance administration from privileged locations. +You can use IP allowlisting with [secure private connectivity](./private-connectivity/). If you use IP allowlisting with secure private connectivity, W&B recommends using secure private connectivity for all traffic from your AI workloads and majority of the traffic from your user browsers if possible, while using IP allowlisting for instance administration from privileged locations. {{% alert color="secondary" %}} W&B strongly recommends to use [CIDR blocks](https://en.wikipedia.org/wiki/Classless_Inter-Domain_Routing) assigned to your corporate or business egress gateways rather than individual `/32` IP addresses. Using individual IP addresses is not scalable and has strict limits per cloud. diff --git a/content/guides/hosting/data-security/presigned-urls.md b/content/guides/hosting/data-security/presigned-urls.md index f37b366fc..12949b1d1 100644 --- a/content/guides/hosting/data-security/presigned-urls.md +++ b/content/guides/hosting/data-security/presigned-urls.md @@ -13,7 +13,7 @@ When needed, AI workloads or user browser clients within your network request pr ## Team-level access control -Each pre-signed URL is restricted to specific buckets based on [team level access control](../iam/manage-organization.md#add-and-manage-teams) in the W&B platform. If a user is part of a team which is mapped to a blob storage bucket using [secure storage connector](./secure-storage-connector.md), and if that user is part of only that team, then the pre-signed URLs generated for their requests would not have permissions to access blob storage buckets mapped to other teams. +Each pre-signed URL is restricted to specific buckets based on [team level access control](../iam/manage-organization.md#add-and-manage-teams) in the W&B platform. If a user is part of a team which is mapped to a blob storage bucket using [secure storage connector](./secure-storage-connector/), and if that user is part of only that team, then the pre-signed URLs generated for their requests would not have permissions to access blob storage buckets mapped to other teams. {{% alert %}} W&B recommends adding users to only the teams that they are supposed to be a part of. @@ -27,7 +27,7 @@ In case of AWS, one can use [VPC or IP address based network restriction](https: ## Audit logs -W&B also recommends to use [W&B audit logs](../monitoring-usage/audit-logging.md) in addition to blob storage specific audit logs. For latter, refer to [AWS S3 access logs](https://docs.aws.amazon.com/AmazonS3/latest/userguide/ServerLogs.html),[Google Cloud Storage audit logs](https://cloud.google.com/storage/docs/audit-logging) and [Monitor Azure blob storage](https://learn.microsoft.com/en-us/azure/storage/blobs/monitor-blob-storage). Admin and security teams can use audit logs to keep track of which user is doing what in the W&B product and take necessary action if they determine that some operations need to be limited for certain users. +W&B also recommends to use [W&B audit logs](../monitoring-usage/audit-logging/) in addition to blob storage specific audit logs. 
For latter, refer to [AWS S3 access logs](https://docs.aws.amazon.com/AmazonS3/latest/userguide/ServerLogs.html),[Google Cloud Storage audit logs](https://cloud.google.com/storage/docs/audit-logging) and [Monitor Azure blob storage](https://learn.microsoft.com/en-us/azure/storage/blobs/monitor-blob-storage). Admin and security teams can use audit logs to keep track of which user is doing what in the W&B product and take necessary action if they determine that some operations need to be limited for certain users. {{% alert %}} Pre-signed URLs are the only supported blob storage access mechanism in W&B. W&B recommends configuring some or all of the above list of security controls depending on your risk appetite. diff --git a/content/guides/hosting/data-security/private-connectivity.md b/content/guides/hosting/data-security/private-connectivity.md index 0e1ec0110..1fa4d62ca 100644 --- a/content/guides/hosting/data-security/private-connectivity.md +++ b/content/guides/hosting/data-security/private-connectivity.md @@ -7,7 +7,7 @@ title: Configure private connectivity to Dedicated Cloud weight: 4 --- -You can connect to your [Dedicated Cloud](../hosting-options/dedicated_cloud.md) instance over the cloud provider's secure private network. This applies to the access from your AI workloads to the W&B APIs and optionally from your user browsers to the W&B app UI as well. When using private connectivity, the relevant requests and responses do not transit through the public network or internet. +You can connect to your [Dedicated Cloud](../hosting-options/dedicated_cloud/) instance over the cloud provider's secure private network. This applies to the access from your AI workloads to the W&B APIs and optionally from your user browsers to the W&B app UI as well. When using private connectivity, the relevant requests and responses do not transit through the public network or internet. {{% alert %}} Secure private connectivity is available in preview as an advanced security option with Dedicated Cloud. @@ -25,4 +25,4 @@ Once enabled, W&B creates a private endpoint service for your instance and provi If you would like to use this feature, contact your W&B team. {{% /alert %}} -You can use secure private connectivity with [IP allowlisting](./ip-allowlisting.md). If you use secure private connectivity for IP allowlisting, W&B recommends that you secure private connectivity for all traffic from your AI workloads and majority of the traffic from your user browsers if possible, while using IP allowlisting for instance administration from privileged locations. \ No newline at end of file +You can use secure private connectivity with [IP allowlisting](./ip-allowlisting/). If you use secure private connectivity for IP allowlisting, W&B recommends that you secure private connectivity for all traffic from your AI workloads and majority of the traffic from your user browsers if possible, while using IP allowlisting for instance administration from privileged locations. 
\ No newline at end of file diff --git a/content/guides/hosting/data-security/secure-storage-connector.md b/content/guides/hosting/data-security/secure-storage-connector.md index e8c1148cf..0752ad50f 100644 --- a/content/guides/hosting/data-security/secure-storage-connector.md +++ b/content/guides/hosting/data-security/secure-storage-connector.md @@ -7,11 +7,11 @@ title: Bring your own bucket (BYOB) weight: 1 --- -Bring your own bucket (BYOB) allows you to store W&B artifacts and other related sensitive data in your own cloud or on-prem infrastructure. In case of [Dedicated cloud](../hosting-options/dedicated_cloud.md) or [SaaS Cloud](../hosting-options/saas_cloud.md), data that you store in your bucket is not copied to the W&B managed infrastructure. +Bring your own bucket (BYOB) allows you to store W&B artifacts and other related sensitive data in your own cloud or on-prem infrastructure. In case of [Dedicated cloud](../hosting-options/dedicated_cloud/) or [SaaS Cloud](../hosting-options/saas_cloud/), data that you store in your bucket is not copied to the W&B managed infrastructure. {{% alert %}} -* Communication between W&B SDK / CLI / UI and your buckets occurs using [pre-signed URLs](./presigned-urls.md). -* W&B uses a garbage collection process to delete W&B Artifacts. For more information, see [Deleting Artifacts](../../artifacts/delete-artifacts.md). +* Communication between W&B SDK / CLI / UI and your buckets occurs using [pre-signed URLs](./presigned-urls/). +* W&B uses a garbage collection process to delete W&B Artifacts. For more information, see [Deleting Artifacts](../../artifacts/delete-artifacts/). * You can specify a sub-path when configuring a bucket, to ensure that W&B does not store any files in a folder at the root of the bucket. It can help you better conform to your organzation's bucket governance policy. {{% /alert %}} @@ -26,7 +26,7 @@ You can configure your bucket at both the instance level and separately for one For example, suppose you have a team called Kappa in your organization. Your organization (and Team Kappa) use the Instance level storage bucket by default. Next, you create a team called Omega. When you create Team Omega, you configure a Team level storage bucket for that team. Files generated by Team Omega are not accessible by Team Kappa. However, files created by Team Kappa are accessible by Team Omega. If you want to isolate data for Team Kappa, you must configure a Team level storage bucket for them as well. {{% alert %}} -Team level storage bucket provides the same benefits for [Self-managed](../hosting-options/self-managed.md) instances, especially when different business units and departments share an instance to efficiently utilize the infrastructure and administrative resources. This also applies to firms that have separate project teams managing AI workflows for separate customer engagements. +Team level storage bucket provides the same benefits for [Self-managed](../hosting-options/self-managed/) instances, especially when different business units and departments share an instance to efficiently utilize the infrastructure and administrative resources. This also applies to firms that have separate project teams managing AI workflows for separate customer engagements. 
{{% /alert %}} ## Availability matrix @@ -44,7 +44,7 @@ Once you configure a instance or team level storage bucket for your Dedicated cl ## Cross-cloud or S3-compatible storage for team-level BYOB -You can connect to a cloud-native storage bucket in another cloud or to an S3-compatible storage bucket like [MinIO](https://github.com/minio/minio) for team-level BYOB in your [Dedicated cloud](../hosting-options/dedicated_cloud.md) or [Self-managed](../hosting-options/self-managed.md) instance. +You can connect to a cloud-native storage bucket in another cloud or to an S3-compatible storage bucket like [MinIO](https://github.com/minio/minio) for team-level BYOB in your [Dedicated cloud](../hosting-options/dedicated_cloud/) or [Self-managed](../hosting-options/self-managed/) instance. To enable the use of cross-cloud or S3-compatible storage, specify the storage bucket including the relevant access key in one of the following formats, using the `GORILLA_SUPPORTED_FILE_STORES` environment variable for your W&B instance. @@ -81,7 +81,7 @@ gs://:@ {{% alert %}} -Connectivity to S3-compatible storage for team-level BYOB is not available in [SaaS Cloud](../hosting-options/saas_cloud.md). Also, connectivity to an AWS bucket for team-level BYOB is cross-cloud in [SaaS Cloud](../hosting-options/saas_cloud.md), as that instance resides in GCP. That cross-cloud connectivity doesn't use the access key and environment variable based mechanism as outlined previously for [Dedicated cloud](../hosting-options/dedicated_cloud.md) and [Self-managed](../hosting-options/self-managed.md) instances. +Connectivity to S3-compatible storage for team-level BYOB is not available in [SaaS Cloud](../hosting-options/saas_cloud/). Also, connectivity to an AWS bucket for team-level BYOB is cross-cloud in [SaaS Cloud](../hosting-options/saas_cloud/), as that instance resides in GCP. That cross-cloud connectivity doesn't use the access key and environment variable based mechanism as outlined previously for [Dedicated cloud](../hosting-options/dedicated_cloud/) and [Self-managed](../hosting-options/self-managed/) instances. {{% /alert %}} Reach out to W&B Support at support@wandb.com for more information. @@ -134,10 +134,10 @@ W&B recommends that you use a Terraform module managed by W&B to provision a sto Replace `` and `` accordingly. - If you are using [SaaS Cloud](../hosting-options/saas_cloud.md) or [Dedicated cloud](../hosting-options/dedicated_cloud.md), replace `` with the corresponding value: + If you are using [SaaS Cloud](../hosting-options/saas_cloud/) or [Dedicated cloud](../hosting-options/dedicated_cloud/), replace `` with the corresponding value: - * For [SaaS Cloud](../hosting-options/saas_cloud.md): `arn:aws:iam::725579432336:role/WandbIntegration` - * For [Dedicated cloud](../hosting-options/dedicated_cloud.md): `arn:aws:iam::830241207209:root` + * For [SaaS Cloud](../hosting-options/saas_cloud/): `arn:aws:iam::725579432336:role/WandbIntegration` + * For [Dedicated cloud](../hosting-options/dedicated_cloud/): `arn:aws:iam::830241207209:root` This policy grants your AWS account full access to the key and also assigns the required permissions to the AWS account hosting the W&B Platform. Keep a record of the KMS Key ARN. @@ -172,7 +172,7 @@ W&B recommends that you use a Terraform module managed by W&B to provision a sto ] ``` - 5. 
Grant the required S3 permissions to the AWS account hosting the W&B Platform, which requires these permissions to generate [pre-signed URLs](./presigned-urls.md) that AI workloads in your cloud infrastructure or user browsers utilize to access the bucket. + 5. Grant the required S3 permissions to the AWS account hosting the W&B Platform, which requires these permissions to generate [pre-signed URLs](./presigned-urls/) that AI workloads in your cloud infrastructure or user browsers utilize to access the bucket. ```json { @@ -205,14 +205,14 @@ W&B recommends that you use a Terraform module managed by W&B to provision a sto } ``` - Replace `` accordingly and keep a record of the bucket name. If you are using [Dedicated cloud](../hosting-options/dedicated_cloud.md), share the bucket name with your W&B team in case of instance level BYOB. In case of team level BYOB on any deployment type, [configure the bucket while creating the team](#configure-byob-in-wb). + Replace `` accordingly and keep a record of the bucket name. If you are using [Dedicated cloud](../hosting-options/dedicated_cloud/), share the bucket name with your W&B team in case of instance level BYOB. In case of team level BYOB on any deployment type, [configure the bucket while creating the team](#configure-byob-in-wb). - If you are using [SaaS Cloud](../hosting-options/saas_cloud.md) or [Dedicated cloud](../hosting-options/dedicated_cloud.md), replace `` with the corresponding value. + If you are using [SaaS Cloud](../hosting-options/saas_cloud/) or [Dedicated cloud](../hosting-options/dedicated_cloud/), replace `` with the corresponding value. - * For [SaaS Cloud](../hosting-options/saas_cloud.md): `arn:aws:iam::725579432336:role/WandbIntegration` - * For [Dedicated cloud](../hosting-options/dedicated_cloud.md): `arn:aws:iam::830241207209:root` + * For [SaaS Cloud](../hosting-options/saas_cloud/): `arn:aws:iam::725579432336:role/WandbIntegration` + * For [Dedicated cloud](../hosting-options/dedicated_cloud/): `arn:aws:iam::830241207209:root` - For more details, see the [AWS self-managed hosting guide](../self-managed/aws-tf.md). + For more details, see the [AWS self-managed hosting guide](../self-managed/aws-tf/). {{% /tab %}} {{% tab header="GCP" value="gcp"%}} @@ -253,12 +253,12 @@ W&B recommends that you use a Terraform module managed by W&B to provision a sto gsutil cors get gs:// ``` -2. If you are using [SaaS Cloud](../hosting-options/saas_cloud.md) or [Dedicated cloud](../hosting-options/dedicated_cloud.md), grant the `Storage Admin` role to the GCP service account linked to the W&B Platform: +2. If you are using [SaaS Cloud](../hosting-options/saas_cloud/) or [Dedicated cloud](../hosting-options/dedicated_cloud/), grant the `Storage Admin` role to the GCP service account linked to the W&B Platform: - * For [SaaS Cloud](../hosting-options/saas_cloud.md), the account is: `wandb-integration@wandb-production.iam.gserviceaccount.com` - * For [Dedicated cloud](../hosting-options/dedicated_cloud.md) the account is: `deploy@wandb-production.iam.gserviceaccount.com` + * For [SaaS Cloud](../hosting-options/saas_cloud/), the account is: `wandb-integration@wandb-production.iam.gserviceaccount.com` + * For [Dedicated cloud](../hosting-options/dedicated_cloud/) the account is: `deploy@wandb-production.iam.gserviceaccount.com` - Keep a record of the bucket name. If you are using [Dedicated cloud](../hosting-options/dedicated_cloud.md), share the bucket name with your W&B team in case of instance level BYOB. 
In case of team level BYOB on any deployment type, [configure the bucket while creating the team](#configure-byob-in-wb). + Keep a record of the bucket name. If you are using [Dedicated cloud](../hosting-options/dedicated_cloud/), share the bucket name with your W&B team in case of instance level BYOB. In case of team level BYOB on any deployment type, [configure the bucket while creating the team](#configure-byob-in-wb). {{% /tab %}} {{% tab header="Azure" value="azure"%}} @@ -281,9 +281,9 @@ W&B recommends that you use a Terraform module managed by W&B to provision a sto | Exposed Headers | `*` | | Max Age | `3600` | -2. Generate a storage account access key, and keep a record of that along with the storage account name. If you are using [Dedicated cloud](../hosting-options/dedicated_cloud.md), share the storage account name and access key with your W&B team using a secure sharing mechanism. +2. Generate a storage account access key, and keep a record of that along with the storage account name. If you are using [Dedicated cloud](../hosting-options/dedicated_cloud/), share the storage account name and access key with your W&B team using a secure sharing mechanism. - For the team level BYOB, W&B recommends that you use [Terraform](https://github.com/wandb/terraform-azurerm-wandb/tree/main/modules/secure_storage_connector) to provision the Azure Blob Storage bucket along with the necessary access mechanism and permissions. If you use [Dedicated cloud](../hosting-options/dedicated_cloud.md), provide the OIDC issuer URL for your instance. Make a note of details that you need to [configure the bucket while creating the team](#configure-byob-in-wb): + For the team level BYOB, W&B recommends that you use [Terraform](https://github.com/wandb/terraform-azurerm-wandb/tree/main/modules/secure_storage_connector) to provision the Azure Blob Storage bucket along with the necessary access mechanism and permissions. If you use [Dedicated cloud](../hosting-options/dedicated_cloud/), provide the OIDC issuer URL for your instance. Make a note of details that you need to [configure the bucket while creating the team](#configure-byob-in-wb): * Storage account name * Storage container name @@ -298,7 +298,7 @@ W&B recommends that you use a Terraform module managed by W&B to provision a sto {{% tab header="Team level" value="team" %}} {{% alert %}} -If you're connecting to a cloud-native storage bucket in another cloud or to an S3-compatible storage bucket like [MinIO](https://github.com/minio/minio) for team-level BYOB in your [Dedicated cloud](../hosting-options/dedicated_cloud.md) or [Self-managed](../hosting-options/self-managed.md) instance, refer to [Cross-cloud or S3-compatible storage for team-level BYOB](#cross-cloud-or-s3-compatible-storage-for-team-level-byob). In such cases, you must specify the storage bucket using the `GORILLA_SUPPORTED_FILE_STORES` environment variable for your W&B instance, before you configure it for a team using the instructions below. +If you're connecting to a cloud-native storage bucket in another cloud or to an S3-compatible storage bucket like [MinIO](https://github.com/minio/minio) for team-level BYOB in your [Dedicated cloud](../hosting-options/dedicated_cloud/) or [Self-managed](../hosting-options/self-managed/) instance, refer to [Cross-cloud or S3-compatible storage for team-level BYOB](#cross-cloud-or-s3-compatible-storage-for-team-level-byob). 
In such cases, you must specify the storage bucket using the `GORILLA_SUPPORTED_FILE_STORES` environment variable for your W&B instance, before you configure it for a team using the instructions below. {{% /alert %}} To configure a storage bucket at the team level when you create a W&B Team: @@ -310,11 +310,11 @@ To configure a storage bucket at the team level when you create a W&B Team: Multiple W&B Teams can use the same cloud storage bucket. To enable this, select an existing cloud storage bucket from the dropdown. 4. From the **Cloud provider** dropdown, select your cloud provider. -5. Provide the name of your storage bucket for the **Name** field. If you have a [Dedicated cloud](../hosting-options/dedicated_cloud.md) or [Self-managed](../hosting-options/self-managed.md) instance on Azure, provide the values for **Account name** and **Container name** fields. +5. Provide the name of your storage bucket for the **Name** field. If you have a [Dedicated cloud](../hosting-options/dedicated_cloud/) or [Self-managed](../hosting-options/self-managed/) instance on Azure, provide the values for **Account name** and **Container name** fields. 6. (Optional) Provide the bucket sub-path in the optional **Path** field. Do this if you would not like W&B to store any files in a folder at the root of the bucket. 7. (Optional if using AWS bucket) Provide the ARN of your KMS encryption key for the **KMS key ARN** field. 8. (Optional if using Azure bucket) Provide the values for the **Tenant ID** and the **Managed Identity Client ID** fields. -9. (Optional on [SaaS Cloud](../hosting-options/saas_cloud.md)) Optionally invite team members when creating the team. +9. (Optional on [SaaS Cloud](../hosting-options/saas_cloud/)) Optionally invite team members when creating the team. 10. Press the **Create Team** button. {{< img src="/images/hosting/prod_setup_secure_storage.png" alt="" >}} diff --git a/content/guides/hosting/env-vars.md b/content/guides/hosting/env-vars.md index a3cb475c5..225d1cf62 100644 --- a/content/guides/hosting/env-vars.md +++ b/content/guides/hosting/env-vars.md @@ -8,7 +8,7 @@ title: Configure environment variables weight: 7 --- -In addition to configuring instance level settings via the System Settings admin UI, W&B also provides a way to configure these values via code using Environment Variables. Also, refer to [advanced configuration for IAM](./iam/advanced_env_vars.md). +In addition to configuring instance level settings via the System Settings admin UI, W&B also provides a way to configure these values via code using Environment Variables. Also, refer to [advanced configuration for IAM](./iam/advanced_env_vars/). ## Environment variable reference diff --git a/content/guides/hosting/hosting-options/dedicated_cloud/_index.md b/content/guides/hosting/hosting-options/dedicated_cloud/_index.md index 1f8785aac..6455e59fd 100644 --- a/content/guides/hosting/hosting-options/dedicated_cloud/_index.md +++ b/content/guides/hosting/hosting-options/dedicated_cloud/_index.md @@ -11,38 +11,38 @@ url: guides/hosting/hosting-options/dedicated_cloud W&B Dedicated Cloud is a single-tenant, fully managed platform deployed in W&B's AWS, GCP or Azure cloud accounts. Each Dedicated Cloud instance has its own isolated network, compute and storage from other W&B Dedicated Cloud instances. Your W&B specific metadata and data is stored in an isolated cloud storage and is processed using isolated cloud compute services. 
-W&B Dedicated Cloud is available in [multiple global regions for each cloud provider](./dedicated_regions.md) +W&B Dedicated Cloud is available in [multiple global regions for each cloud provider](./dedicated_regions/) ## Data security -You can bring your own bucket (BYOB) using the [secure storage connector](../data-security/secure-storage-connector.md) at the [instance and team levels](../data-security/secure-storage-connector.md#configuration-options) to store your files such as models, datasets, and more. +You can bring your own bucket (BYOB) using the [secure storage connector](../data-security/secure-storage-connector/) at the [instance and team levels](../data-security/secure-storage-connector.md#configuration-options) to store your files such as models, datasets, and more. Similar to W&B Multi-tenant Cloud, you can configure a single bucket for multiple teams or you can use separate buckets for different teams. If you do not configure secure storage connector for a team, that data is stored in the instance level bucket. {{< img src="/images/hosting/dedicated_cloud_arch.png" alt="" >}} -In addition to BYOB with secure storage connector, you can utilize [IP allowlisting](../data-security/ip-allowlisting.md) to restrict access to your Dedicated Cloud instance from only trusted network locations. +In addition to BYOB with secure storage connector, you can utilize [IP allowlisting](../data-security/ip-allowlisting/) to restrict access to your Dedicated Cloud instance from only trusted network locations. -You can also privately connect to your Dedicated Cloud instance using [cloud provider's secure connectivity solution](../data-security/private-connectivity.md). +You can also privately connect to your Dedicated Cloud instance using [cloud provider's secure connectivity solution](../data-security/private-connectivity/). ## Identity and access management (IAM) Use the identity and access management capabilities for secure authentication and effective authorization in your W&B Organization. The following features are available for IAM in Dedicated Cloud instances: -* Authenticate with [SSO using OpenID Connect (OIDC)](../iam/sso.md) or with [LDAP](../iam/ldap.md). +* Authenticate with [SSO using OpenID Connect (OIDC)](../iam/sso/) or with [LDAP](../iam/ldap/). * [Configure appropriate user roles](../iam/manage-organization.md#assign-or-update-a-users-role) at the scope of the organization and within a team. -* Define the scope of a W&B project to limit who can view, edit, and submit W&B runs to it with [restricted projects](../iam/restricted-projects.md). -* Leverage JSON Web Tokens with [identity federation](../iam/identity_federation.md) to access W&B APIs. +* Define the scope of a W&B project to limit who can view, edit, and submit W&B runs to it with [restricted projects](../iam/restricted-projects/). +* Leverage JSON Web Tokens with [identity federation](../iam/identity_federation/) to access W&B APIs. ## Monitor -Use [Audit logs](../monitoring-usage/audit-logging.md) to track user activity within your teams and to conform to your enterprise governance requirements. Also, you can view organization usage in our Dedicated Cloud instance with [W&B Organization Dashboard](../monitoring-usage/org_dashboard.md). +Use [Audit logs](../monitoring-usage/audit-logging/) to track user activity within your teams and to conform to your enterprise governance requirements. 
Also, you can view organization usage in your Dedicated Cloud instance with the [W&B Organization Dashboard](../monitoring-usage/org_dashboard/). ## Maintenance Similar to W&B Multi-tenant Cloud, you do not incur the overhead and costs of provisioning and maintaining the W&B platform with Dedicated Cloud. -To understand how W&B manages updates on Dedicated Cloud, refer to the [server release process](../server-release-process.md). +To understand how W&B manages updates on Dedicated Cloud, refer to the [server release process](../server-release-process/). ## Compliance @@ -50,7 +50,7 @@ Security controls for W&B Dedicated Cloud are periodically audited internally an ## Migration options -Migration to Dedicated Cloud from a [Self-managed instance](./self-managed.md) or [Multi-tenant Cloud](./saas_cloud.md) is supported. +Migration to Dedicated Cloud from a [Self-managed instance](./self-managed/) or [Multi-tenant Cloud](./saas_cloud/) is supported. ## Next steps diff --git a/content/guides/hosting/hosting-options/saas_cloud.md b/content/guides/hosting/hosting-options/saas_cloud.md index 0cbf42158..d64ba83ba 100644 --- a/content/guides/hosting/hosting-options/saas_cloud.md +++ b/content/guides/hosting/hosting-options/saas_cloud.md @@ -15,14 +15,14 @@ W&B Multi-tenant Cloud is a fully managed platform deployed in W&B's Google Clou For non enterprise plan users, all data is only stored in the shared cloud storage and is processed with shared cloud compute services. Depending on your pricing plan, you may be subject to storage limits. -Enterprise plan users can [bring their own bucket (BYOB) using the secure storage connector](../data-security/secure-storage-connector.md) at the [team level](../data-security/secure-storage-connector.md#configuration-options) to store their files such as models, datasets, and more. You can configure a single bucket for multiple teams or you can use separate buckets for different W&B Teams. If you do not configure secure storage connector for a team, that data is stored in the shared cloud storage. +Enterprise plan users can [bring their own bucket (BYOB) using the secure storage connector](../data-security/secure-storage-connector/) at the [team level](../data-security/secure-storage-connector.md#configuration-options) to store their files such as models, datasets, and more. You can configure a single bucket for multiple teams or you can use separate buckets for different W&B Teams. If you do not configure secure storage connector for a team, that data is stored in the shared cloud storage. ## Identity and access management (IAM) If you are on enterprise plan, you can use the identity and access managements capabilities for secure authentication and effective authorization in your W&B Organization. The following features are available for IAM in Multi-tenant Cloud: * SSO authentication with OIDC or SAML. Reach out to your W&B team or support if you would like to configure SSO for your organization. * [Configure appropriate user roles](../iam/manage-organization.md#assign-or-update-a-users-role) at the scope of the organization and within a team. -* Define the scope of a W&B project to limit who can view, edit, and submit W&B runs to it with [restricted projects](../iam/restricted-projects.md). +* Define the scope of a W&B project to limit who can view, edit, and submit W&B runs to it with [restricted projects](../iam/restricted-projects/). ## Monitor Organization admins can manage usage and billing for their account from the `Billing` tab in their account view.
If using the shared cloud storage on Multi-tenant Cloud, an admin can optimize storage usage across different teams in their organization. diff --git a/content/guides/hosting/hosting-options/self-managed/_index.md b/content/guides/hosting/hosting-options/self-managed/_index.md index 7c8f7bc37..081cd6b12 100644 --- a/content/guides/hosting/hosting-options/self-managed/_index.md +++ b/content/guides/hosting/hosting-options/self-managed/_index.md @@ -13,7 +13,7 @@ cascade: ## Use self-managed cloud or on-prem infrastructure {{% alert %}} -W&B recommends fully managed deployment options such as [W&B Multi-tenant Cloud](./saas_cloud.md) or [W&B Dedicated Cloud](./dedicated_cloud.md) deployment types. W&B fully managed services are simple and secure to use, with minimum to no configuration required. +W&B recommends fully managed deployment options such as [W&B Multi-tenant Cloud](./saas_cloud/) or [W&B Dedicated Cloud](./dedicated_cloud/) deployment types. W&B fully managed services are simple and secure to use, with minimum to no configuration required. {{% /alert %}} Deploy W&B Server on your [AWS, GCP, or Azure cloud account](#deploy-wb-server-within-self-managed-cloud-accounts) or within your [on-premises infrastructure](#deploy-wb-server-in-on-prem-infrastructure). @@ -26,7 +26,7 @@ Your IT/DevOps/MLOps team is responsible for provisioning your deployment, manag W&B recommends that you use official W&B Terraform scripts to deploy W&B Server into your AWS, GCP, or Azure cloud account. -See specific cloud provider documentation for more information on how to set up W&B Server in [AWS](../self-managed/aws-tf.md), [GCP](../self-managed/gcp-tf.md) or [Azure](../self-managed/azure-tf.md). +See specific cloud provider documentation for more information on how to set up W&B Server in [AWS](../self-managed/aws-tf/), [GCP](../self-managed/gcp-tf/) or [Azure](../self-managed/azure-tf/). ## Deploy W&B Server in on-prem infrastructure @@ -37,7 +37,7 @@ You need to configure several infrastructure components in order to set up W&B S - Amazon S3-compatible object storage - Redis cache cluster -See [Install on on-prem infrastructure](../self-managed/bare-metal.md) for more information on how to install W&B Server on your on-prem infrastructure. W&B can provide recommendations for the different components and provide guidance through the installation process. +See [Install on on-prem infrastructure](../self-managed/bare-metal/) for more information on how to install W&B Server on your on-prem infrastructure. W&B can provide recommendations for the different components and provide guidance through the installation process. ## Deploy W&B Server on a custom cloud platform diff --git a/content/guides/hosting/hosting-options/self-managed/bare-metal.md b/content/guides/hosting/hosting-options/self-managed/bare-metal.md index aa6f45064..11dbdcb89 100644 --- a/content/guides/hosting/hosting-options/self-managed/bare-metal.md +++ b/content/guides/hosting/hosting-options/self-managed/bare-metal.md @@ -8,7 +8,7 @@ title: Deploy W&B Platform On-premises --- {{% alert %}} -W&B recommends fully managed deployment options such as [W&B Multi-tenant Cloud](../hosting-options/saas_cloud.md) or [W&B Dedicated Cloud](../hosting-options//dedicated_cloud.md) deployment types. W&B fully managed services are simple and secure to use, with minimum to no configuration required. 
+W&B recommends fully managed deployment options such as [W&B Multi-tenant Cloud](../hosting-options/saas_cloud/) or [W&B Dedicated Cloud](../hosting-options//dedicated_cloud/) deployment types. W&B fully managed services are simple and secure to use, with minimum to no configuration required. {{% /alert %}} @@ -78,7 +78,7 @@ Some tested and working providers: #### Secure Storage Connector -The [Secure Storage Connector](../data-security/secure-storage-connector.md) is not available for teams at this time for bare metal deployments. +The [Secure Storage Connector](../data-security/secure-storage-connector/) is not available for teams at this time for bare metal deployments. ## MySQL database diff --git a/content/guides/hosting/hosting-options/self-managed/install-on-public-cloud/aws-tf.md b/content/guides/hosting/hosting-options/self-managed/install-on-public-cloud/aws-tf.md index dca0b61fe..c59548c5d 100644 --- a/content/guides/hosting/hosting-options/self-managed/install-on-public-cloud/aws-tf.md +++ b/content/guides/hosting/hosting-options/self-managed/install-on-public-cloud/aws-tf.md @@ -9,7 +9,7 @@ weight: 10 --- {{% alert %}} -W&B recommends fully managed deployment options such as [W&B Multi-tenant Cloud](../hosting-options/saas_cloud.md) or [W&B Dedicated Cloud](../hosting-options//dedicated_cloud.md) deployment types. W&B fully managed services are simple and secure to use, with minimum to no configuration required. +W&B recommends fully managed deployment options such as [W&B Multi-tenant Cloud](../hosting-options/saas_cloud/) or [W&B Dedicated Cloud](../hosting-options//dedicated_cloud/) deployment types. W&B fully managed services are simple and secure to use, with minimum to no configuration required. {{% /alert %}} W&B recommends using the [W&B Server AWS Terraform Module](https://registry.terraform.io/modules/wandb/wandb/aws/latest) to deploy the platform on AWS. diff --git a/content/guides/hosting/hosting-options/self-managed/install-on-public-cloud/azure-tf.md b/content/guides/hosting/hosting-options/self-managed/install-on-public-cloud/azure-tf.md index 3664cbeb3..ff49f85a8 100644 --- a/content/guides/hosting/hosting-options/self-managed/install-on-public-cloud/azure-tf.md +++ b/content/guides/hosting/hosting-options/self-managed/install-on-public-cloud/azure-tf.md @@ -9,7 +9,7 @@ weight: 30 --- {{% alert %}} -W&B recommends fully managed deployment options such as [W&B Multi-tenant Cloud](../hosting-options/saas_cloud.md) or [W&B Dedicated Cloud](../hosting-options//dedicated_cloud.md) deployment types. W&B fully managed services are simple and secure to use, with minimum to no configuration required. +W&B recommends fully managed deployment options such as [W&B Multi-tenant Cloud](../hosting-options/saas_cloud/) or [W&B Dedicated Cloud](../hosting-options//dedicated_cloud/) deployment types. W&B fully managed services are simple and secure to use, with minimum to no configuration required. 
{{% /alert %}} diff --git a/content/guides/hosting/hosting-options/self-managed/install-on-public-cloud/gcp-tf.md b/content/guides/hosting/hosting-options/self-managed/install-on-public-cloud/gcp-tf.md index e592d98d7..31bcce5c8 100644 --- a/content/guides/hosting/hosting-options/self-managed/install-on-public-cloud/gcp-tf.md +++ b/content/guides/hosting/hosting-options/self-managed/install-on-public-cloud/gcp-tf.md @@ -9,7 +9,7 @@ weight: 20 --- {{% alert %}} -W&B recommends fully managed deployment options such as [W&B Multi-tenant Cloud](../hosting-options/saas_cloud.md) or [W&B Dedicated Cloud](../hosting-options//dedicated_cloud.md) deployment types. W&B fully managed services are simple and secure to use, with minimum to no configuration required. +W&B recommends fully managed deployment options such as [W&B Multi-tenant Cloud](../hosting-options/saas_cloud/) or [W&B Dedicated Cloud](../hosting-options//dedicated_cloud/) deployment types. W&B fully managed services are simple and secure to use, with minimum to no configuration required. {{% /alert %}} diff --git a/content/guides/hosting/hosting-options/self-managed/install-on-public-cloud/ref-arch.md b/content/guides/hosting/hosting-options/self-managed/install-on-public-cloud/ref-arch.md index 4b70e9aa9..e45ff620c 100644 --- a/content/guides/hosting/hosting-options/self-managed/install-on-public-cloud/ref-arch.md +++ b/content/guides/hosting/hosting-options/self-managed/install-on-public-cloud/ref-arch.md @@ -21,7 +21,7 @@ Consider carefully whether a self-managed approach with W&B is suitable for your A strong understanding of how to run and maintain production-grade application is an important prerequisite before you deploy self-managed W&B. If your team needs assistance, our Professional Services team and partners offer support for implementation and optimization. -To learn more about managed solutions for running W&B instead of managing it yourself, refer to [W&B Multi-tenant Cloud](../hosting-options/saas_cloud.md) and [W&B Dedicated Cloud](../hosting-options/dedicated_cloud.md). +To learn more about managed solutions for running W&B instead of managing it yourself, refer to [W&B Multi-tenant Cloud](../hosting-options/saas_cloud/) and [W&B Dedicated Cloud](../hosting-options/dedicated_cloud/). ## Infrastructure @@ -38,7 +38,7 @@ The storage layer consists of a MySQL database and object storage. The MySQL dat ## Infrastructure requirements ### Kubernetes -The W&B Server application is deployed as a [Kubernetes Operator](../operator.md) that deploys multiple Pods. For this reason, W&B requires a Kubernetes cluster with: +The W&B Server application is deployed as a [Kubernetes Operator](../operator/) that deploys multiple Pods. For this reason, W&B requires a Kubernetes cluster with: - A fully configured and functioning Ingress controller - The capability to provision Persistent Volumes. diff --git a/content/guides/hosting/hosting-options/self-managed/kubernetes-operator/_index.md b/content/guides/hosting/hosting-options/self-managed/kubernetes-operator/_index.md index 511cb12cb..72ede16d2 100644 --- a/content/guides/hosting/hosting-options/self-managed/kubernetes-operator/_index.md +++ b/content/guides/hosting/hosting-options/self-managed/kubernetes-operator/_index.md @@ -62,7 +62,7 @@ Satisfy the following requirements to deploy W&B with the W&B Kubernetes operato Refer to the [reference architecture](./self-managed/ref-arch.md#infrastructure-requirements). 
In addition, [obtain a valid W&B Server license](./hosting-options/self-managed.md#obtain-your-wb-server-license). -See [this](./self-managed/bare-metal.md) guide for a detailed explanation on how to set up and configure a self-managed installation. +See [this](./self-managed/bare-metal/) guide for a detailed explanation on how to set up and configure a self-managed installation. Depending on the installation method, you might need to meet the following requirements: * Kubectl installed and configured with the correct Kubernetes cluster context. @@ -168,7 +168,7 @@ For a detailed description on how to use these modules, refer to this [section]( ### Verify the installation -To verify the installation, W&B recommends using the [W&B CLI](../../ref/cli/README.md). The verify command executes several tests that verify all components and configurations. +To verify the installation, W&B recommends using the [W&B CLI](../../ref/cli/README/). The verify command executes several tests that verify all components and configurations. {{% alert %}} This step assumes that the first admin user account is created with the browser. {{% /alert %}} @@ -349,7 +349,7 @@ Follow these steps to migrate to the Operator-based Helm chart: This section describes the configuration options for W&B Server application. The application receives its configuration as custom resource definition named [WeightsAndBiases](#how-it-works). Some configuration options are exposed with the below configuration, some need to be set as environment variables. -The documentation has two lists of environment variables: [basic](./env-vars.md) and [advanced](./iam/advanced_env_vars.md). Only use environment variables if the configuration option that you need are not exposed using Helm Chart. +The documentation has two lists of environment variables: [basic](./env-vars/) and [advanced](./iam/advanced_env_vars/). Only use environment variables if the configuration options that you need are not exposed using the Helm Chart. The W&B Server application configuration file for a production deployment requires the following contents. This YAML file defines the desired state of your W&B deployment, including the version, environment variables, external resources like databases, and other necessary settings. diff --git a/content/guides/hosting/hosting-options/self-managed/server-upgrade-process.md b/content/guides/hosting/hosting-options/self-managed/server-upgrade-process.md index c0de6bccc..33a4e4fc3 100644 --- a/content/guides/hosting/hosting-options/self-managed/server-upgrade-process.md +++ b/content/guides/hosting/hosting-options/self-managed/server-upgrade-process.md @@ -95,7 +95,7 @@ Update your license and version with Terraform. The proceeding table lists W&B m --reuse-values --set license=$LICENSE --set image.tag=$TAG ``` -For more details, see the [upgrade guide](https://github.com/wandb/helm-charts/blob/main/upgrade.md) in the public repository. +For more details, see the [upgrade guide](https://github.com/wandb/helm-charts/blob/main/upgrade.md) in the public repository. ## Update with admin UI diff --git a/content/guides/hosting/iam/_index.md b/content/guides/hosting/iam/_index.md index ed444bb4c..14843a292 100644 --- a/content/guides/hosting/iam/_index.md +++ b/content/guides/hosting/iam/_index.md @@ -16,11 +16,11 @@ W&B Platform has three IAM scopes within W&B: [Organizations](#organization), [T An *Organization* is the root scope in your W&B account or instance.
All actions in your account or instance take place within the context of that root scope, including managing users, managing teams, managing projects within teams, tracking usage and more. -If you are using [Multi-tenant Cloud](../hosting-options/saas_cloud.md), you may have more than one organization where each may correspond to a business unit, a personal user, a joint partnership with another business and more. +If you are using [Multi-tenant Cloud](../hosting-options/saas_cloud/), you may have more than one organization where each may correspond to a business unit, a personal user, a joint partnership with another business and more. -If you are using [Dedicated Cloud](../hosting-options/dedicated_cloud.md) or a [Self-managed instance](../hosting-options/self-managed.md), it corresponds to one organization. Your company may have more than one of Dedicated Cloud or Self-managed instances to map to different business units or departments, though that is strictly an optional way to manage AI practioners across your businesses or departments. +If you are using [Dedicated Cloud](../hosting-options/dedicated_cloud/) or a [Self-managed instance](../hosting-options/self-managed/), it corresponds to one organization. Your company may have more than one Dedicated Cloud or Self-managed instance to map to different business units or departments, though that is strictly an optional way to manage AI practitioners across your businesses or departments. -For more information, see [Manage orrganizations](./manage-organization.md). +For more information, see [Manage organizations](./manage-organization/). ## Team A *Team* is a subscope within the organization, that maps to a business unit / function, a department, or a project team within your company. You may have more than one team in your organization depending on your deployment type and pricing plan. AI projects are organized within the context of a team. Access control within a team is managed by team admins, who may or may not be admins at the parent organization level. For more information, see [Add and manage teams](./manage-organization.md#add-an ## Project A *Project* is a subscope within a team, that maps to an actual AI project with specific intended outcomes. You may have more than one project within a team. Each project has a visibility mode which determines who can access it. -Every project is comprised of [Workspaces](../../track/workspaces.md) and [Reports](../../reports/intro.md), and is linked to relevant [Artifacts](../../artifacts/intro.md), [Sweeps](../../sweeps/intro.md), [Launch Jobs](../../launch/intro.md) and [Automations](../../artifacts/project-scoped-automations.md). \ No newline at end of file +Every project is comprised of [Workspaces](../../track/workspaces/) and [Reports](../../reports/intro/), and is linked to relevant [Artifacts](../../artifacts/intro/), [Sweeps](../../sweeps/intro/), [Launch Jobs](../../launch/intro/) and [Automations](../../artifacts/project-scoped-automations/). \ No newline at end of file diff --git a/content/guides/hosting/iam/access-management/_index.md b/content/guides/hosting/iam/access-management/_index.md index bd94ec5c5..5580845ff 100644 --- a/content/guides/hosting/iam/access-management/_index.md +++ b/content/guides/hosting/iam/access-management/_index.md @@ -37,4 +37,4 @@ Define the scope of a W&B project to limit who can view, edit, and submit W&B ru An organization admin, team admin, or the owner of a project can both set and edit a project's visibility. -For more information, see [Project visibility](./restricted-projects.md). \ No newline at end of file +For more information, see [Project visibility](./restricted-projects/).
\ No newline at end of file diff --git a/content/guides/hosting/iam/access-management/manage-organization.md b/content/guides/hosting/iam/access-management/manage-organization.md index b27b08c8d..fb94903eb 100644 --- a/content/guides/hosting/iam/access-management/manage-organization.md +++ b/content/guides/hosting/iam/access-management/manage-organization.md @@ -14,7 +14,7 @@ As a team administrator you can [manage teams](#add-and-manage-teams). The following workflow applies to users with instance administrator roles. Reach out to an administrator in your organization if you believe you should have instance administrator permissions. {{% /alert %}} -If you are looking to simplify user management in your organization, refer to [Automate user and team management](./automate_iam.md). +If you are looking to simplify user management in your organization, refer to [Automate user and team management](./automate_iam/). @@ -85,7 +85,7 @@ A W&B user with matching email domain can sign in to your W&B Organization with {{% alert title="Enable SSO for authentication" %}} W&B strongly recommends and encourages that users authenticate using Single Sign-On (SSO). Reach out to your W&B team to enable SSO for your organization. -To learn more about how to setup SSO with Dedicated cloud or Self-managed instances, refer to [SSO with OIDC](./sso.md) or [SSO with LDAP](./ldap.md).{{% /alert %}} +To learn more about how to setup SSO with Dedicated cloud or Self-managed instances, refer to [SSO with OIDC](./sso/) or [SSO with LDAP](./ldap/).{{% /alert %}} W&B assigned auto-provisioning users "Member" roles by default. You can change the role of auto-provisioned users at any time. @@ -226,7 +226,7 @@ Use your organization's dashboard to create and manage teams within your organi - Manage team storage with the team's dashboard at `https://wandb.ai/`. - + ### Create a team diff --git a/content/guides/hosting/iam/advanced_env_vars.md b/content/guides/hosting/iam/advanced_env_vars.md index 865d82855..7cecd260f 100644 --- a/content/guides/hosting/iam/advanced_env_vars.md +++ b/content/guides/hosting/iam/advanced_env_vars.md @@ -6,7 +6,7 @@ menu: title: Advanced IAM configuration --- -In addition to basic [environment variables](../env-vars.md), you can use environment variables to configure IAM options for your [Dedicated Cloud](../hosting-options/dedicated_cloud.md) or [Self-managed](../hosting-options/self-managed.md) instance. +In addition to basic [environment variables](../env-vars/), you can use environment variables to configure IAM options for your [Dedicated Cloud](../hosting-options/dedicated_cloud/) or [Self-managed](../hosting-options/self-managed/) instance. Choose any of the following environment variables for your instance depending on your IAM needs. diff --git a/content/guides/hosting/iam/authentication/identity_federation.md b/content/guides/hosting/iam/authentication/identity_federation.md index f22fe5097..8e41e3a1c 100644 --- a/content/guides/hosting/iam/authentication/identity_federation.md +++ b/content/guides/hosting/iam/authentication/identity_federation.md @@ -49,7 +49,7 @@ As part of the workflow to exchange the JWT for a W&B access token and then acce * The JWT signature is verified using the JWKS at the W&B organization level. This is the first line of defense, and if this fails, that means there's a problem with your JWKS or how your JWT is signed. * The `iss` claim in the JWT should be equal to the issuer URL configured at the organization level. 
* The `sub` claim in the JWT should be equal to the user's email address as configured in the W&B organization. -* The `aud` claim in the JWT should be equal to the name of the W&B organization which houses the project that you are accessing as part of your AI workflow. In case of [Dedicated Cloud](../hosting-options/dedicated_cloud.md) or [Self-managed](../hosting-options/self-managed.md) instances, you could configure an instance-level environment variable `SKIP_AUDIENCE_VALIDATION` to `true` to skip validation of the audience claim, or use `wandb` as the audience. +* The `aud` claim in the JWT should be equal to the name of the W&B organization which houses the project that you are accessing as part of your AI workflow. In case of [Dedicated Cloud](../hosting-options/dedicated_cloud/) or [Self-managed](../hosting-options/self-managed/) instances, you could configure an instance-level environment variable `SKIP_AUDIENCE_VALIDATION` to `true` to skip validation of the audience claim, or use `wandb` as the audience. * The `exp` claim in the JWT is checked to see if the token is valid or has expired and needs to be refreshed. ## External service accounts diff --git a/content/guides/hosting/iam/authentication/service-accounts.md b/content/guides/hosting/iam/authentication/service-accounts.md index 32ce5d354..d2b396f0f 100644 --- a/content/guides/hosting/iam/authentication/service-accounts.md +++ b/content/guides/hosting/iam/authentication/service-accounts.md @@ -11,10 +11,10 @@ A service account represents a non-human or machine user that can automatically A service account's API key allows the caller to read from or write to projects within the service account's scope. -Service accounts allow for centralized management of workflows by multiple users or teams, to automate experiment tracking for W&B Models or to log traces for W&B Weave. You have the option to associate a human user's identity with a workflow managed by a service account, by using either of the [environment variables](../../track/environment-variables.md) `WANDB_USERNAME` or `WANDB_USER_EMAIL`. +Service accounts allow for centralized management of workflows by multiple users or teams, to automate experiment tracking for W&B Models or to log traces for W&B Weave. You have the option to associate a human user's identity with a workflow managed by a service account, by using either of the [environment variables](../../track/environment-variables/) `WANDB_USERNAME` or `WANDB_USER_EMAIL`. {{% alert %}} -Service accounts are available on [Dedicated Cloud](../hosting-options/dedicated_cloud.md), [Self-managed instances](../hosting-options/self-managed.md) with an enterprise license, and enterprise accounts in [SaaS Cloud](../hosting-options/saas_cloud.md). +Service accounts are available on [Dedicated Cloud](../hosting-options/dedicated_cloud/), [Self-managed instances](../hosting-options/self-managed/) with an enterprise license, and enterprise accounts in [SaaS Cloud](../hosting-options/saas_cloud/). 
{{% /alert %}} ## Organization-scoped service accounts diff --git a/content/guides/hosting/iam/authentication/sso.md b/content/guides/hosting/iam/authentication/sso.md index 2e6b20d93..a7a0b815f 100644 --- a/content/guides/hosting/iam/authentication/sso.md +++ b/content/guides/hosting/iam/authentication/sso.md @@ -20,9 +20,9 @@ The ID token is a JWT that contains the user's identity information, such as the In the context of W&B Server, access tokens authorize requests to APIs on behalf of the user, but since W&B Server’s primary concern is user authentication and identity, it only requires the ID token. -You can use environment variables to [configure IAM options](advanced_env_vars.md) for your [Dedicated cloud](../hosting-options/dedicated_cloud.md) or [Self-managed](../hosting-options/self-managed.md) instance. +You can use environment variables to [configure IAM options](advanced_env_vars/) for your [Dedicated cloud](../hosting-options/dedicated_cloud/) or [Self-managed](../hosting-options/self-managed/) instance. -To assist with configuring Identity Providers for [Dedicated cloud](../hosting-options/dedicated_cloud.md) or [Self-managed](../hosting-options/self-managed.md) W&B Server installations, follow these guidelines to follow for various IdPs. If you’re using the SaaS version of W&B, reach out to [support@wandb.com](mailto:support@wandb.com) for assistance in configuring an Auth0 tenant for your organization. +To assist with configuring Identity Providers for [Dedicated cloud](../hosting-options/dedicated_cloud/) or [Self-managed](../hosting-options/self-managed/) W&B Server installations, follow these guidelines to follow for various IdPs. If you’re using the SaaS version of W&B, reach out to [support@wandb.com](mailto:support@wandb.com) for assistance in configuring an Auth0 tenant for your organization. {{< tabpane text=true >}} {{% tab header="Cognito" value="cognito" %}} @@ -176,7 +176,7 @@ To set up SSO, you need administrator privileges and the following information: Should your IdP require a OIDC Client Secret, specify it with the environment variable OIDC_SECRET. {{% /alert %}} -You can configure SSO using either the W&B Server UI or by passing [environment variables](../env-vars.md) to the `wandb/local` pod. The environment variables take precedence over UI. +You can configure SSO using either the W&B Server UI or by passing [environment variables](../env-vars/) to the `wandb/local` pod. The environment variables take precedence over UI. {{% alert %}} If you're unable to log in to your instance after configuring SSO, you can restart the instance with the `LOCAL_RESTORE=true` environment variable set. This outputs a temporary password to the containers logs and disables SSO. Once you've resolved any issues with SSO, you must remove that environment variable to enable SSO again. @@ -184,7 +184,7 @@ If you're unable to log in to your instance after configuring SSO, you can resta {{< tabpane text=true >}} {{% tab header="System Console" value="console" %}} -The System Console is the successor to the System Settings page. It is available with the [W&B Kubernetes Operator](../operator.md) based deployment. +The System Console is the successor to the System Settings page. It is available with the [W&B Kubernetes Operator](../operator/) based deployment. 1. Refer to [Access the W&B Management Console](../operator.md#access-the-wb-management-console). 
diff --git a/content/guides/hosting/iam/automate_iam.md b/content/guides/hosting/iam/automate_iam.md
index f4d57fcaf..87da70dd4 100644
--- a/content/guides/hosting/iam/automate_iam.md
+++ b/content/guides/hosting/iam/automate_iam.md
@@ -54,7 +54,7 @@ Update the inherited role for a custom role with the `PUT Role` endpoint. This o
## W&B Python SDK API
-Just like how SCIM API allows you to automate user and team management, you can also use some of the methods available in the [W&B Python SDK API](../../../ref/python/public-api/api.md) for that purpose. Keep a note of the following methods:
+Just as the SCIM API allows you to automate user and team management, you can also use some of the methods available in the [W&B Python SDK API](../../../ref/python/public-api/api/) for that purpose. Take note of the following methods:
| Method name | Purpose |
|-------------|---------|
diff --git a/content/guides/hosting/iam/scim.md b/content/guides/hosting/iam/scim.md
index 5c2f474d3..405ac5a1e 100644
--- a/content/guides/hosting/iam/scim.md
+++ b/content/guides/hosting/iam/scim.md
@@ -12,7 +12,7 @@ The System for Cross-domain Identity Management (SCIM) API allows instance or or
The SCIM API is accessible at `/scim/` and supports the `/Users` and `/Groups` endpoints with a subset of the fields found in the [RFC 7643 protocol](https://www.rfc-editor.org/rfc/rfc7643). It additionally includes the `/Roles` endpoints, which are not part of the official SCIM schema. W&B adds the `/Roles` endpoints to support automated management of custom roles in W&B organizations.
{{% alert %}}
-SCIM API applies to all hosting options including [Dedicated Cloud](../hosting-options/dedicated_cloud.md), [Self-managed instances](../hosting-options/self-managed.md), and [SaaS Cloud](../hosting-options/saas_cloud.md). In SaaS Cloud, the organization admin must configure the default organization in user settings to ensure that the SCIM API requests go to the right organization. The setting is available in the section `SCIM API Organization` within user settings.
+The SCIM API applies to all hosting options, including [Dedicated Cloud](../hosting-options/dedicated_cloud/), [Self-managed instances](../hosting-options/self-managed/), and [SaaS Cloud](../hosting-options/saas_cloud/). In SaaS Cloud, the organization admin must configure the default organization in user settings to ensure that SCIM API requests go to the right organization. The setting is available in the `SCIM API Organization` section within user settings.
{{% /alert %}}
## Authentication
@@ -27,7 +27,7 @@ The SCIM user resource maps to W&B users.
- **Endpoint:** **`/scim/Users/{id}`**
- **Method**: GET
-- **Description**: Retrieve the information for a specific user in your [SaaS Cloud](../hosting-options/saas_cloud.md) organization or your [Dedicated Cloud](../hosting-options/dedicated_cloud.md) or [Self-managed](../hosting-options/self-managed.md) instance by providing the user's unique ID.
+- **Description**: Retrieve the information for a specific user in your [SaaS Cloud](../hosting-options/saas_cloud/) organization or your [Dedicated Cloud](../hosting-options/dedicated_cloud/) or [Self-managed](../hosting-options/self-managed/) instance by providing the user's unique ID.
- **Request Example**: ```bash @@ -68,7 +68,7 @@ GET /scim/Users/abc - **Endpoint:** **`/scim/Users`** - **Method**: GET -- **Description**: Retrieve the list of all users in your [SaaS Cloud](../hosting-options/saas_cloud.md) organization or your [Dedicated Cloud](../hosting-options/dedicated_cloud.md) or [Self-managed](../hosting-options/self-managed.md) instance. +- **Description**: Retrieve the list of all users in your [SaaS Cloud](../hosting-options/saas_cloud/) organization or your [Dedicated Cloud](../hosting-options/dedicated_cloud/) or [Self-managed](../hosting-options/self-managed/) instance. - **Request Example**: ```bash @@ -180,7 +180,7 @@ POST /scim/Users - **Endpoint**: **`/scim/Users/{id}`** - **Method**: DELETE -- **Description**: Fully delete a user from your [SaaS Cloud](../hosting-options/saas_cloud.md) organization or your [Dedicated Cloud](../hosting-options/dedicated_cloud.md) or [Self-managed](../hosting-options/self-managed.md) instance by providing the user's unique ID. Use the [Create user](#create-user) API to add the user again to the organization or instance if needed. +- **Description**: Fully delete a user from your [SaaS Cloud](../hosting-options/saas_cloud/) organization or your [Dedicated Cloud](../hosting-options/dedicated_cloud/) or [Self-managed](../hosting-options/self-managed/) instance by providing the user's unique ID. Use the [Create user](#create-user) API to add the user again to the organization or instance if needed. - **Request Example**: {{% alert %}} @@ -201,7 +201,7 @@ DELETE /scim/Users/abc - **Endpoint**: **`/scim/Users/{id}`** - **Method**: PATCH -- **Description**: Temporarily deactivate a user in your [Dedicated Cloud](../hosting-options/dedicated_cloud.md) or [Self-managed](../hosting-options/self-managed.md) instance by providing the user's unique ID. Use the [Reactivate user](#reactivate-user) API to reactivate the user when needed. +- **Description**: Temporarily deactivate a user in your [Dedicated Cloud](../hosting-options/dedicated_cloud/) or [Self-managed](../hosting-options/self-managed/) instance by providing the user's unique ID. Use the [Reactivate user](#reactivate-user) API to reactivate the user when needed. - **Supported Fields**: | Field | Type | Required | @@ -210,7 +210,7 @@ DELETE /scim/Users/abc | value | Object | Object `{"active": false}` indicating that the user should be deactivated. | {{% alert %}} -User deactivation and reactivation operations are not supported in [SaaS Cloud](../hosting-options/saas_cloud.md). +User deactivation and reactivation operations are not supported in [SaaS Cloud](../hosting-options/saas_cloud/). {{% /alert %}} - **Request Example**: @@ -266,7 +266,7 @@ This returns the User object. - **Endpoint**: **`/scim/Users/{id}`** - **Method**: PATCH -- **Description**: Reactivate a deactivated user in your [Dedicated Cloud](../hosting-options/dedicated_cloud.md) or [Self-managed](../hosting-options/self-managed.md) instance by providing the user's unique ID. +- **Description**: Reactivate a deactivated user in your [Dedicated Cloud](../hosting-options/dedicated_cloud/) or [Self-managed](../hosting-options/self-managed/) instance by providing the user's unique ID. - **Supported Fields**: | Field | Type | Required | @@ -275,7 +275,7 @@ This returns the User object. | value | Object | Object `{"active": true}` indicating that the user should be reactivated. | {{% alert %}} -User deactivation and reactivation operations are not supported in [SaaS Cloud](../hosting-options/saas_cloud.md). 
+User deactivation and reactivation operations are not supported in [SaaS Cloud](../hosting-options/saas_cloud/). {{% /alert %}} - **Request Example**: @@ -331,7 +331,7 @@ This returns the User object. - **Endpoint**: **`/scim/Users/{id}`** - **Method**: PATCH -- **Description**: Assign an organization-level role to a user. The role can be one of `admin`, `viewer` or `member` as described [here](./manage-organization.md#invite-a-user). For [SaaS Cloud](../hosting-options/saas_cloud.md), ensure that you have configured the correct organization for SCIM API in user settings. +- **Description**: Assign an organization-level role to a user. The role can be one of `admin`, `viewer` or `member` as described [here](./manage-organization.md#invite-a-user). For [SaaS Cloud](../hosting-options/saas_cloud/), ensure that you have configured the correct organization for SCIM API in user settings. - **Supported Fields**: | Field | Type | Required | @@ -400,7 +400,7 @@ This returns the User object. - **Endpoint**: **`/scim/Users/{id}`** - **Method**: PATCH -- **Description**: Assign a team-level role to a user. The role can be one of `admin`, `viewer`, `member` or a custom role as described [here](./manage-organization.md#assign-or-update-a-team-members-role). For [SaaS Cloud](../hosting-options/saas_cloud.md), ensure that you have configured the correct organization for SCIM API in user settings. +- **Description**: Assign a team-level role to a user. The role can be one of `admin`, `viewer`, `member` or a custom role as described [here](./manage-organization.md#assign-or-update-a-team-members-role). For [SaaS Cloud](../hosting-options/saas_cloud/), ensure that you have configured the correct organization for SCIM API in user settings. - **Supported Fields**: | Field | Type | Required | diff --git a/content/guides/hosting/monitoring-usage/audit-logging.md b/content/guides/hosting/monitoring-usage/audit-logging.md index 3a03cd712..c01b1314b 100644 --- a/content/guides/hosting/monitoring-usage/audit-logging.md +++ b/content/guides/hosting/monitoring-usage/audit-logging.md @@ -11,12 +11,12 @@ Use W&B audit logs to track user activity within your organization and to confor | W&B Platform Deployment type | Audit logs access mechanism | |----------------------------|--------------------------------| -| [Self-managed](../hosting-options/self-managed.md) | Synced to instance-level bucket every 10 minutes. Also available using [the API](#fetch-audit-logs-using-api). | -| [Dedicated Cloud](../hosting-options/dedicated_cloud.md) with [secure storage connector (BYOB)](../data-security/secure-storage-connector.md) | Synced to instance-level bucket (BYOB) every 10 minutes. Also available using [the API](#fetch-audit-logs-using-api). | -| [Dedicated Cloud](../hosting-options/dedicated_cloud.md) with W&B managed storage (without BYOB) | Only available using [the API](#fetch-audit-logs-using-api). | +| [Self-managed](../hosting-options/self-managed/) | Synced to instance-level bucket every 10 minutes. Also available using [the API](#fetch-audit-logs-using-api). | +| [Dedicated Cloud](../hosting-options/dedicated_cloud/) with [secure storage connector (BYOB)](../data-security/secure-storage-connector/) | Synced to instance-level bucket (BYOB) every 10 minutes. Also available using [the API](#fetch-audit-logs-using-api). | +| [Dedicated Cloud](../hosting-options/dedicated_cloud/) with W&B managed storage (without BYOB) | Only available using [the API](#fetch-audit-logs-using-api). 
|
{{% alert %}}
-Audit logs are not available for [SaaS Cloud](../hosting-options/saas_cloud.md).
+Audit logs are not available for [SaaS Cloud](../hosting-options/saas_cloud/).
{{% /alert %}}
Once you have access to your audit logs, analyze them using your preferred tools, such as [Pandas](https://pandas.pydata.org/docs/index.html), [Amazon Redshift](https://aws.amazon.com/redshift/), [Google BigQuery](https://cloud.google.com/bigquery), [Microsoft Fabric](https://www.microsoft.com/en-us/microsoft-fabric), and more. You may need to transform the JSON-formatted audit logs into a format relevant to the tool before analysis. Information on how to transform your audit logs for specific tools is outside the scope of W&B documentation.
@@ -25,7 +25,7 @@ Once you've access to your audit logs, analyze those using your preferred tools,
**Audit Log Retention:** If a compliance, security, or risk team in your organization requires audit logs to be retained for a specific period of time, W&B recommends periodically transferring the logs from your instance-level bucket to long-term retention storage. If you're instead using the API to access the audit logs, you can implement a simple script that runs periodically (like daily or every few days) to fetch any logs that may have been generated since the time of the last script run, and store them in short-term storage for analysis or transfer them directly to long-term retention storage.
{{% /alert %}}
-HIPAA compliance requires that you retain audit logs for a minimum of 6 years. For HIPAA-compliant [Dedicated Cloud](../hosting-options/dedicated_cloud.md) instances with [BYOB](../data-security/secure-storage-connector.md), you must configure guardrails for your managed storage including any long-term retention storage, to ensure that no internal or external user can delete audit logs before the end of the mandatory retention period.
+HIPAA compliance requires that you retain audit logs for a minimum of 6 years. For HIPAA-compliant [Dedicated Cloud](../hosting-options/dedicated_cloud/) instances with [BYOB](../data-security/secure-storage-connector/), you must configure guardrails for your managed storage, including any long-term retention storage, to ensure that no internal or external user can delete audit logs before the end of the mandatory retention period.
## Audit log schema
The following table lists all the different keys that might be present in your audit logs. Each log contains only the assets relevant to the corresponding action, and others are omitted from the log.
diff --git a/content/guides/hosting/monitoring-usage/org_dashboard.md b/content/guides/hosting/monitoring-usage/org_dashboard.md
index 289001408..9ab7c8163 100644
--- a/content/guides/hosting/monitoring-usage/org_dashboard.md
+++ b/content/guides/hosting/monitoring-usage/org_dashboard.md
@@ -7,7 +7,7 @@ title: View organization dashboard
---
{{% alert color="secondary" %}}
-Organization dashboard is only available with [Dedicated Cloud](../hosting-options/dedicated_cloud.md) and [Self-managed instances](../hosting-options/self-managed.md).
+The organization dashboard is only available with [Dedicated Cloud](../hosting-options/dedicated_cloud/) and [Self-managed instances](../hosting-options/self-managed/).
{{% /alert %}} diff --git a/content/guides/hosting/monitoring-usage/prometheus-logging.md b/content/guides/hosting/monitoring-usage/prometheus-logging.md index 8d799bbc0..4f553f98e 100644 --- a/content/guides/hosting/monitoring-usage/prometheus-logging.md +++ b/content/guides/hosting/monitoring-usage/prometheus-logging.md @@ -10,7 +10,7 @@ weight: 2 Use [Prometheus](https://prometheus.io/docs/introduction/overview/) with W&B Server. Prometheus installs are exposed as a [kubernetes ClusterIP service](https://github.com/wandb/terraform-kubernetes-wandb/blob/main/main.tf#L225). {{% alert color="secondary" %}} -Prometheus monitoring is only available with [Self-managed instances](../hosting-options/self-managed.md). +Prometheus monitoring is only available with [Self-managed instances](../hosting-options/self-managed/). {{% /alert %}} diff --git a/content/guides/hosting/privacy-settings.md b/content/guides/hosting/privacy-settings.md index d57cc6acc..3940c5923 100644 --- a/content/guides/hosting/privacy-settings.md +++ b/content/guides/hosting/privacy-settings.md @@ -22,7 +22,7 @@ Team admins can configure privacy settings for their respective teams from withi * Allow any team member to invite other members (not just admins) * Turn off public sharing to outside of team for reports in private projects. This turns off existing magic links. * Allow users with matching organization email domain to join this team. - * This setting is applicable only to [SaaS Cloud](./hosting-options/saas_cloud.md). It's not available in [Dedicated Cloud](./hosting-options/dedicated_cloud.md) or [Self-managed](./hosting-options/self-managed.md) instances. + * This setting is applicable only to [SaaS Cloud](./hosting-options/saas_cloud/). It's not available in [Dedicated Cloud](./hosting-options/dedicated_cloud/) or [Self-managed](./hosting-options/self-managed/) instances. * Enable code saving by default. ## Enforce privacy settings for all teams @@ -32,13 +32,13 @@ Organization admins can enforce privacy settings for all teams in their organiza * Enforce team visibility restrictions * Enable this option to hide all teams from non-members * Enforce privacy for future projects - * Enable this option to enforce all future projects in all teams to be private or [restricted](./iam/restricted-projects.md) + * Enable this option to enforce all future projects in all teams to be private or [restricted](./iam/restricted-projects/) * Enforce invitation control * Enable this option to prevent non-admins from inviting members to any team * Enforce report sharing control * Enable this option to turn off public sharing of reports in private projects and deactivate existing magic links * Enforce team self joining restrictions * Enable this option to restrict users with matching organization email domain from self-joining any team - * This setting is applicable only to [SaaS Cloud](./hosting-options/saas_cloud.md). It's not available in [Dedicated Cloud](./hosting-options/dedicated_cloud.md) or [Self-managed](./hosting-options/self-managed.md) instances. + * This setting is applicable only to [SaaS Cloud](./hosting-options/saas_cloud/). It's not available in [Dedicated Cloud](./hosting-options/dedicated_cloud/) or [Self-managed](./hosting-options/self-managed/) instances. 
* Enforce default code saving restrictions * Enable this option to turn off code saving by default for all teams \ No newline at end of file diff --git a/content/guides/hosting/server-release-process.md b/content/guides/hosting/server-release-process.md index cc8617af7..dc3f0d3c6 100644 --- a/content/guides/hosting/server-release-process.md +++ b/content/guides/hosting/server-release-process.md @@ -16,7 +16,7 @@ W&B Server releases apply to the **Dedicated Cloud** and **Self-managed** deploy | Patch | Patch releases include critical and high severity bug fixes. Patches are only rarely released, as needed. | | Feature | The feature release targets a specific release date for a new product feature, which occasionally happens before the standard monthly release. | -All releases are immediately deployed to all **Dedicated Cloud** instances once the acceptance testing phase is complete. It keeps those managed instances fully updated, making the latest features and fixes available to relevant customers. Customers with **Self-managed** instances are responsible for the [update process](./server-upgrade-process.md) on their own schedule, where they can use [the latest Docker image](https://hub.docker.com/r/wandb/local). Refer to [release support and end of life](#release-support-and-end-of-life-policy). +All releases are immediately deployed to all **Dedicated Cloud** instances once the acceptance testing phase is complete. It keeps those managed instances fully updated, making the latest features and fixes available to relevant customers. Customers with **Self-managed** instances are responsible for the [update process](./server-upgrade-process/) on their own schedule, where they can use [the latest Docker image](https://hub.docker.com/r/wandb/local). Refer to [release support and end of life](#release-support-and-end-of-life-policy). {{% alert %}} - Some advanced features are available only with the enterprise license. So even if you get the latest Docker image but don't have an enterprise license, you would not be able to take advantage of the relevant advanced capabilities. diff --git a/content/guides/integrations/_index.md b/content/guides/integrations/_index.md index 57a7ab5b5..5e6d991e2 100644 --- a/content/guides/integrations/_index.md +++ b/content/guides/integrations/_index.md @@ -9,7 +9,7 @@ cascade: - url: guides/integrations/:filename --- -W&B integrations make it fast and easy to set up experiment tracking and data versioning inside existing projects. Check out integrations for ML frameworks such as [PyTorch](pytorch.md), ML libraries such as [Hugging Face](huggingface.md), or cloud services such as [Amazon SageMaker](other/sagemaker.md). +W&B integrations make it fast and easy to set up experiment tracking and data versioning inside existing projects. Check out integrations for ML frameworks such as [PyTorch](pytorch/), ML libraries such as [Hugging Face](huggingface/), or cloud services such as [Amazon SageMaker](other/sagemaker/). 
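Whichever integration you choose, the underlying tracking calls are the same. The following is a minimal, framework-agnostic sketch; the project name and logged values are illustrative only and are not taken from the docs above:

```python
import wandb

# Start a run; W&B creates the project automatically if it doesn't exist yet.
run = wandb.init(project="my-project", config={"learning_rate": 1e-3, "epochs": 3})

for epoch in range(run.config["epochs"]):
    train_loss = 1.0 / (epoch + 1)  # stand-in for a real training step
    run.log({"epoch": epoch, "train/loss": train_loss})

run.finish()  # mark the run as finished so the dashboard shows its final state
```

Framework integrations such as the ones below wrap this same pattern for you, typically through a callback or logger object.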
diff --git a/content/guides/integrations/add-wandb-to-any-library.md b/content/guides/integrations/add-wandb-to-any-library.md index a605df273..389876a62 100644 --- a/content/guides/integrations/add-wandb-to-any-library.md +++ b/content/guides/integrations/add-wandb-to-any-library.md @@ -98,7 +98,7 @@ wandb.login() {{% /tab %}} {{% tab header="Environment Variable" value="environment" %}} -Set a [W&B environment variable](../track/environment-variables.md) for the API key: +Set a [W&B environment variable](../track/environment-variables/) for the API key: ```bash export WANDB_API_KEY=$YOUR_API_KEY @@ -319,7 +319,7 @@ For frameworks supporting distributed environments, you can adapt any of the fol * Detect which is the "main" process and only use `wandb` there. Any required data coming from other processes must be routed to the main process first. (This workflow is encouraged). * Call `wandb` in every process and auto-group them by giving them all the same unique `group` name. -See [Log Distributed Training Experiments](../track/log/distributed-training.md) for more details. +See [Log Distributed Training Experiments](../track/log/distributed-training/) for more details. ### Logging Model Checkpoints And More diff --git a/content/guides/integrations/farama-gymnasium.md b/content/guides/integrations/farama-gymnasium.md index 7f6a7c2ec..be45fd7b0 100644 --- a/content/guides/integrations/farama-gymnasium.md +++ b/content/guides/integrations/farama-gymnasium.md @@ -8,9 +8,9 @@ title: Farama Gymnasium weight: 90 --- -If you're using [Farama Gymnasium](https://gymnasium.farama.org/#) we will automatically log videos of your environment generated by `gymnasium.wrappers.Monitor`. Just set the `monitor_gym` keyword argument to [`wandb.init`](../../../ref/python/init.md) to `True`. +If you're using [Farama Gymnasium](https://gymnasium.farama.org/#) we will automatically log videos of your environment generated by `gymnasium.wrappers.Monitor`. Just set the `monitor_gym` keyword argument to [`wandb.init`](../../../ref/python/init/) to `True`. -Our gymnasium integration is very light. We simply [look at the name of the video file](https://github.com/wandb/wandb/blob/c5fe3d56b155655980611d32ef09df35cd336872/wandb/integration/gym/__init__.py#LL69C67-L69C67) being logged from `gymnasium` and name it after that or fall back to `"videos"` if we don't find a match. If you want more control, you can always just manually [log a video](../../track/log/media.md). +Our gymnasium integration is very light. We simply [look at the name of the video file](https://github.com/wandb/wandb/blob/c5fe3d56b155655980611d32ef09df35cd336872/wandb/integration/gym/__init__.py#LL69C67-L69C67) being logged from `gymnasium` and name it after that or fall back to `"videos"` if we don't find a match. If you want more control, you can always just manually [log a video](../../track/log/media/). Check out this [report](https://wandb.ai/raph-test/cleanrltest/reports/Mario-Bros-but-with-AI-Gymnasium-and-CleanRL---Vmlldzo0NTcxNTcw) to learn more on how to use Gymnasium with the CleanRL library. diff --git a/content/guides/integrations/fastai/_index.md b/content/guides/integrations/fastai/_index.md index 96715946c..a7a2c9f7d 100644 --- a/content/guides/integrations/fastai/_index.md +++ b/content/guides/integrations/fastai/_index.md @@ -51,7 +51,7 @@ If you're using **fastai** to train your models, W&B has an easy integration usi ``` {{% alert %}} -If you use version 1 of Fastai, refer to the [Fastai v1 docs](v1.md). 
+If you use version 1 of Fastai, refer to the [Fastai v1 docs](v1/). {{% /alert %}} ## WandbCallback Arguments diff --git a/content/guides/integrations/fastai/v1.md b/content/guides/integrations/fastai/v1.md index adf76f77a..dfc919a19 100644 --- a/content/guides/integrations/fastai/v1.md +++ b/content/guides/integrations/fastai/v1.md @@ -8,7 +8,7 @@ title: fastai v1 {{% alert %}} This documentation is for fastai v1. -If you use the current version of fastai, you should refer to [fastai page](../intro.md). +If you use the current version of fastai, you should refer to [fastai page](../intro/). {{% /alert %}} For scripts using fastai v1, we have a callback that can automatically log model topology, losses, metrics, weights, gradients, sample predictions and best trained model. diff --git a/content/guides/integrations/huggingface.md b/content/guides/integrations/huggingface.md index 63075ecca..a59db0465 100644 --- a/content/guides/integrations/huggingface.md +++ b/content/guides/integrations/huggingface.md @@ -37,7 +37,7 @@ If you'd rather dive straight into working code, check out this [Google Colab](h 3. To log in with your training script, you'll need to sign in to you account at www.wandb.ai, then **you will find your API key on the** [**Authorize page**](https://wandb.ai/authorize)**.** -If you are using Weights and Biases for the first time you might want to check out our [**quickstart**](../../quickstart.md) +If you are using Weights and Biases for the first time you might want to check out our [**quickstart**](../../quickstart/) {{< tabpane text=true >}} @@ -151,7 +151,7 @@ Using TensorFlow? Just swap the PyTorch `Trainer` for the TensorFlow `TFTrainer` ### 4. Turn on model checkpointing -Using Weights & Biases' [Artifacts](../artifacts/intro.md), you can store up to 100GB of models and datasets for free and then use the Weights & Biases [Model Registry](../model_registry/intro.md) to register models to prepare them for staging or deployment in your production environment. +Using Weights & Biases' [Artifacts](../artifacts/intro/), you can store up to 100GB of models and datasets for free and then use the Weights & Biases [Model Registry](../model_registry/intro/) to register models to prepare them for staging or deployment in your production environment. Logging your Hugging Face model checkpoints to Artifacts can be done by setting the `WANDB_LOG_MODEL` environment variable to one of `end` or `checkpoint` or `false`: @@ -200,9 +200,9 @@ However, If you pass a [`run_name`](https://huggingface.co/docs/transformers/mai {{% /alert %}} #### W&B Model Registry -Once you have logged your checkpoints to Artifacts, you can then register your best model checkpoints and centralize them across your team using the Weights & Biases **[Model Registry](../model_registry/intro.md)**. Here you can organize your best models by task, manage model lifecycle, facilitate easy tracking and auditing throughout the ML lifecyle, and [automate](/guides/artifacts/project-scoped-automations/#create-a-webhook-automation) downstream actions with webhooks or jobs. +Once you have logged your checkpoints to Artifacts, you can then register your best model checkpoints and centralize them across your team using the Weights & Biases **[Model Registry](../model_registry/intro/)**. 
Here you can organize your best models by task, manage model lifecycle, facilitate easy tracking and auditing throughout the ML lifecycle, and [automate](/guides/artifacts/project-scoped-automations/#create-a-webhook-automation) downstream actions with webhooks or jobs.
-See the [Model Registry](../model_registry/intro.md) documentation for how to link a model Artifact to the Model Registry.
+See the [Model Registry](../model_registry/intro/) documentation for how to link a model Artifact to the Model Registry.
### 5. Visualise evaluation outputs during training
@@ -231,14 +231,14 @@ wandb.finish()
### 7. Visualize your results
-Once you have logged your training results you can explore your results dynamically in the [W&B Dashboard](../track/workspaces.md). It's easy to compare across dozens of runs at once, zoom in on interesting findings, and coax insights out of complex data with flexible, interactive visualizations.
+Once you have logged your training results, you can explore them dynamically in the [W&B Dashboard](../track/workspaces/). It's easy to compare across dozens of runs at once, zoom in on interesting findings, and coax insights out of complex data with flexible, interactive visualizations.
## Advanced features and FAQs
### How do I save the best model?
If `load_best_model_at_end=True` is set in the `TrainingArguments` that are passed to the `Trainer`, then W&B will save the best performing model checkpoint to Artifacts.
-If you'd like to centralize all your best model versions across your team to organize them by ML task, stage them for production, bookmark them for further evaluation, or kick off downstream Model CI/CD processes then ensure you're saving your model checkpoints to Artifacts. Once logged to Artifacts, these checkpoints can then be promoted to the [Model Registry](../model_registry/intro.md).
+If you'd like to centralize all your best model versions across your team to organize them by ML task, stage them for production, bookmark them for further evaluation, or kick off downstream Model CI/CD processes, then ensure you're saving your model checkpoints to Artifacts. Once logged to Artifacts, these checkpoints can then be promoted to the [Model Registry](../model_registry/intro/).
### How do I load a saved model?
@@ -458,7 +458,7 @@ WANDB_SILENT=true
The `WandbCallback` that `Trainer` uses will call `wandb.init` under the hood when `Trainer` is initialized. You can alternatively set up your runs manually by calling `wandb.init` before the `Trainer` is initialized. This gives you full control over your W&B run configuration.
-An example of what you might want to pass to `init` is below. For more details on how to use `wandb.init`, [check out the reference documentation](../../ref/python/init.md).
+An example of what you might want to pass to `init` is below. For more details on how to use `wandb.init`, [check out the reference documentation](../../ref/python/init/).
```python
wandb.init(
diff --git a/content/guides/integrations/hydra.md b/content/guides/integrations/hydra.md
index a9bd826e5..f19fc0b18 100644
--- a/content/guides/integrations/hydra.md
+++ b/content/guides/integrations/hydra.md
@@ -43,7 +43,7 @@ def run_experiment(cfg):
## Troubleshoot multiprocessing
-If your process hangs when started, this may be caused by [this known issue](../../track/log/distributed-training.md). To solve this, try to changing wandb's multiprocessing protocol either by adding an extra settings parameter to \`wandb.init\` as:
+If your process hangs when started, this may be caused by [this known issue](../../track/log/distributed-training/). To solve this, try changing wandb's multiprocessing protocol, either by adding an extra settings parameter to \`wandb.init\`:
```python
wandb.init(settings=wandb.Settings(start_method="thread"))
@@ -57,7 +57,7 @@ $ export WANDB_START_METHOD=thread
## Optimize Hyperparameters
-[W&B Sweeps](../../sweeps/intro.md) is a highly scalable hyperparameter search platform, which provides interesting insights and visualization about W&B experiments with minimal requirements code real-estate. Sweeps integrates seamlessly with Hydra projects with no-coding requirements. The only thing needed is a configuration file describing the various parameters to sweep over as normal.
+[W&B Sweeps](../../sweeps/intro/) is a highly scalable hyperparameter search platform that provides useful insights and visualizations about W&B experiments while requiring minimal additional code. Sweeps integrates seamlessly with Hydra projects with no coding required. The only thing needed is a configuration file describing the various parameters to sweep over, as normal.
A simple example `sweep.yaml` file would be:
diff --git a/content/guides/integrations/keras.md b/content/guides/integrations/keras.md
index 33039ede7..824ea016c 100644
--- a/content/guides/integrations/keras.md
+++ b/content/guides/integrations/keras.md
@@ -258,7 +258,7 @@ See our [example repo](https://github.com/wandb/examples) for scripts, including
The `WandbCallback` class supports a wide variety of logging configuration options: specifying a metric to monitor, tracking of weights and gradients, logging of predictions on training_data and validation_data, and more.
-Check out [the reference documentation for the `keras.WandbCallback`](../../ref/python/integrations/keras/wandbcallback.md) for full details.
+Check out [the reference documentation for the `keras.WandbCallback`](../../ref/python/integrations/keras/wandbcallback/) for full details.
The `WandbCallback`
diff --git a/content/guides/integrations/kubeflow-pipelines-kfp.md b/content/guides/integrations/kubeflow-pipelines-kfp.md
index f5eb9f713..20b37b313 100644
--- a/content/guides/integrations/kubeflow-pipelines-kfp.md
+++ b/content/guides/integrations/kubeflow-pipelines-kfp.md
@@ -63,7 +63,7 @@ add = components.create_component_from_func(add)
### Pass environment variables to containers
-You may need to explicitly pass [environment variables](../../track/environment-variables.md) to your containers. For two-way linking, you should also set the environment variables `WANDB_KUBEFLOW_URL` to the base URL of your Kubeflow Pipelines instance. For example, `https://kubeflow.mysite.com`.
+You may need to explicitly pass [environment variables](../../track/environment-variables/) to your containers. For two-way linking, you should also set the environment variable `WANDB_KUBEFLOW_URL` to the base URL of your Kubeflow Pipelines instance. For example, `https://kubeflow.mysite.com`.
```python import os diff --git a/content/guides/integrations/langchain.md b/content/guides/integrations/langchain.md index 5550261d8..2d78f8dd4 100644 --- a/content/guides/integrations/langchain.md +++ b/content/guides/integrations/langchain.md @@ -10,6 +10,6 @@ weight: 180 ## Using LangChain in Weights & Biases -To use the Weights & Biases LangChain integration please see our [W&B Prompts Quickstart](./prompts/quickstart.md) +To use the Weights & Biases LangChain integration please see our [W&B Prompts Quickstart](./prompts/quickstart/) Or try our [Google Colab Jupyter notebook](http://wandb.me/prompts-quickstart) \ No newline at end of file diff --git a/content/guides/integrations/lightgbm.md b/content/guides/integrations/lightgbm.md index 159e29a2e..2bd3be48f 100644 --- a/content/guides/integrations/lightgbm.md +++ b/content/guides/integrations/lightgbm.md @@ -29,7 +29,7 @@ Looking for working code examples? Check out [our repository of examples on GitH ## Tuning your hyperparameters with Sweeps -Attaining the maximum performance out of models requires tuning hyperparameters, like tree depth and learning rate. Weights & Biases includes [Sweeps](../sweeps/intro.md), a powerful toolkit for configuring, orchestrating, and analyzing large hyperparameter testing experiments. +Attaining the maximum performance out of models requires tuning hyperparameters, like tree depth and learning rate. Weights & Biases includes [Sweeps](../sweeps/intro/), a powerful toolkit for configuring, orchestrating, and analyzing large hyperparameter testing experiments. To learn more about these tools and see an example of how to use Sweeps with XGBoost, check out this interactive Colab notebook. diff --git a/content/guides/integrations/lightning.md b/content/guides/integrations/lightning.md index 4c7ea1c9d..6916c4a8c 100644 --- a/content/guides/integrations/lightning.md +++ b/content/guides/integrations/lightning.md @@ -63,7 +63,7 @@ fabric.log_dict({"important_metric": important_metric}) 3. In your browser, find your API key on the [Authorize page](https://wandb.ai/authorize). -4. If you are using Weights and Biases for the first time you might want to check out our [**quickstart**](../../quickstart.md) +4. If you are using Weights and Biases for the first time you might want to check out our [**quickstart**](../../quickstart/) {{< tabpane text=true >}} {{% tab header="Command Line" value="cli" %}} @@ -617,7 +617,7 @@ The core integration is based on the [Lightning `loggers` API](https://pytorch-l ### What does the integration log without any additional code? -We'll save your model checkpoints to W&B, where you can view them or download them for use in future runs. We'll also capture [system metrics](../app/features/system-metrics.md), like GPU usage and network I/O, environment information, like hardware and OS information, [code state](../app/features/panels/code.md) (including git commit and diff patch, notebook contents and session history), and anything printed to the standard out. +We'll save your model checkpoints to W&B, where you can view them or download them for use in future runs. We'll also capture [system metrics](../app/features/system-metrics/), like GPU usage and network I/O, environment information, like hardware and OS information, [code state](../app/features/panels/code/) (including git commit and diff patch, notebook contents and session history), and anything printed to the standard out. ### What if I need to use `wandb.run` in my training setup? 
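One hedged sketch of how this is commonly handled (not necessarily the approach the FAQ above goes on to describe): the Lightning `WandbLogger` exposes the underlying run through its `experiment` attribute, and `wandb.run` points at whatever run is currently active, so you can log extra values alongside what Lightning records for you. The project name below is illustrative only.

```python
import wandb
from pytorch_lightning.loggers import WandbLogger

wandb_logger = WandbLogger(project="my-lightning-project")  # hypothetical project name

# The logger's `experiment` attribute is the underlying wandb run object.
wandb_logger.experiment.log({"setup/extra_metric": 0.0})

# If a run is already active (for example, one you started with wandb.init()),
# wandb.run refers to it and can be used directly.
if wandb.run is not None:
    wandb.run.summary["setup_complete"] = True
```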
diff --git a/content/guides/integrations/metaflow.md b/content/guides/integrations/metaflow.md
index 33baed5e6..a08778129 100644
--- a/content/guides/integrations/metaflow.md
+++ b/content/guides/integrations/metaflow.md
@@ -125,7 +125,7 @@ class WandbExampleFlow(FlowSpec):
## Access your data programmatically
-You can access the information we've captured in three ways: inside the original Python process being logged using the [`wandb` client library](../../../ref/python/README.md), with the [web app UI](../../track/workspaces.md), or programmatically using [our Public API](../../../ref/python/public-api/README.md). `Parameter`s are saved to W&B's [`config`](../../track/config.md) and can be found in the [Overview tab](../../runs/intro.md#overview-tab). `datasets`, `models`, and `others` are saved to [W&B Artifacts](../../artifacts/intro.md) and can be found in the [Artifacts tab](../../runs/intro.md#artifacts-tab). Base python types are saved to W&B's [`summary`](../../track/log/intro.md) dict and can be found in the Overview tab. See our [guide to the Public API](../../track/public-api-guide.md) for details on using the API to get this information programmatically from outside .
+You can access the information we've captured in three ways: inside the original Python process being logged using the [`wandb` client library](../../../ref/python/README/), with the [web app UI](../../track/workspaces/), or programmatically using [our Public API](../../../ref/python/public-api/README/). `Parameter`s are saved to W&B's [`config`](../../track/config/) and can be found in the [Overview tab](../../runs/intro.md#overview-tab). `datasets`, `models`, and `others` are saved to [W&B Artifacts](../../artifacts/intro/) and can be found in the [Artifacts tab](../../runs/intro.md#artifacts-tab). Base Python types are saved to W&B's [`summary`](../../track/log/intro/) dict and can be found in the Overview tab. See our [guide to the Public API](../../track/public-api-guide/) for details on using the API to get this information programmatically from outside.
### Cheat sheet
diff --git a/content/guides/integrations/openai-api.md b/content/guides/integrations/openai-api.md
index 222c6d05d..7d186b6b4 100644
--- a/content/guides/integrations/openai-api.md
+++ b/content/guides/integrations/openai-api.md
@@ -13,7 +13,7 @@ Use the W&B OpenAI API integration to log requests, responses, token counts and
{{% alert %}}
-See the [OpenAI fine-tuning integration](./openai-fine-tuning.md) to learn how to use W&B to track your fine-tuning experiments, models, and datasets and share your results with your colleagues.
+See the [OpenAI fine-tuning integration](./openai-fine-tuning/) to learn how to use W&B to track your fine-tuning experiments, models, and datasets and share your results with your colleagues.
{{% /alert %}}
Log your API inputs and outputs to quickly evaluate the performance of different prompts, compare different model settings (such as temperature), and track other usage metrics such as token usage.
@@ -46,7 +46,7 @@ from wandb.integration.openai import autolog
autolog({"project": "gpt5"})
```
-You can optionally pass a dictionary with argument that `wandb.init()` accepts to `autolog`. This includes a project name, team name, entity, and more. For more information about [`wandb.init`](../../../ref/python/init.md), see the API Reference Guide.
+You can optionally pass a dictionary with arguments that `wandb.init()` accepts to `autolog`. This includes a project name, team name, entity, and more. For more information about [`wandb.init`](../../../ref/python/init/), see the API Reference Guide.
### 2. Call the OpenAI API
Each call you make to the OpenAI API is now logged to W&B automatically.
@@ -68,7 +68,7 @@ response = openai.ChatCompletion.create(**chat_request_kwargs)
### 3. View your OpenAI API inputs and responses
-Click on the W&B [run](../../runs/intro.md) link generated by `autolog` in **step 1**. This redirects you to your project workspace in the W&B App.
+Click on the W&B [run](../../runs/intro/) link generated by `autolog` in **step 1**. This redirects you to your project workspace in the W&B App.
Select a run you created to view the trace table, trace timeline and the model architecture of the OpenAI LLM used.
diff --git a/content/guides/integrations/openai-fine-tuning.md b/content/guides/integrations/openai-fine-tuning.md
index 3cee15324..64dcf850c 100644
--- a/content/guides/integrations/openai-fine-tuning.md
+++ b/content/guides/integrations/openai-fine-tuning.md
@@ -86,7 +86,7 @@ WandbLogger.sync(
| wait_for_job_success | Once an OpenAI fine-tuning job is started, it usually takes a bit of time. To ensure that your metrics are logged to W&B as soon as the fine-tune job is finished, this setting will check every 60 seconds for the status of the fine-tune job to change to `succeeded`. Once the fine-tune job is detected as being successful, the metrics will be synced automatically to W&B. Set to True by default. |
| model_artifact_name | The name of the model artifact that is logged. Defaults to `"model-metadata"`. |
| model_artifact_type | The type of the model artifact that is logged. Defaults to `"model"`. |
-| \*\*kwargs_wandb_init | Aany additional argument passed directly to [`wandb.init()`](../../../ref/python/init.md) |
+| \*\*kwargs_wandb_init | Any additional arguments passed directly to [`wandb.init()`](../../../ref/python/init/) |
## Dataset Versioning and Visualization
@@ -107,7 +107,7 @@ The datasets are visualized as W&B Tables, which allows you to explore, search,
OpenAI gives you an id of the fine-tuned model. Since we don't have access to the model weights, the `WandbLogger` creates a `model_metadata.json` file with all the details (hyperparameters, data file ids, etc.) of the model along with the `fine_tuned_model` id, and logs it as a W&B Artifact.
-This model (metadata) artifact can further be linked to a model in the [W&B Model Registry](../../model_registry/intro.md).
+This model (metadata) artifact can further be linked to a model in the [W&B Model Registry](../../model_registry/intro/).
{{< img src="/images/integrations/openai_model_metadata.png" alt="" >}}
diff --git a/content/guides/integrations/openai-gym.md b/content/guides/integrations/openai-gym.md
index 16723d839..baa664eda 100644
--- a/content/guides/integrations/openai-gym.md
+++ b/content/guides/integrations/openai-gym.md
@@ -14,9 +14,9 @@ weight: 260
Since Gym is no longer an actively maintained project, try out our integration with Gymnasium.
{{% /alert %}}
-If you're using [OpenAI Gym](https://github.com/openai/gym), Weights & Biases automatically logs videos of your environment generated by `gym.wrappers.Monitor`. Just set the `monitor_gym` keyword argument to [`wandb.init`](../../../ref/python/init.md) to `True` or call `wandb.gym.monitor()`.
+If you're using [OpenAI Gym](https://github.com/openai/gym), Weights & Biases automatically logs videos of your environment generated by `gym.wrappers.Monitor`.
Just set the `monitor_gym` keyword argument to [`wandb.init`](../../../ref/python/init/) to `True` or call `wandb.gym.monitor()`. -Our gym integration is very light. We simply [look at the name of the video file](https://github.com/wandb/wandb/blob/master/wandb/integration/gym/__init__.py#L15) being logged from `gym` and name it after that or fall back to `"videos"` if we don't find a match. If you want more control, you can always just manually [log a video](../../track/log/media.md). +Our gym integration is very light. We simply [look at the name of the video file](https://github.com/wandb/wandb/blob/master/wandb/integration/gym/__init__.py#L15) being logged from `gym` and name it after that or fall back to `"videos"` if we don't find a match. If you want more control, you can always just manually [log a video](../../track/log/media/). The [OpenRL Benchmark](http://wandb.me/openrl-benchmark-report) by[ CleanRL](https://github.com/vwxyzjn/cleanrl) uses this integration for its OpenAI Gym examples. You can find source code (including [the specific code used for specific runs](https://wandb.ai/cleanrl/cleanrl.benchmark/runs/2jrqfugg/code?workspace=user-costa-huang)) that demonstrates how to use gym with diff --git a/content/guides/integrations/prodigy.md b/content/guides/integrations/prodigy.md index e3bb6f654..8e6e99c32 100644 --- a/content/guides/integrations/prodigy.md +++ b/content/guides/integrations/prodigy.md @@ -8,7 +8,7 @@ title: Prodigy weight: 290 --- -[Prodigy](https://prodi.gy/) is an annotation tool for creating training and evaluation data for machine learning models, error analysis, data inspection & cleaning. [W&B Tables](../../tables/tables-walkthrough.md) allow you to log, visualize, analyze, and share datasets (and more!) inside W&B. +[Prodigy](https://prodi.gy/) is an annotation tool for creating training and evaluation data for machine learning models, error analysis, data inspection & cleaning. [W&B Tables](../../tables/tables-walkthrough/) allow you to log, visualize, analyze, and share datasets (and more!) inside W&B. The [W&B integration with Prodigy](https://github.com/wandb/wandb/blob/master/wandb/integration/prodigy/prodigy.py) adds simple and easy-to-use functionality to upload your Prodigy-annotated dataset directly to W&B for use with Tables. diff --git a/content/guides/integrations/pytorch.md b/content/guides/integrations/pytorch.md index 5dcc51fb0..938db62d6 100644 --- a/content/guides/integrations/pytorch.md +++ b/content/guides/integrations/pytorch.md @@ -18,7 +18,7 @@ You can also see our [example repo](https://github.com/wandb/examples) for scrip ## Log gradients with `wandb.watch` -To automatically log gradients, you can call [`wandb.watch`](../../ref/python/watch.md) and pass in your PyTorch model. +To automatically log gradients, you can call [`wandb.watch`](../../ref/python/watch/) and pass in your PyTorch model. ```python import wandb @@ -40,7 +40,7 @@ for batch_idx, (data, target) in enumerate(train_loader): wandb.log({"loss": loss}) ``` -If you need to track multiple models in the same script, you can call `wandb.watch` on each model separately. Reference documentation for this function is [here](../../ref/python/watch.md). +If you need to track multiple models in the same script, you can call `wandb.watch` on each model separately. Reference documentation for this function is [here](../../ref/python/watch/). 
{{% alert color="secondary" %}} Gradients, metrics, and the graph won't be logged until `wandb.log` is called after a forward _and_ backward pass. @@ -48,14 +48,14 @@ Gradients, metrics, and the graph won't be logged until `wandb.log` is called af ## Log images and media -You can pass PyTorch `Tensors` with image data into [`wandb.Image`](../../ref/python/data-types/image.md) and utilities from [`torchvision`](https://pytorch.org/vision/stable/index.html) will be used to convert them to images automatically: +You can pass PyTorch `Tensors` with image data into [`wandb.Image`](../../ref/python/data-types/image/) and utilities from [`torchvision`](https://pytorch.org/vision/stable/index.html) will be used to convert them to images automatically: ```python images_t = ... # generate or load images as PyTorch Tensors wandb.log({"examples": [wandb.Image(im) for im in images_t]}) ``` -For more on logging rich media to W&B in PyTorch and other frameworks, check out our [media logging guide](../track/log/media.md). +For more on logging rich media to W&B in PyTorch and other frameworks, check out our [media logging guide](../track/log/media/). If you also want to include information alongside media, like your model's predictions or derived metrics, use a `wandb.Table`. @@ -72,13 +72,13 @@ wandb.log({"mnist_predictions": my_table}) {{< img src="/images/integrations/pytorch_example_table.png" alt="The code above generates a table like this one. This model's looking good!" >}} -For more on logging and visualizing datasets and models, check out our [guide to W&B Tables](../tables/intro.md). +For more on logging and visualizing datasets and models, check out our [guide to W&B Tables](../tables/intro/). ## Profile PyTorch code {{< img src="/images/integrations/pytorch_example_dashboard.png" alt="View detailed traces of PyTorch code execution inside W&B dashboards." >}} -W&B integrates directly with [PyTorch Kineto](https://github.com/pytorch/kineto)'s [Tensorboard plugin](https://github.com/pytorch/kineto/blob/master/tb_plugin/README.md) to provide tools for profiling PyTorch code, inspecting the details of CPU and GPU communication, and identifying bottlenecks and optimizations. +W&B integrates directly with [PyTorch Kineto](https://github.com/pytorch/kineto)'s [Tensorboard plugin](https://github.com/pytorch/kineto/blob/master/tb_plugin/README/) to provide tools for profiling PyTorch code, inspecting the details of CPU and GPU communication, and identifying bottlenecks and optimizations. ```python profile_dir = "path/to/run/tbprofile/" diff --git a/content/guides/integrations/scikit.md b/content/guides/integrations/scikit.md index 2818551a7..549c432c4 100644 --- a/content/guides/integrations/scikit.md +++ b/content/guides/integrations/scikit.md @@ -20,7 +20,7 @@ To get started: 3. Find your API key on the [Authorize page](https://wandb.ai/authorize). -4. If you are using Weights and Biases for the first time,check out a [quickstart](../../quickstart.md) +4. 
If you are using Weights and Biases for the first time, check out a [quickstart](../../quickstart/)
{{< tabpane text=true >}}
{{% tab header="Command Line" value="cli" %}}
diff --git a/content/guides/integrations/spacy.md b/content/guides/integrations/spacy.md
index 901e447b6..26c603b6c 100644
--- a/content/guides/integrations/spacy.md
+++ b/content/guides/integrations/spacy.md
@@ -52,7 +52,7 @@ model_log_interval = 1000
| ---------------------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| `project_name` | `str`. The name of the W&B Project. The project will be created automatically if it doesn’t exist yet. |
| `remove_config_values` | `List[str]` . A list of values to exclude from the config before it is uploaded to W&B. `[]` by default. |
-| `model_log_interval` | `Optional int`. `None` by default. If set, [model versioning](../model_registry/intro.md) with [Artifacts](../artifacts/intro.md)will be enabled. Pass in the number of steps to wait between logging model checkpoints. `None` by default. |
+| `model_log_interval` | `Optional int`. `None` by default. If set, [model versioning](../model_registry/intro/) with [Artifacts](../artifacts/intro/) will be enabled. Pass in the number of steps to wait between logging model checkpoints. |
| `log_dataset_dir` | `Optional str`. If passed a path, the dataset will be uploaded as an Artifact at the beginning of training. `None` by default. |
| `entity` | `Optional str` . If passed, the run will be created in the specified entity |
| `run_name` | `Optional str` . If specified, the run will be created with the specified name. |
@@ -89,4 +89,4 @@ python -m spacy train \
{{< /tabpane >}}
-When training begins, a link to your training run's [W&B page](../runs/intro.md) will be output which will take you to this run's experiment tracking [dashboard](../track/workspaces.md) in the Weights & Biases web UI.
\ No newline at end of file
+When training begins, a link to your training run's [W&B page](../runs/intro/) will be output, which will take you to this run's experiment tracking [dashboard](../track/workspaces/) in the Weights & Biases web UI.
\ No newline at end of file
diff --git a/content/guides/integrations/torchtune.md b/content/guides/integrations/torchtune.md
index e9f065d41..1b15c4279 100644
--- a/content/guides/integrations/torchtune.md
+++ b/content/guides/integrations/torchtune.md
@@ -48,7 +48,7 @@ log_every_n_steps: 5
Enable W&B logging on the recipe's config file by modifying the `metric_logger` section. Change the `_component_` to `torchtune.utils.metric_logging.WandBLogger` class. You can also pass a `project` name and `log_every_n_steps` to customize the logging behavior.
-You can also pass any other `kwargs` as you would to the [wandb.init](../../ref/python/init.md) method. For example, if you are working on a team, you can pass the `entity` argument to the `WandBLogger` class to specify the team name.
+You can also pass any other `kwargs` as you would to the [wandb.init](../../ref/python/init/) method. For example, if you are working on a team, you can pass the `entity` argument to the `WandBLogger` class to specify the team name.
{{< tabpane text=true >}}
{{% tab header="Recipe's Config" value="config" %}}
@@ -123,7 +123,7 @@ This is a fast evolving library, the current metrics are subject to change.
If y The torchtune library supports various [checkpoint formats](https://pytorch.org/torchtune/stable/deep_dives/checkpointer.html). Depending on the origin of the model you are using, you should switch to the appropriate [checkpointer class](https://pytorch.org/torchtune/stable/deep_dives/checkpointer.html). -If you want to save the model checkpoints to [W&B Artifacts](../artifacts/intro.md), the simplest solution is to override the `save_checkpoint` functions inside the corresponding recipe. +If you want to save the model checkpoints to [W&B Artifacts](../artifacts/intro/), the simplest solution is to override the `save_checkpoint` functions inside the corresponding recipe. Here is an example of how you can override the `save_checkpoint` function to save the model checkpoints to W&B Artifacts. diff --git a/content/guides/integrations/ultralytics.md b/content/guides/integrations/ultralytics.md index 0c6dbabde..1f8b37cd9 100644 --- a/content/guides/integrations/ultralytics.md +++ b/content/guides/integrations/ultralytics.md @@ -53,7 +53,7 @@ from wandb.integration.ultralytics import add_wandb_callback from ultralytics import YOLO ``` -Initialize the `YOLO` model of your choice, and invoke the `add_wandb_callback` function on it before performing inference with the model. This ensures that when you perform training, fine-tuning, validation, or inference, it automatically saves the experiment logs and the images, overlaid with both ground-truth and the respective prediction results using the [interactive overlays for computer vision tasks](../track/log/media#image-overlays-in-tables) on W&B along with additional insights in a [`wandb.Table`](../tables/intro.md). +Initialize the `YOLO` model of your choice, and invoke the `add_wandb_callback` function on it before performing inference with the model. This ensures that when you perform training, fine-tuning, validation, or inference, it automatically saves the experiment logs and the images, overlaid with both ground-truth and the respective prediction results using the [interactive overlays for computer vision tasks](../track/log/media#image-overlays-in-tables) on W&B along with additional insights in a [`wandb.Table`](../tables/intro/). ```python # Initialize YOLO Model @@ -76,7 +76,7 @@ Here's how experiments tracked using W&B for an Ultralytics training or fine-tun
YOLO Fine-tuning Experiments
-Here's how epoch-wise validation results are visualized using a [W&B Table](../tables/intro.md): +Here's how epoch-wise validation results are visualized using a [W&B Table](../tables/intro/):
WandB Validation Visualization Table
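To make the training flow described above concrete, here is a minimal sketch of attaching the callback before fine-tuning. It assumes the `yolov8n.pt` checkpoint and the `coco128.yaml` sample dataset that ship with the `ultralytics` package; swap in your own model and data as needed.

```python
import wandb
from ultralytics import YOLO
from wandb.integration.ultralytics import add_wandb_callback

# Load a pretrained detection checkpoint (assumed to be available for download)
model = YOLO("yolov8n.pt")

# Attach the W&B callback before training so metrics, overlaid images,
# and validation tables are logged automatically
add_wandb_callback(model)

# Fine-tune on a small sample dataset; "ultralytics" is used as the W&B project name
model.train(project="ultralytics", data="coco128.yaml", epochs=5)

# Close the underlying W&B run once training is done
wandb.finish()
```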
@@ -108,14 +108,14 @@ Download a few images to test the integration on. You can use still images, vide !wget https://raw.githubusercontent.com/wandb/examples/ultralytics/colabs/ultralytics/assets/img5.png ``` -Next, initialize a W&B [run](../runs/intro.md) using `wandb.init`. +Next, initialize a W&B [run](../runs/intro/) using `wandb.init`. ```python # Initialize W&B run wandb.init(project="ultralytics", job_type="inference") ``` -Next, initialize your desired `YOLO` model and invoke the `add_wandb_callback` function on it before you perform inference with the model. This ensures that when you perform inference, it automatically logs the images overlaid with your [interactive overlays for computer vision tasks](../track/log/media#image-overlays-in-tables) along with additional insights in a [`wandb.Table`](../tables/intro.md). +Next, initialize your desired `YOLO` model and invoke the `add_wandb_callback` function on it before you perform inference with the model. This ensures that when you perform inference, it automatically logs the images overlaid with your [interactive overlays for computer vision tasks](../track/log/media#image-overlays-in-tables) along with additional insights in a [`wandb.Table`](../tables/intro/). ```python # Initialize YOLO Model diff --git a/content/guides/integrations/xgboost.md b/content/guides/integrations/xgboost.md index cca58b168..db25c1a5b 100644 --- a/content/guides/integrations/xgboost.md +++ b/content/guides/integrations/xgboost.md @@ -63,7 +63,7 @@ For additional examples, check out the [repository of examples on GitHub](https: ## Tune your hyperparameters with Sweeps -Attaining the maximum performance out of models requires tuning hyperparameters, like tree depth and learning rate. Weights & Biases includes [Sweeps](../sweeps/intro.md), a powerful toolkit for configuring, orchestrating, and analyzing large hyperparameter testing experiments. +Attaining the maximum performance out of models requires tuning hyperparameters, like tree depth and learning rate. Weights & Biases includes [Sweeps](../sweeps/intro/), a powerful toolkit for configuring, orchestrating, and analyzing large hyperparameter testing experiments. {{< cta-button colabLink="https://colab.research.google.com/github/wandb/examples/blob/master/colabs/boosting/Using_W%26B_Sweeps_with_XGBoost.ipynb" >}} diff --git a/content/guides/integrations/yolov5.md b/content/guides/integrations/yolov5.md index 200c4bd97..f58f82488 100644 --- a/content/guides/integrations/yolov5.md +++ b/content/guides/integrations/yolov5.md @@ -18,7 +18,7 @@ All W&B logging features are compatible with data-parallel multi-GPU training, s {{% /alert %}} ## Track core experiments -Simply by installing `wandb`, you'll activate the built-in W&B [logging features](../track/log/intro.md): system metrics, model metrics, and media logged to interactive [Dashboards](../track/workspaces.md). +Simply by installing `wandb`, you'll activate the built-in W&B [logging features](../track/log/intro/): system metrics, model metrics, and media logged to interactive [Dashboards](../track/workspaces/). ```python pip install wandb @@ -34,9 +34,9 @@ Just follow the links printed to the standard out by wandb. By passing a few simple command line arguments to YOLO, you can take advantage of even more W&B features. -* Passing a number to `--save_period` will turn on [model versioning](../model_registry/intro.md). At the end of every `save_period` epochs, the model weights will be saved to W&B. 
The best-performing model on the validation set will be tagged automatically. +* Passing a number to `--save_period` will turn on [model versioning](../model_registry/intro/). At the end of every `save_period` epochs, the model weights will be saved to W&B. The best-performing model on the validation set will be tagged automatically. * Turning on the `--upload_dataset` flag will also upload the dataset for data versioning. -* Passing a number to `--bbox_interval` will turn on [data visualization](../intro.md). At the end of every `bbox_interval` epochs, the outputs of the model on the validation set will be uploaded to W&B. +* Passing a number to `--bbox_interval` will turn on [data visualization](../intro/). At the end of every `bbox_interval` epochs, the outputs of the model on the validation set will be uploaded to W&B. {{< tabpane text=true >}} {{% tab header="Model Versioning Only" value="modelversioning" %}} diff --git a/content/guides/models/_index.md b/content/guides/models/_index.md index 52075a467..9cc46ee0b 100644 --- a/content/guides/models/_index.md +++ b/content/guides/models/_index.md @@ -13,10 +13,10 @@ W&B Models is the system of record for ML Practitioners who want to organize the With W&B Models, you can: -- Track and visualize all [ML experiments](./track/intro.md). -- Optimize and fine-tune models at scale with [hyperparameter sweeps](./sweeps/intro.md). -- [Maintain a centralized hub of all models](./model_registry/intro.md), with a seamless handoff point to devops and deployment -- Configure custom automations that trigger key workflows for [model CI/CD](./model_registry/model-registry-automations.md). +- Track and visualize all [ML experiments](./track/intro/). +- Optimize and fine-tune models at scale with [hyperparameter sweeps](./sweeps/intro/). +- [Maintain a centralized hub of all models](./model_registry/intro/), with a seamless handoff point to devops and deployment +- Configure custom automations that trigger key workflows for [model CI/CD](./model_registry/model-registry-automations/). diff --git a/content/guides/models/app/features/cascade-settings.md b/content/guides/models/app/features/cascade-settings.md index e56654f04..2bfe3cb52 100644 --- a/content/guides/models/app/features/cascade-settings.md +++ b/content/guides/models/app/features/cascade-settings.md @@ -29,7 +29,7 @@ Configure a workspaces layout to define the overall structure of the workspace. {{< img src="/images/app_ui/workspace_layout_settings.png" alt="" >}} -The workspace layout options page shows whether the workspace generates panels automatically or manually. To adjust a workspace's panel generation mode, refer to [Panels](panels/intro.md). +The workspace layout options page shows whether the workspace generates panels automatically or manually. To adjust a workspace's panel generation mode, refer to [Panels](panels/intro/). This table describes each workspace layout option. @@ -55,9 +55,9 @@ You can edit two main settings within **Line plots** settings: **Data** and **Di | ----- | ----- | | **X axis** | The scale of the x-axis in line plots. The x-axis is set to **Step** by default. See the proceeding table for the list of x-axis options. | | **Range** | Minimum and maximum settings to display for x axis. | -| **Smoothing** | Change the smoothing on the line plot. For more information about smoothing, see [Smooth line plots](./panels/line-plot/smoothing.md). | +| **Smoothing** | Change the smoothing on the line plot. 
For more information about smoothing, see [Smooth line plots](./panels/line-plot/smoothing/). | | **Outliers** | Rescale to exclude outliers from the default plot min and max scale. | -| **Point aggregation method** | Improve data visualization accuracy and performance. See [Point aggregation](./panels/line-plot/sampling.md) for more information. | +| **Point aggregation method** | Improve data visualization accuracy and performance. See [Point aggregation](./panels/line-plot/sampling/) for more information. | | **Max number of runs or groups** | Limit the number of runs or groups displayed on the line plot. | In addition to **Step**, there are other options for the x-axis: diff --git a/content/guides/models/app/features/custom-charts/_index.md b/content/guides/models/app/features/custom-charts/_index.md index 787b5de5b..5214c928e 100644 --- a/content/guides/models/app/features/custom-charts/_index.md +++ b/content/guides/models/app/features/custom-charts/_index.md @@ -21,7 +21,7 @@ Use **Custom Charts** to create charts that aren't possible right now in the def ### How it works -1. **Log data**: From your script, log [config](../../../../guides/track/config.md) and summary data as you normally would when running with W&B. To visualize a list of multiple values logged at one specific time, use a custom`wandb.Table` +1. **Log data**: From your script, log [config](../../../../guides/track/config/) and summary data as you normally would when running with W&B. To visualize a list of multiple values logged at one specific time, use a custom`wandb.Table` 2. **Customize the chart**: Pull in any of this logged data with a [GraphQL](https://graphql.org) query. Visualize the results of your query with [Vega](https://vega.github.io/vega/), a powerful visualization grammar. 3. **Log the chart**: Call your own preset from your script with `wandb.plot_table()`. diff --git a/content/guides/models/app/features/custom-charts/walkthrough.md b/content/guides/models/app/features/custom-charts/walkthrough.md index b3586363b..148dbc566 100644 --- a/content/guides/models/app/features/custom-charts/walkthrough.md +++ b/content/guides/models/app/features/custom-charts/walkthrough.md @@ -12,7 +12,7 @@ Use custom charts to control the data you're loading in to a panel and its visua ## 1. Log data to W&B -First, log data in your script. Use [wandb.config](../../../../guides/track/config.md) for single points set at the beginning of training, like hyperparameters. Use [wandb.log()](../../../../guides/track/log/intro.md) for multiple points over time, and log custom 2D arrays with `wandb.Table()`. We recommend logging up to 10,000 data points per logged key. +First, log data in your script. Use [wandb.config](../../../../guides/track/config/) for single points set at the beginning of training, like hyperparameters. Use [wandb.log()](../../../../guides/track/log/intro/) for multiple points over time, and log custom 2D arrays with `wandb.Table()`. We recommend logging up to 10,000 data points per logged key. 
```python # Logging a custom table of data diff --git a/content/guides/models/app/features/panels/_index.md b/content/guides/models/app/features/panels/_index.md index 3605dba25..f378c7755 100644 --- a/content/guides/models/app/features/panels/_index.md +++ b/content/guides/models/app/features/panels/_index.md @@ -10,7 +10,7 @@ cascade: - url: guides/app/features/panels/:filename --- -Use workspace panel visualizations to explore your [logged data](/ref/python/log.md) by key, visualize the relationships between hyperparameters and output metrics, and more. +Use workspace panel visualizations to explore your [logged data](/ref/python/log/) by key, visualize the relationships between hyperparameters and output metrics, and more. ## Workspace modes @@ -58,7 +58,7 @@ To add a custom panel to your workspace: 1. Select the type of panel you’d like to create. 1. Follow the prompts to configure the panel. -To learn more about the options for each type of panel, refer to the relevant section below, such as [Line plots](line-plot/intro.md) or [Bar plots](bar-plot.md). +To learn more about the options for each type of panel, refer to the relevant section below, such as [Line plots](line-plot/intro/) or [Bar plots](bar-plot/). ## Manage panels diff --git a/content/guides/models/app/features/panels/code.md b/content/guides/models/app/features/panels/code.md index 983caf1d5..ea8b274a7 100644 --- a/content/guides/models/app/features/panels/code.md +++ b/content/guides/models/app/features/panels/code.md @@ -32,7 +32,7 @@ import wandb wandb.init(settings=wandb.Settings(code_dir=".")) ``` -This captures all python source code files in the current directory and all subdirectories as an [artifact](../../../../ref/python/artifact.md). For more control over the types and locations of source code files that are saved, see the [reference docs](../../../../ref/python/run.md#log_code). +This captures all python source code files in the current directory and all subdirectories as an [artifact](../../../../ref/python/artifact/). For more control over the types and locations of source code files that are saved, see the [reference docs](../../../../ref/python/run.md#log_code). ### Set code saving in the UI diff --git a/content/guides/models/app/features/panels/line-plot/reference.md b/content/guides/models/app/features/panels/line-plot/reference.md index 2392f0bc4..54f4cb40c 100644 --- a/content/guides/models/app/features/panels/line-plot/reference.md +++ b/content/guides/models/app/features/panels/line-plot/reference.md @@ -62,7 +62,7 @@ You can aggregate all of the runs by turning on grouping, or group over an indiv ## Smoothing -You can set the [smoothing coefficient](../../../../../support/formula_smoothing_algorithm.md) to be between 0 and 1 where 0 is no smoothing and 1 is maximum smoothing. +You can set the [smoothing coefficient](../../../../../support/formula_smoothing_algorithm/) to be between 0 and 1 where 0 is no smoothing and 1 is maximum smoothing. ## Ignore outliers diff --git a/content/guides/models/app/features/panels/line-plot/sampling.md b/content/guides/models/app/features/panels/line-plot/sampling.md index b5124e5e3..c22fbef7a 100644 --- a/content/guides/models/app/features/panels/line-plot/sampling.md +++ b/content/guides/models/app/features/panels/line-plot/sampling.md @@ -120,7 +120,7 @@ By default, W&B uses full fidelity mode. 
To enable random sampling, follow these ### Access non sampled data -You can access the complete history of metrics logged during a run using the [W&B Run API](../../../../../ref/python/public-api/run.md). The following example demonstrates how to retrieve and process the loss values from a specific run: +You can access the complete history of metrics logged during a run using the [W&B Run API](../../../../../ref/python/public-api/run/). The following example demonstrates how to retrieve and process the loss values from a specific run: ```python diff --git a/content/guides/models/app/features/panels/parallel-coordinates.md b/content/guides/models/app/features/panels/parallel-coordinates.md index 71b485999..cda54e963 100644 --- a/content/guides/models/app/features/panels/parallel-coordinates.md +++ b/content/guides/models/app/features/panels/parallel-coordinates.md @@ -12,7 +12,7 @@ Parallel coordinates charts summarize the relationship between large numbers of {{< img src="/images/app_ui/parallel_coordinates.gif" alt="" >}} -* **Axes**: Different hyperparameters from [`wandb.config`](../../../../guides/track/config.md) and metrics from [`wandb.log`](../../../../guides/track/log/intro.md). +* **Axes**: Different hyperparameters from [`wandb.config`](../../../../guides/track/config/) and metrics from [`wandb.log`](../../../../guides/track/log/intro/). * **Lines**: Each line represents a single run. Mouse over a line to see a tooltip with details about the run. All lines that match the current filters will be shown, but if you turn off the eye, lines will be grayed out. ## Panel Settings diff --git a/content/guides/models/app/settings-page/_index.md b/content/guides/models/app/settings-page/_index.md index dfa0d9efa..df6c76068 100644 --- a/content/guides/models/app/settings-page/_index.md +++ b/content/guides/models/app/settings-page/_index.md @@ -8,6 +8,6 @@ menu: title: Settings --- -Within your individual user account you can edit: your profile picture, display name, geography location, biography information, emails associated to your account, and manage alerts for runs. You can also use the settings page to link your GitHub repository and delete your account. For more information, see [User settings](./user-settings.md). +Within your individual user account you can edit: your profile picture, display name, geography location, biography information, emails associated to your account, and manage alerts for runs. You can also use the settings page to link your GitHub repository and delete your account. For more information, see [User settings](./user-settings/). -Use the team settings page to invite or remove new members to a team, manage alerts for team runs, change privacy settings, and view and manage storage usage. For more information about team settings, see [Team settings](./team-settings.md). \ No newline at end of file +Use the team settings page to invite or remove new members to a team, manage alerts for team runs, change privacy settings, and view and manage storage usage. For more information about team settings, see [Team settings](./team-settings/). 
\ No newline at end of file diff --git a/content/guides/models/app/settings-page/storage.md b/content/guides/models/app/settings-page/storage.md index c0e42fcb3..706fef839 100644 --- a/content/guides/models/app/settings-page/storage.md +++ b/content/guides/models/app/settings-page/storage.md @@ -13,11 +13,11 @@ If you are approaching or exceeding your storage limit, there are multiple paths ## Manage storage consumption W&B offers different methods of optimizing your storage consumption: -- Use [reference artifacts](../../artifacts/track-external-files.md) to track files saved outside the W&B system, instead of uploading them to W&B storage. -- Use an [external cloud storage bucket](../features/teams.md) for storage. *(Enterprise only)* +- Use [reference artifacts](../../artifacts/track-external-files/) to track files saved outside the W&B system, instead of uploading them to W&B storage. +- Use an [external cloud storage bucket](../features/teams/) for storage. *(Enterprise only)* ## Delete data You can also choose to delete data to remain under your storage limit. There are several ways to do this: - Delete data interactively with the app UI. -- [Set a TTL policy](../../artifacts/ttl.md) on Artifacts so they are automatically deleted. \ No newline at end of file +- [Set a TTL policy](../../artifacts/ttl/) on Artifacts so they are automatically deleted. \ No newline at end of file diff --git a/content/guides/models/app/settings-page/team-settings.md b/content/guides/models/app/settings-page/team-settings.md index 285f2b3db..a574f6d33 100644 --- a/content/guides/models/app/settings-page/team-settings.md +++ b/content/guides/models/app/settings-page/team-settings.md @@ -39,7 +39,7 @@ Toggle the switch next to the event type you want to receive alerts from. Weight * **Runs finished**: whether a Weights and Biases run successfully finished. * **Run crashed**: if a run has failed to finish. -For more information about how to set up and manage alerts, see [Send alerts with wandb.alert](../../runs/alert.md). +For more information about how to set up and manage alerts, see [Send alerts with wandb.alert](../../runs/alert/). ## Privacy @@ -54,4 +54,4 @@ The **Usage** section describes the total memory usage the team has consumed on ## Storage -The **Storage** section describes the cloud storage bucket configuration that is being used for the team's data. For more information, see [Secure Storage Connector](../features/teams.md#secure-storage-connector) or check out our [W&B Server](../../hosting/data-security/secure-storage-connector.md) docs if you are self-hosting. \ No newline at end of file +The **Storage** section describes the cloud storage bucket configuration that is being used for the team's data. For more information, see [Secure Storage Connector](../features/teams.md#secure-storage-connector) or check out our [W&B Server](../../hosting/data-security/secure-storage-connector/) docs if you are self-hosting. \ No newline at end of file diff --git a/content/guides/models/app/settings-page/teams.md b/content/guides/models/app/settings-page/teams.md index c51c30b9c..ae9d13410 100644 --- a/content/guides/models/app/settings-page/teams.md +++ b/content/guides/models/app/settings-page/teams.md @@ -53,7 +53,7 @@ Select a team role when you invite colleagues to join a team. There are followin - **Member**: A regular member of the team. An admin invites a team member by email. A team member cannot invite other members. Team members can only delete runs and sweep runs created by that member. 
Suppose you have two members A and B. Member B moves a Run from team B's project to a different project owned by Member A. Member A can not delete the Run Member B moved to Member A's project. Only the member that creates the Run, or the team admin, can delete the run. - **View-Only (Enterprise-only feature)**: View-Only members can view assets within the team such as runs, reports, and workspaces. They can follow and comment on reports, but they can not create, edit, or delete project overview, reports, or runs. View-Only members do not have an API key. - **Custom roles (Enterprise-only feature)**: Custom roles allow organization admins to compose new roles based on either of the **View-Only** or **Member** roles, together with additional permissions to achieve fine-grained access control. Team admins can then assign any of those custom roles to users in their respective teams. Refer to [Introducing Custom Roles for W&B Teams](https://wandb.ai/wandb_fc/announcements/reports/Introducing-Custom-Roles-for-W-B-Teams--Vmlldzo2MTMxMjQ3) for details. -- **Service accounts (Enterprise-only feature)**: Refer to [Use service accounts to automate workflows](../../hosting/iam/service-accounts.md). +- **Service accounts (Enterprise-only feature)**: Refer to [Use service accounts to automate workflows](../../hosting/iam/service-accounts/). {{% alert %}} W&B recommends to have more than one admin in a team. It is a best practice to ensure that admin operations can continue when the primary admin is not available. @@ -80,7 +80,7 @@ The proceeding table lists permissions that apply to all projects across a given |Add/Remove Registry Admins | | | X | X | |Add/Remove Protected Aliases| | | X | | -See the [Model Registry](../../model_registry/access_controls.md) chapter for more information about protected aliases. +See the [Model Registry](../../model_registry/access_controls/) chapter for more information about protected aliases. ### Reports Report permissions grant access to create, view, and edit reports. The proceeding table lists permissions that apply to all reports across a given team. @@ -140,7 +140,7 @@ For example, to add a Twitter follow badge, add `[{{< img src="https://img.shiel ## Team trials -See the [pricing page](https://wandb.ai/site/pricing) for more information on W&B plans. You can download all your data at any time, either using the dashboard UI or the [Export API](../../../ref/python/public-api/README.md). +See the [pricing page](https://wandb.ai/site/pricing) for more information on W&B plans. You can download all your data at any time, either using the dashboard UI or the [Export API](../../../ref/python/public-api/README/). ## Privacy settings @@ -151,4 +151,4 @@ You can see the privacy settings of all team projects on the team settings page: ### Secure storage connector -The team-level secure storage connector allows teams to use their own cloud storage bucket with W&B. This provides greater data access control and data isolation for teams with highly sensitive data or strict compliance requirements. Refer to [Secure Storage Connector](../../hosting/data-security/secure-storage-connector.md) for more information. \ No newline at end of file +The team-level secure storage connector allows teams to use their own cloud storage bucket with W&B. This provides greater data access control and data isolation for teams with highly sensitive data or strict compliance requirements. 
Refer to [Secure Storage Connector](../../hosting/data-security/secure-storage-connector/) for more information. \ No newline at end of file diff --git a/content/guides/models/app/settings-page/user-settings.md b/content/guides/models/app/settings-page/user-settings.md index 8219efc8b..e25c44392 100644 --- a/content/guides/models/app/settings-page/user-settings.md +++ b/content/guides/models/app/settings-page/user-settings.md @@ -34,12 +34,12 @@ Within the **Beta Features** section you can optionally enable fun add-ons and s ## Alerts -Get notified when your runs crash, finish, or set custom alerts with [wandb.alert()](../../runs/alert.md). Receive notifications either through Email or Slack. Toggle the switch next to the event type you want to receive alerts from. +Get notified when your runs crash, finish, or set custom alerts with [wandb.alert()](../../runs/alert/). Receive notifications either through Email or Slack. Toggle the switch next to the event type you want to receive alerts from. * **Runs finished**: whether a Weights and Biases run successfully finished. * **Run crashed**: notification if a run has failed to finish. -For more information about how to set up and manage alerts, see [Send alerts with wandb.alert](../../runs/alert.md). +For more information about how to set up and manage alerts, see [Send alerts with wandb.alert](../../runs/alert/). ## Personal GitHub integration diff --git a/content/guides/models/automations/model-registry-automations.md b/content/guides/models/automations/model-registry-automations.md index cd97f1326..6bb542ff1 100644 --- a/content/guides/models/automations/model-registry-automations.md +++ b/content/guides/models/automations/model-registry-automations.md @@ -26,7 +26,7 @@ An *event* is a change that takes place in the W&B ecosystem. The Model Registry - Use **Linking a new artifact to a registered model** to test new model candidates. - Use **Adding a new alias to a version of the registered model** to specify an alias that represents a special step of your workflow, like `deploy`, and any time a new model version has that alias applied. -See [Link a model version](./link-model-version.md) and [Create a custom alias](../artifacts/create-a-custom-alias.md). +See [Link a model version](./link-model-version/) and [Create a custom alias](../artifacts/create-a-custom-alias/). ## Create a webhook automation @@ -44,7 +44,7 @@ To use a secret in your webhook, you must first add that secret to your team's s {{% alert %}} * Only W&B Admins can create, edit, or delete a secret. * Skip this section if the external server you send HTTP POST requests to does not use secrets. -* Secrets are also available if you use [W&B Server](../hosting/intro.md) in an Azure, GCP, or AWS deployment. Connect with your W&B account team to discuss how you can use secrets in W&B if you use a different deployment type. +* Secrets are also available if you use [W&B Server](../hosting/intro/) in an Azure, GCP, or AWS deployment. Connect with your W&B account team to discuss how you can use secrets in W&B if you use a different deployment type. {{% /alert %}} There are two types of secrets W&B suggests that you create when you use a webhook automation: @@ -179,7 +179,7 @@ Verify that your access tokens have required set of permissions to trigger your ${entity_name} --> "" ``` - Use template strings to dynamically pass context from W&B to GitHub Actions and other tools. 
If those tools can call Python scripts, they can consume the registered model artifacts through the [W&B API](../artifacts/download-and-use-an-artifact.md). + Use template strings to dynamically pass context from W&B to GitHub Actions and other tools. If those tools can call Python scripts, they can consume the registered model artifacts through the [W&B API](../artifacts/download-and-use-an-artifact/). For more information about repository dispatch, see the [official documentation on the GitHub Marketplace](https://github.com/marketplace/actions/repository-dispatch). diff --git a/content/guides/models/automations/project-scoped-automations.md b/content/guides/models/automations/project-scoped-automations.md index bee07a588..b360ea1e5 100644 --- a/content/guides/models/automations/project-scoped-automations.md +++ b/content/guides/models/automations/project-scoped-automations.md @@ -14,7 +14,7 @@ Create an automation that triggers when an artifact is changed. Use artifact aut {{% alert %}} Artifact automations are scoped to a project. This means that only events within a project will trigger an artifact automation. -This is in contrast to automations created in the W&B Model Registry. Automations created in the model registry are in scope of the Model Registry. They are triggered when events are performed on model versions linked to the [Model Registry](../model_registry/intro.md). For information on how to create an automations for model versions, see the [Automations for Model CI/CD](../model_registry/model-registry-automations.md) page in the [Model Registry chapter](../model_registry/intro.md). +This is in contrast to automations created in the W&B Model Registry. Automations created in the model registry are in scope of the Model Registry. They are triggered when events are performed on model versions linked to the [Model Registry](../model_registry/intro/). For information on how to create an automations for model versions, see the [Automations for Model CI/CD](../model_registry/model-registry-automations/) page in the [Model Registry chapter](../model_registry/intro/). {{% /alert %}} @@ -42,7 +42,7 @@ To use a secret in your webhook, you must first add that secret to your team's s {{% alert %}} * Only W&B Admins can create, edit, or delete a secret. * Skip this section if the external server you send HTTP POST requests to does not use secrets. -* Secrets are also available if you use [W&B Server](../hosting/intro.md) in an Azure, GCP, or AWS deployment. Connect with your W&B account team to discuss how you can use secrets in W&B if you use a different deployment type. +* Secrets are also available if you use [W&B Server](../hosting/intro/) in an Azure, GCP, or AWS deployment. Connect with your W&B account team to discuss how you can use secrets in W&B if you use a different deployment type. {{% /alert %}} @@ -170,7 +170,7 @@ ${project_name} --> "" ${entity_name} --> "" ``` -Use template strings to dynamically pass context from W&B to GitHub Actions and other tools. If those tools can call Python scripts, they can consume W&B artifacts through the [W&B API](../artifacts/download-and-use-an-artifact.md). +Use template strings to dynamically pass context from W&B to GitHub Actions and other tools. If those tools can call Python scripts, they can consume W&B artifacts through the [W&B API](../artifacts/download-and-use-an-artifact/). 
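As a rough sketch of the downstream half of that flow, the Python step of a CI job could resolve values from the dispatch payload into an artifact path and download it with the public API. The environment variable names below are illustrative assumptions, not part of the webhook contract:

```python
import os
import wandb

# Values forwarded from the repository-dispatch payload by the CI workflow
# (these variable names are hypothetical and chosen for illustration)
entity = os.environ["WANDB_ENTITY"]
project = os.environ["WANDB_PROJECT"]
artifact_version = os.environ["ARTIFACT_VERSION"]  # for example "my-model:v3"

# Use the public API so no run is created just to fetch the artifact
api = wandb.Api()
artifact = api.artifact(f"{entity}/{project}/{artifact_version}")
model_dir = artifact.download()

print(f"Downloaded artifact files to {model_dir}")
```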
For more information about repository dispatch, see the [official documentation on the GitHub Marketplace](https://github.com/marketplace/actions/repository-dispatch). {{% /tab %}} diff --git a/content/guides/models/registry/_index.md b/content/guides/models/registry/_index.md index c719f1bf5..f5fbf8d29 100644 --- a/content/guides/models/registry/_index.md +++ b/content/guides/models/registry/_index.md @@ -17,17 +17,17 @@ W&B Registry is now in public preview. Visit [this](#enable-wb-registry) section {{% /alert %}} -W&B Registry is a curated central repository of [artifact](../artifacts/intro.md) versions within your organization. Users who [have permission](./configure_registry.md) within your organization can [download](./download_use_artifact.md), share, and collaboratively manage the lifecycle of all artifacts, regardless of the team that user belongs to. +W&B Registry is a curated central repository of [artifact](../artifacts/intro/) versions within your organization. Users who [have permission](./configure_registry/) within your organization can [download](./download_use_artifact/), share, and collaboratively manage the lifecycle of all artifacts, regardless of the team that user belongs to. -You can use the Registry to [track artifact versions](./link_version.md), audit the history of an artifact's usage and changes, ensure governance and compliance of your artifacts, and [automate downstream processes such as model CI/CD](../automations/intro.md). +You can use the Registry to [track artifact versions](./link_version/), audit the history of an artifact's usage and changes, ensure governance and compliance of your artifacts, and [automate downstream processes such as model CI/CD](../automations/intro/). In summary, use W&B Registry to: -- [Promote](./link_version.md) artifact versions that satisfy a machine learning task to other users in your organization. -- Organize [artifacts with tags](./organize-with-tags.md) so that you can find or reference specific artifacts. -- Track an [artifact’s lineage](../model_registry/model-lineage.md) and audit the history of changes. -- [Automate](../model_registry/model-registry-automations.md) downstream processes such as model CI/CD. -- [Limit who in your organization](./configure_registry.md) can access artifacts in each registry. +- [Promote](./link_version/) artifact versions that satisfy a machine learning task to other users in your organization. +- Organize [artifacts with tags](./organize-with-tags/) so that you can find or reference specific artifacts. +- Track an [artifact’s lineage](../model_registry/model-lineage/) and audit the history of changes. +- [Automate](../model_registry/model-registry-automations/) downstream processes such as model CI/CD. +- [Limit who in your organization](./configure_registry/) can access artifacts in each registry. @@ -37,13 +37,13 @@ The preceding image shows the Registry App with "Model" and "Dataset" core regis ## Learn the basics -Each organization initially contains two registries that you can use to organize your model and dataset artifacts called **Models** and **Datasets**, respectively. You can create [additional registries to organize other artifact types based on your organization's needs](./registry_types.md). +Each organization initially contains two registries that you can use to organize your model and dataset artifacts called **Models** and **Datasets**, respectively. 
You can create [additional registries to organize other artifact types based on your organization's needs](./registry_types/). -Each [registry](./configure_registry.md) consists of one or more [collections](./create_collection.md). Each collection represents a distinct task or use case. +Each [registry](./configure_registry/) consists of one or more [collections](./create_collection/). Each collection represents a distinct task or use case. {{< img src="/images/registry/homepage_registry.png" >}} -To add an artifact to a registry, you first log a [specific artifact version to W&B](../artifacts/create-a-new-artifact-version.md). Each time you log an artifact, W&B automatically assigns a version to that artifact. Artifact versions use 0 indexing, so the first version is `v0`, the second version is `v1`, and so on. +To add an artifact to a registry, you first log a [specific artifact version to W&B](../artifacts/create-a-new-artifact-version/). Each time you log an artifact, W&B automatically assigns a version to that artifact. Artifact versions use 0 indexing, so the first version is `v0`, the second version is `v1`, and so on. Once you log an artifact to W&B, you can then link that specific artifact version to a collection in the registry. @@ -51,7 +51,7 @@ Once you log an artifact to W&B, you can then link that specific artifact versio The term "link" refers to pointers that connect where W&B stores the artifact and where the artifact is accessible in the registry. W&B does not duplicate artifacts when you link an artifact to a collection. {{% /alert %}} -As an example, the proceeding code example shows how to log and link a fake model artifact called "my_model.txt" to a collection named "first-collection" in the [core Model registry](./registry_types.md). More specifically, the code accomplishes the following: +As an example, the proceeding code example shows how to log and link a fake model artifact called "my_model.txt" to a collection named "first-collection" in the [core Model registry](./registry_types/). More specifically, the code accomplishes the following: 1. Initialize a W&B run. 2. Log the artifact to W&B. @@ -129,7 +129,7 @@ Depending on your use case, explore the following resources to get started with The legacy Model Registry is scheduled for deprecation with the exact date not yet decided. Before deprecating the legacy Model Registry, W&B will migrate the contents of the legacy Model Registry to the W&B Registry. -See [Migrating from legacy Model Registry](./model_registry_eol.md) for more information about the migration process from the legacy Model Registry to W&B Registry. +See [Migrating from legacy Model Registry](./model_registry_eol/) for more information about the migration process from the legacy Model Registry to W&B Registry. Until the migration occurs, W&B supports both the legacy Model Registry and the new Registry. diff --git a/content/guides/models/registry/download_use_artifact.md b/content/guides/models/registry/download_use_artifact.md index 004ade052..bad5d00fd 100644 --- a/content/guides/models/registry/download_use_artifact.md +++ b/content/guides/models/registry/download_use_artifact.md @@ -52,7 +52,7 @@ fetched_artifact = run.use_artifact(artifact_or_name = artifact_name) download_path = fetched_artifact.download() ``` -The `.use_artifact()` method both creates a [run](../runs/intro.md) and marks the artifact you download as the input to that run. 
+The `.use_artifact()` method both creates a [run](../runs/intro/) and marks the artifact you download as the input to that run. Marking an artifact as the input to a run enables W&B to track the lineage of that artifact. If you do not want to create a run, you can use the `wandb.Api()` object to access the artifact: diff --git a/content/guides/models/registry/link_version.md b/content/guides/models/registry/link_version.md index b51a4544d..989d89a17 100644 --- a/content/guides/models/registry/link_version.md +++ b/content/guides/models/registry/link_version.md @@ -14,7 +14,7 @@ When you link an artifact to a registry, this "publishes" that artifact to that In other words, linking an artifact to a registry collection brings that artifact version from a private, project-level scope, to a shared organization level scope. {{% alert %}} -The term "type" refers to the artifact object's type. When you create an artifact object ([`wandb.Artifact`](../../ref/python/artifact.md)), or log an artifact ([`wandb.init.log_artifact`](../../ref/python/run.md#log_artifact)), you specify a type for the `type` parameter. +The term "type" refers to the artifact object's type. When you create an artifact object ([`wandb.Artifact`](../../ref/python/artifact/)), or log an artifact ([`wandb.init.log_artifact`](../../ref/python/run.md#log_artifact)), you specify a type for the `type` parameter. {{% /alert %}} @@ -23,7 +23,7 @@ The term "type" refers to the artifact object's type. When you create an artifac Link an artifact version to a collection interactively or programmatically. {{% alert %}} -Before you link an artifact to a registry, check the types of artifacts that collection permits. For more information about collection types, see "Collection types" within [Create a collection](./create_collection.md). +Before you link an artifact to a registry, check the types of artifacts that collection permits. For more information about collection types, see "Collection types" within [Create a collection](./create_collection/). {{% /alert %}} Based on your use case, follow the instructions described in the tabs below to link an artifact version. @@ -106,7 +106,7 @@ If you want to link an artifact version to the Model registry or the Dataset reg @@ -170,7 +170,7 @@ You can confirm the name of your team by: ```python artifact = wandb.Artifact(name="", type="") ``` - For more information on how to log artifacts, see [Construct artifacts](../artifacts/construct-an-artifact.md). + For more information on how to log artifacts, see [Construct artifacts](../artifacts/construct-an-artifact/). 3. If an artifact is logged to your personal entity, you will need to re-log it to an entity within your organization. ### Confirm the path of a registry in the W&B App UI diff --git a/content/guides/models/registry/model_registry/_index.md b/content/guides/models/registry/model_registry/_index.md index 495d6598c..06f8c2511 100644 --- a/content/guides/models/registry/model_registry/_index.md +++ b/content/guides/models/registry/model_registry/_index.md @@ -12,10 +12,10 @@ cascade: --- {{% alert %}} -W&B will no longer support W&B Model Registry after 2024. Users are encouraged to instead use [W&B Registry](../registry/intro.md) for linking and sharing their model artifacts versions. W&B Registry broadens the capabilities of the legacy W&B Model Registry. For more information about W&B Registry, see the [Registry docs](../registry/intro.md). +W&B will no longer support W&B Model Registry after 2024. 
Users are encouraged to instead use [W&B Registry](../registry/intro/) for linking and sharing their model artifacts versions. W&B Registry broadens the capabilities of the legacy W&B Model Registry. For more information about W&B Registry, see the [Registry docs](../registry/intro/). -W&B will migrate existing model artifacts linked to the legacy Model Registry to the new W&B Registry in the Fall or early Winter of 2024. See [Migrating from legacy Model Registry](../registry/model_registry_eol.md) for information about the migration process. +W&B will migrate existing model artifacts linked to the legacy Model Registry to the new W&B Registry in the Fall or early Winter of 2024. See [Migrating from legacy Model Registry](../registry/model_registry_eol/) for information about the migration process. {{% /alert %}} The W&B Model Registry houses a team's trained models where ML Practitioners can publish candidates for production to be consumed by downstream teams and stakeholders. It is used to house staged/candidate models and manage workflows associated with staging. @@ -24,8 +24,8 @@ The W&B Model Registry houses a team's trained models where ML Practitioners can With W&B Model Registry, you can: -* [Bookmark your best model versions for each machine learning task.](./link-model-version.md) -* [Automate](./model-registry-automations.md) downstream processes and model CI/CD. +* [Bookmark your best model versions for each machine learning task.](./link-model-version/) +* [Automate](./model-registry-automations/) downstream processes and model CI/CD. * Move model versions through its ML lifecycle; from staging to production. * Track a model's lineage and audit the history of changes to production models. @@ -60,7 +60,7 @@ run.link_model(path="./my_model.h5", registered_model_name="MNIST") run.finish() ``` -4. **Connect model transitions to CI/DC workflows**: transition candidate models through workflow stages and [automate downstream actions](./model-registry-automations.md) with webhooks or jobs. +4. **Connect model transitions to CI/DC workflows**: transition candidate models through workflow stages and [automate downstream actions](./model-registry-automations/) with webhooks or jobs. ## How to get started @@ -69,11 +69,11 @@ Depending on your use case, explore the following resources to get started with * Check out the two-part video series: 1. [Logging and registering models](https://www.youtube.com/watch?si=MV7nc6v-pYwDyS-3&v=ZYipBwBeSKE&feature=youtu.be) 2. [Consuming models and automating downstream processes](https://www.youtube.com/watch?v=8PFCrDSeHzw) in the Model Registry. -* Read the [models walkthrough](./walkthrough.md) for a step-by-step outline of the W&B Python SDK commands you could use to create, track, and use a dataset artifact. +* Read the [models walkthrough](./walkthrough/) for a step-by-step outline of the W&B Python SDK commands you could use to create, track, and use a dataset artifact. * Learn about: - * [Protected models and access control](./access_controls.md). - * [How to connect the Model Registry to CI/CD processes](./model-registry-automations.md). - * Set up [Slack notifications](./notifications.md) when a new model version is linked to a registered model. + * [Protected models and access control](./access_controls/). + * [How to connect the Model Registry to CI/CD processes](./model-registry-automations/). + * Set up [Slack notifications](./notifications/) when a new model version is linked to a registered model. 
* Review [this](https://wandb.ai/wandb_fc/model-registry-reports/reports/What-is-an-ML-Model-Registry---Vmlldzo1MTE5MjYx) report on how the Model Registry fits into your ML workflow and the benefits of using one for model management. * Take the W&B [Enterprise Model Management](https://www.wandb.courses/courses/enterprise-model-management) course and learn how to: * Use the W&B Model Registry to manage and version your models, track lineage, and promote models through different lifecycle stages diff --git a/content/guides/models/registry/model_registry/consume-models.md b/content/guides/models/registry/model_registry/consume-models.md index aaf72a6f7..bcac916d9 100644 --- a/content/guides/models/registry/model_registry/consume-models.md +++ b/content/guides/models/registry/model_registry/consume-models.md @@ -13,7 +13,7 @@ Use the W&B Python SDK to download a model artifact that you linked to the Model {{% alert %}} You are responsible for providing additional Python functions, API calls to reconstruct, deserialize your model into a form that you can work with. -W&B suggests that you document information on how to load models into memory with model cards. For more information, see the [Document machine learning models](./create-model-cards.md) page. +W&B suggests that you document information on how to load models into memory with model cards. For more information, see the [Document machine learning models](./create-model-cards/) page. {{% /alert %}} @@ -62,7 +62,7 @@ downloaded_model_path = run.use_model(name=f"{entity/project/model_artifact_name {{% alert title="Planned deprecation for W&B Model Registry in 2024" %}} The proceeding tabs demonstrate how to consume model artifacts using the soon to be deprecated Model Registry. -Use the W&B Registry to track, organize and consume model artifacts. For more information see the [Registry docs](../registry/intro.md). +Use the W&B Registry to track, organize and consume model artifacts. For more information see the [Registry docs](../registry/intro/). {{% /alert %}} {{< tabpane text=true >}} diff --git a/content/guides/models/registry/model_registry/link-model-version.md b/content/guides/models/registry/model_registry/link-model-version.md index d09973257..17daf5a6a 100644 --- a/content/guides/models/registry/model_registry/link-model-version.md +++ b/content/guides/models/registry/model_registry/link-model-version.md @@ -14,7 +14,7 @@ Link a model version to a registered model with the W&B App or programmatically ## Programmatically link a model -Use the [`link_model`](../../ref/python/run.md#link_model) method to programmatically log model files to a W&B run and link it to the [W&B Model Registry](./intro.md). +Use the [`link_model`](../../ref/python/run.md#link_model) method to programmatically log model files to a W&B run and link it to the [W&B Model Registry](./intro/). Ensure to replace other the values enclosed in `<>` with your own: diff --git a/content/guides/models/registry/model_registry/log-model-to-experiment.md b/content/guides/models/registry/model_registry/log-model-to-experiment.md index a730cc956..0d8a7c2ec 100644 --- a/content/guides/models/registry/model_registry/log-model-to-experiment.md +++ b/content/guides/models/registry/model_registry/log-model-to-experiment.md @@ -12,7 +12,7 @@ weight: 3 Track a model, the model's dependencies, and other information relevant to that model with the W&B Python SDK. 
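As a quick illustration of that workflow, a minimal sketch using `run.log_model` (available in recent versions of the W&B Python SDK) might look like the following; the project name, file path, and model name are placeholders:

```python
import wandb

# Start a run for the training experiment
run = wandb.init(project="my-project", job_type="training")

# ... training code that serializes the model to ./model.h5 ...

# Log the serialized model as a model artifact tied to this run
run.log_model(path="./model.h5", name="my-model")

run.finish()
```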
-Under the hood, W&B creates a lineage of [model artifact](./model-management-concepts.md#model-artifact) that you can view with the W&B App UI or programmatically with the W&B Python SDK. See the [Create model lineage map](./model-lineage.md) for more information. +Under the hood, W&B creates a lineage of [model artifact](./model-management-concepts.md#model-artifact) that you can view with the W&B App UI or programmatically with the W&B Python SDK. See the [Create model lineage map](./model-lineage/) for more information. ## How to log a model diff --git a/content/guides/models/registry/model_registry/model-management-concepts.md b/content/guides/models/registry/model_registry/model-management-concepts.md index 33714d60b..a1a6bc9ed 100644 --- a/content/guides/models/registry/model_registry/model-management-concepts.md +++ b/content/guides/models/registry/model_registry/model-management-concepts.md @@ -42,7 +42,7 @@ Model tags are keywords or labels that belong to one or more registered models. Use model tags to organize registered models into categories and to search over those categories in the Model Registry's search bar. Model tags appear at the top of the Registered Model Card. You might choose to use them to group your registered models by ML task, owning team, or priority. The same model tag can be added to multiple registered models to allow for grouping. {{% alert %}} -Model tags, which are labels applied to registered models for grouping and discoverability, are different from [model aliases](#model-alias). Model aliases are unique identifiers or nicknames that you use to fetch a model version programatically. To learn more about using tags to organize the tasks in your Model Registry, see [Organize models](./organize-models.md). +Model tags, which are labels applied to registered models for grouping and discoverability, are different from [model aliases](#model-alias). Model aliases are unique identifiers or nicknames that you use to fetch a model version programatically. To learn more about using tags to organize the tasks in your Model Registry, see [Organize models](./organize-models/). {{% /alert %}} diff --git a/content/guides/models/registry/model_registry/walkthrough.md b/content/guides/models/registry/model_registry/walkthrough.md index fd54ea2e9..dd2e44277 100644 --- a/content/guides/models/registry/model_registry/walkthrough.md +++ b/content/guides/models/registry/model_registry/walkthrough.md @@ -68,7 +68,7 @@ def generate_raw_data(train_size=6000): (x_train, y_train), (x_eval, y_eval) = generate_raw_data() ``` -Next, upload the dataset to W&B. To do this, create an [artifact](../artifacts/intro.md) object and add the dataset to that artifact. +Next, upload the dataset to W&B. To do this, create an [artifact](../artifacts/intro/) object and add the dataset to that artifact. ```python project = "model-registry-dev" @@ -116,7 +116,7 @@ Train a model with the artifact dataset you created in the previous step. ### Declare dataset artifact as an input to the run -Declare the dataset artifact you created in a previous step as the input to the W&B run. This is particularly useful in the context of logging models because declaring an artifact as an input to a run lets you track the dataset (and the version of the dataset) used to train a specific model. W&B uses the information collected to create a [lineage map](./model-lineage.md). +Declare the dataset artifact you created in a previous step as the input to the W&B run. 
This is particularly useful in the context of logging models because declaring an artifact as an input to a run lets you track the dataset (and the version of the dataset) used to train a specific model. W&B uses the information collected to create a [lineage map](./model-lineage/). Use the `use_artifact` API to both declare the dataset artifact as the input of the run and to retrieve the artifact itself. @@ -143,7 +143,7 @@ x_train = train_table.get_column("x_train", convert_to="numpy") y_train = train_table.get_column("y_train", convert_to="numpy") ``` -For more information about tracking the inputs and output of a model, see [Create model lineage](./model-lineage.md) map. +For more information about tracking the inputs and output of a model, see [Create model lineage](./model-lineage/) map. ### Define and train model @@ -210,7 +210,7 @@ model.save(path) ## Log and link a model to the Model Registry -Use the [`link_model`](../../ref/python/run.md#link_model) API to log model one ore more files to a W&B run and link it to the [W&B Model Registry](./intro.md). +Use the [`link_model`](../../ref/python/run.md#link_model) API to log model one ore more files to a W&B run and link it to the [W&B Model Registry](./intro/). ```python path = "./model.h5" diff --git a/content/guides/models/registry/model_registry_eol.md b/content/guides/models/registry/model_registry_eol.md index 612bf0871..8176d5163 100644 --- a/content/guides/models/registry/model_registry_eol.md +++ b/content/guides/models/registry/model_registry_eol.md @@ -7,7 +7,7 @@ title: Migrate from legacy Model Registry weight: 8 --- -W&B will transition assets from the legacy [W&B Model Registry](../model_registry/intro.md) to the new [W&B Registry](./intro.md). This migration will be fully managed and triggered by W&B, requiring no intervention from users. The process is designed to be as seamless as possible, with minimal disruption to existing workflows. +W&B will transition assets from the legacy [W&B Model Registry](../model_registry/intro/) to the new [W&B Registry](./intro/). This migration will be fully managed and triggered by W&B, requiring no intervention from users. The process is designed to be as seamless as possible, with minimal disruption to existing workflows. The transition will take place once the new W&B Registry includes all the functionalities currently available in the Model Registry. W&B will attempt to preserve current workflows, codebases, and references. @@ -31,7 +31,7 @@ Artifacts linked to the legacy Model Registry have team level visibility. This m Restrict who can view and access a custom registry. You can restrict visibility to a registry when you create a custom registry or after you create a custom registry. In a Restricted registry, only selected members can access the content, maintaining privacy and control. For more information about registry visibility, see [Registry visibility types](./configure_registry.md#registry-visibility-types). ### Create custom registries -Unlike the legacy Model Registry, W&B Registry is not limited to models or dataset registries. You can create custom registries tailored to specific workflows or project needs, capable of holding any arbitrary object type. This flexibility allows teams to organize and manage artifacts according to their unique requirements. For more information on how to create a custom registry, see [Create a custom registry](./create_registry.md). +Unlike the legacy Model Registry, W&B Registry is not limited to models or dataset registries. 
You can create custom registries tailored to specific workflows or project needs, capable of holding any arbitrary object type. This flexibility allows teams to organize and manage artifacts according to their unique requirements. For more information on how to create a custom registry, see [Create a custom registry](./create_registry/). {{< img src="/images/registry/mode_reg_eol.png" alt="" >}} @@ -61,7 +61,7 @@ W&B will migrate registered models (now called collections) and associated artif ### Team visibility to organization visibility -After the migration, your model registry will have organization level visibility. You can restrict who has access to a registry by [assigning roles](./configure_registry.md). This helps ensure that only specific members have access to specific registries. +After the migration, your model registry will have organization level visibility. You can restrict who has access to a registry by [assigning roles](./configure_registry/). This helps ensure that only specific members have access to specific registries. The migration will preserve existing permission boundaries of your current team-level registered models (soon to be called collections) in the legacy W&B Model Registry. Permissions currently defined in the legacy Model Registry will be preserved in the new Registry. This means that collections currently restricted to specific team members will remain protected during and after the migration. @@ -83,7 +83,7 @@ Users are encouraged to explore the new features and capabilities available in t Support is available if you are interested in trying the W&B Registry early, or for new users that prefer to start with Registry and not the legacy W&B Model Registry. Contact support@wandb.com or your Sales MLE to enable this functionality. Note that any early migration will be into a BETA version. The BETA version of W&B Registry might not have all the functionality or features of the legacy Model Registry. -For more details and to learn about the full range of features in the W&B Registry, visit the [W&B Registry Guide](./intro.md). +For more details and to learn about the full range of features in the W&B Registry, visit the [W&B Registry Guide](./intro/). ## FAQs diff --git a/content/guides/models/registry/registry_types.md b/content/guides/models/registry/registry_types.md index 785b11ab2..0aa5a2e64 100644 --- a/content/guides/models/registry/registry_types.md +++ b/content/guides/models/registry/registry_types.md @@ -33,7 +33,7 @@ For example, you might create a registry called "Benchmark_Datasets" for organiz A custom registry can have either [organization or restricted visibility](./configure_registry.md#registry-visibility-types). A registry admin can change the visibility of a custom registry from organization to restricted. However, the registry admin can not change a custom registry's visibility from restricted to organizational visibility. -For information on how to create a custom registry, see [Create a custom registry](./create_collection.md). +For information on how to create a custom registry, see [Create a custom registry](./create_collection/). 
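As a rough illustration of how an artifact version ends up in a custom registry collection, the following is a minimal sketch. The entity, project, file path, registry, and collection names are placeholders, and the `wandb-registry-` target-path prefix is an assumption about the registry naming scheme; adjust it to your organization's setup.

```python
import wandb

run = wandb.init(entity="my-team", project="registry-demo")

# Log an artifact version as usual (file path and names are placeholders).
artifact = wandb.Artifact(name="benchmark-dataset", type="dataset")
artifact.add_file("data/benchmark.csv")
run.log_artifact(artifact).wait()  # make sure the version is committed

# Link that version into a collection inside a custom registry.
# The "wandb-registry-" prefix and collection name are assumptions here.
run.link_artifact(
    artifact,
    target_path="wandb-registry-Benchmark_Datasets/nightly-benchmarks",
)
run.finish()
```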
## Summary diff --git a/content/guides/models/sweeps/_index.md b/content/guides/models/sweeps/_index.md index 043f1b5fa..bfd7e4c37 100644 --- a/content/guides/models/sweeps/_index.md +++ b/content/guides/models/sweeps/_index.md @@ -18,7 +18,7 @@ Use W&B Sweeps to automate hyperparameter search and visualize rich, interactive {{< img src="/images/sweeps/intro_what_it_is.png" alt="Draw insights from large hyperparameter tuning experiments with interactive dashboards." >}} ### How it works -Create a sweep with two [W&B CLI](../../ref/cli/README.md) commands: +Create a sweep with two [W&B CLI](../../ref/cli/README/) commands: 1. Initialize a sweep @@ -34,7 +34,7 @@ wandb agent ``` {{% alert %}} -The preceding code snippet, and the colab linked on this page, show how to initialize and create a sweep with wht W&B CLI. See the Sweeps [Walkthrough](./walkthrough.md) for a step-by-step outline of the W&B Python SDK commands to use to define a sweep configuration, initialize a sweep, and start a sweep. +The preceding code snippet, and the colab linked on this page, show how to initialize and create a sweep with wht W&B CLI. See the Sweeps [Walkthrough](./walkthrough/) for a step-by-step outline of the W&B Python SDK commands to use to define a sweep configuration, initialize a sweep, and start a sweep. {{% /alert %}} @@ -43,14 +43,14 @@ The preceding code snippet, and the colab linked on this page, show how to initi Depending on your use case, explore the following resources to get started with W&B Sweeps: -* Read through the [sweeps walkthrough](./walkthrough.md) for a step-by-step outline of the W&B Python SDK commands to use to define a sweep configuration, initialize a sweep, and start a sweep. +* Read through the [sweeps walkthrough](./walkthrough/) for a step-by-step outline of the W&B Python SDK commands to use to define a sweep configuration, initialize a sweep, and start a sweep. * Explore this chapter to learn how to: - * [Add W&B to your code](./add-w-and-b-to-your-code.md) - * [Define sweep configuration](./define-sweep-configuration.md) - * [Initialize sweeps](./initialize-sweeps.md) - * [Start sweep agents](./start-sweep-agents.md) - * [Visualize sweep results](./visualize-sweep-results.md) -* Explore a [curated list of Sweep experiments](./useful-resources.md) that explore hyperparameter optimization with W&B Sweeps. Results are stored in W&B Reports. + * [Add W&B to your code](./add-w-and-b-to-your-code/) + * [Define sweep configuration](./define-sweep-configuration/) + * [Initialize sweeps](./initialize-sweeps/) + * [Start sweep agents](./start-sweep-agents/) + * [Visualize sweep results](./visualize-sweep-results/) +* Explore a [curated list of Sweep experiments](./useful-resources/) that explore hyperparameter optimization with W&B Sweeps. Results are stored in W&B Reports. For a step-by-step video, see: [Tune Hyperparameters Easily with W&B Sweeps](https://www.youtube.com/watch?v=9zrmUIlScdY\&ab_channel=Weights%26Biases). diff --git a/content/guides/models/sweeps/add-w-and-b-to-your-code.md b/content/guides/models/sweeps/add-w-and-b-to-your-code.md index aca36518f..8c09df0e6 100644 --- a/content/guides/models/sweeps/add-w-and-b-to-your-code.md +++ b/content/guides/models/sweeps/add-w-and-b-to-your-code.md @@ -67,12 +67,12 @@ The following code examples demonstrate how to add the W&B Python SDK into your To create a W&B Sweep, we added the following to the code example: 1. Line 1: Import the Weights & Biases Python SDK. -2. 
Line 6: Create a dictionary object where the key-value pairs define the sweep configuration. In the proceeding example, the batch size (`batch_size`), epochs (`epochs`), and the learning rate (`lr`) hyperparameters are varied during each sweep. For more information on how to create a sweep configuration, see [Define sweep configuration](./define-sweep-configuration.md). -3. Line 19: Pass the sweep configuration dictionary to [`wandb.sweep`](../../ref/python/sweep.md). This initializes the sweep. This returns a sweep ID (`sweep_id`). For more information on how to initialize sweeps, see [Initialize sweeps](./initialize-sweeps.md). -4. Line 33: Use the [`wandb.init()`](../../ref/python/init.md) API to generate a background process to sync and log data as a [W&B Run](../../ref/python/run.md). +2. Line 6: Create a dictionary object where the key-value pairs define the sweep configuration. In the proceeding example, the batch size (`batch_size`), epochs (`epochs`), and the learning rate (`lr`) hyperparameters are varied during each sweep. For more information on how to create a sweep configuration, see [Define sweep configuration](./define-sweep-configuration/). +3. Line 19: Pass the sweep configuration dictionary to [`wandb.sweep`](../../ref/python/sweep/). This initializes the sweep. This returns a sweep ID (`sweep_id`). For more information on how to initialize sweeps, see [Initialize sweeps](./initialize-sweeps/). +4. Line 33: Use the [`wandb.init()`](../../ref/python/init/) API to generate a background process to sync and log data as a [W&B Run](../../ref/python/run/). 5. Line 37-39: (Optional) define values from `wandb.config` instead of defining hard coded values. -6. Line 45: Log the metric we want to optimize with [`wandb.log`](../../ref/python/log.md). You must log the metric defined in your configuration. Within the configuration dictionary (`sweep_configuration` in this example) we defined the sweep to maximize the `val_acc` value). -7. Line 54: Start the sweep with the [`wandb.agent`](../../ref/python/agent.md) API call. Provide the sweep ID (line 19), the name of the function the sweep will execute (`function=main`), and set the maximum number of runs to try to four (`count=4`). For more information on how to start W&B Sweep, see [Start sweep agents](./start-sweep-agents.md). +6. Line 45: Log the metric we want to optimize with [`wandb.log`](../../ref/python/log/). You must log the metric defined in your configuration. Within the configuration dictionary (`sweep_configuration` in this example) we defined the sweep to maximize the `val_acc` value). +7. Line 54: Start the sweep with the [`wandb.agent`](../../ref/python/agent/) API call. Provide the sweep ID (line 19), the name of the function the sweep will execute (`function=main`), and set the maximum number of runs to try to four (`count=4`). For more information on how to start W&B Sweep, see [Start sweep agents](./start-sweep-agents/). ```python showLineNumbers @@ -163,7 +163,7 @@ parameters: values: [5, 10, 15] ``` -For more information on how to create a W&B Sweep configuration, see [Define sweep configuration](./define-sweep-configuration.md). +For more information on how to create a W&B Sweep configuration, see [Define sweep configuration](./define-sweep-configuration/). Note that you must provide the name of your Python script for the `program` key in your YAML file. @@ -171,9 +171,9 @@ Next, we add the following to the code example: 1. Line 1-2: Import the Wieghts & Biases Python SDK (`wandb`) and PyYAML (`yaml`). 
PyYAML is used to read in our YAML configuration file. 2. Line 18: Read in the configuration file. -3. Line 21: Use the [`wandb.init()`](../../ref/python/init.md) API to generate a background process to sync and log data as a [W&B Run](../../ref/python/run.md). We pass the config object to the config parameter. +3. Line 21: Use the [`wandb.init()`](../../ref/python/init/) API to generate a background process to sync and log data as a [W&B Run](../../ref/python/run/). We pass the config object to the config parameter. 4. Line 25 - 27: Define hyperparameter values from `wandb.config` instead of using hard coded values. -5. Line 33-39: Log the metric we want to optimize with [`wandb.log`](../../ref/python/log.md). You must log the metric defined in your configuration. Within the configuration dictionary (`sweep_configuration` in this example) we defined the sweep to maximize the `val_acc` value. +5. Line 33-39: Log the metric we want to optimize with [`wandb.log`](../../ref/python/log/). You must log the metric defined in your configuration. Within the configuration dictionary (`sweep_configuration` in this example) we defined the sweep to maximize the `val_acc` value. ```python showLineNumbers @@ -233,21 +233,21 @@ Navigate to your CLI. Within your CLI, set a maximum number of runs the sweep ag NUM=5 ``` -Next, initialize the sweep with the [`wandb sweep`](../../ref/cli/wandb-sweep.md) command. Provide the name of the YAML file. Optionally provide the name of the project for the project flag (`--project`): +Next, initialize the sweep with the [`wandb sweep`](../../ref/cli/wandb-sweep/) command. Provide the name of the YAML file. Optionally provide the name of the project for the project flag (`--project`): ```bash wandb sweep --project sweep-demo-cli config.yaml ``` -This returns a sweep ID. For more information on how to initialize sweeps, see [Initialize sweeps](./initialize-sweeps.md). +This returns a sweep ID. For more information on how to initialize sweeps, see [Initialize sweeps](./initialize-sweeps/). -Copy the sweep ID and replace `sweepID` in the proceeding code snippet to start the sweep job with the [`wandb agent`](../../ref/cli/wandb-agent.md) command: +Copy the sweep ID and replace `sweepID` in the proceeding code snippet to start the sweep job with the [`wandb agent`](../../ref/cli/wandb-agent/) command: ```bash wandb agent --count $NUM your-entity/sweep-demo-cli/sweepID ``` -For more information on how to start sweep jobs, see [Start sweep jobs](./start-sweep-agents.md). +For more information on how to start sweep jobs, see [Start sweep jobs](./start-sweep-agents/). {{% /tab %}} {{< /tabpane >}} diff --git a/content/guides/models/sweeps/define-sweep-configuration/_index.md b/content/guides/models/sweeps/define-sweep-configuration/_index.md index 6b59e69cd..a237ff888 100644 --- a/content/guides/models/sweeps/define-sweep-configuration/_index.md +++ b/content/guides/models/sweeps/define-sweep-configuration/_index.md @@ -18,13 +18,13 @@ Define a sweep configuration either in a [Python dictionary](https://docs.python Define your sweep configuration in a YAML file if you want to initialize a sweep and start a sweep agent from the command line. Define your sweep in a Python dictionary if you initialize a sweep and start a sweep entirely within a Python script or Jupyter notebook. {{% /alert %}} -The following guide describes how to format your sweep configuration. 
See [Sweep configuration options](./sweep-config-keys.md) for a comprehensive list of top-level sweep configuration keys. +The following guide describes how to format your sweep configuration. See [Sweep configuration options](./sweep-config-keys/) for a comprehensive list of top-level sweep configuration keys. ## Basic structure Both sweep configuration format options (YAML and Python dictionary) utilize key-value pairs and nested structures. -Use top-level keys within your sweep configuration to define qualities of your sweep search such as the name of the sweep ([`name`](./sweep-config-keys.md) key), the parameters to search through ([`parameters`](./sweep-config-keys.md#parameters) key), the methodology to search the parameter space ([`method`](./sweep-config-keys.md#method) key), and more. +Use top-level keys within your sweep configuration to define qualities of your sweep search such as the name of the sweep ([`name`](./sweep-config-keys/) key), the parameters to search through ([`parameters`](./sweep-config-keys.md#parameters) key), the methodology to search the parameter space ([`method`](./sweep-config-keys.md#method) key), and more. For example, the proceeding code snippets show the same sweep configuration defined within a YAML file and within a Python dictionary. Within the sweep configuration there are five top level keys specified: `program`, `name`, `method`, `metric` and `parameters`. @@ -75,7 +75,7 @@ parameters: {{< /tabpane >}} -Within the top level `parameters` key, the following keys are nested: `learning_rate`, `batch_size`, `epoch`, and `optimizer`. For each of the nested keys you specify, you can provide one or more values, a distribution, a probability, and more. For more information, see the [parameters](./sweep-config-keys.md#parameters) section in [Sweep configuration options](./sweep-config-keys.md). +Within the top level `parameters` key, the following keys are nested: `learning_rate`, `batch_size`, `epoch`, and `optimizer`. For each of the nested keys you specify, you can provide one or more values, a distribution, a probability, and more. For more information, see the [parameters](./sweep-config-keys.md#parameters) section in [Sweep configuration options](./sweep-config-keys/). ## Double nested parameters @@ -375,7 +375,7 @@ command: {{% /tab %}} {{% tab header="Hydra" %}} -You can change the command to pass arguments the way tools like [Hydra](https://hydra.cc) expect. See [Hydra with W&B](../integrations/other/hydra.md) for more information. +You can change the command to pass arguments the way tools like [Hydra](https://hydra.cc) expect. See [Hydra with W&B](../integrations/other/hydra/) for more information. ```yaml command: diff --git a/content/guides/models/sweeps/define-sweep-configuration/sweep-config-keys.md b/content/guides/models/sweeps/define-sweep-configuration/sweep-config-keys.md index b79a6029e..9bb06125d 100644 --- a/content/guides/models/sweeps/define-sweep-configuration/sweep-config-keys.md +++ b/content/guides/models/sweeps/define-sweep-configuration/sweep-config-keys.md @@ -26,7 +26,7 @@ The proceeding table lists top-level sweep configuration keys and a brief descri | [`command`](#command) | Command structure for invoking and passing arguments to the training script | | `run_cap` | Maximum number of runs for this sweep | -See the [Sweep configuration](./sweep-config-keys.md) structure for more information on how to structure your sweep configuration. 
+See the [Sweep configuration](./sweep-config-keys/) structure for more information on how to structure your sweep configuration. -Bayesian search runs forever unless you stop the process from the command line, within your python script, or [the W&B App UI](./sweeps-ui.md). +Bayesian search runs forever unless you stop the process from the command line, within your python script, or [the W&B App UI](./sweeps-ui/). ### Distribution options for random and Bayesian search Within the `parameter` key, nest the name of the hyperparameter. Next, specify the `distribution` key and specify a distribution for the value. @@ -159,12 +159,12 @@ Specify either `min_iter` or `max_iter` to create a bracket schedule. {{% alert %}} -Hyperband checks which [W&B runs](../../ref/python/run.md) to end once every few minutes. The end run timestamp might differ from the specified brackets if your run or iteration are short. +Hyperband checks which [W&B runs](../../ref/python/run/) to end once every few minutes. The end run timestamp might differ from the specified brackets if your run or iteration are short. {{% /alert %}} ## `command` - + Modify the format and contents with nested values within the `command` key. You can directly include fixed components such as filenames. diff --git a/content/guides/models/sweeps/existing-project.md b/content/guides/models/sweeps/existing-project.md index 7d70f6aa1..6c5b353a4 100644 --- a/content/guides/models/sweeps/existing-project.md +++ b/content/guides/models/sweeps/existing-project.md @@ -25,7 +25,7 @@ Optionally explore the example appear in the W&B App UI dashboard. ## 2. Create a sweep -From your project page, open the [Sweep tab](./sweeps-ui.md) in the sidebar and select **Create Sweep**. +From your project page, open the [Sweep tab](./sweeps-ui/) in the sidebar and select **Create Sweep**. {{< img src="/images/sweeps/sweep1.png" alt="" >}} diff --git a/content/guides/models/sweeps/initialize-sweeps.md b/content/guides/models/sweeps/initialize-sweeps.md index 97c9dcd35..38db23261 100644 --- a/content/guides/models/sweeps/initialize-sweeps.md +++ b/content/guides/models/sweeps/initialize-sweeps.md @@ -13,8 +13,8 @@ W&B uses a _Sweep Controller_ to manage sweeps on the cloud (standard), locally The following code snippets demonstrate how to initialize sweeps with the CLI and within a Jupyter Notebook or Python script. {{% alert color="secondary" %}} -1. Before you initialize a sweep, make sure you have a sweep configuration defined either in a YAML file or a nested Python dictionary object in your script. For more information see, [Define sweep configuration](../../guides/sweeps/define-sweep-configuration.md). -2. Both the W&B Sweep and the W&B Run must be in the same project. Therefore, the name you provide when you initialize W&B ([`wandb.init`](../../ref/python/init.md)) must match the name of the project you provide when you initialize a W&B Sweep ([`wandb.sweep`](../../ref/python/sweep.md)). +1. Before you initialize a sweep, make sure you have a sweep configuration defined either in a YAML file or a nested Python dictionary object in your script. For more information see, [Define sweep configuration](../../guides/sweeps/define-sweep-configuration/). +2. Both the W&B Sweep and the W&B Run must be in the same project. Therefore, the name you provide when you initialize W&B ([`wandb.init`](../../ref/python/init/)) must match the name of the project you provide when you initialize a W&B Sweep ([`wandb.sweep`](../../ref/python/sweep/)). 
{{% /alert %}} @@ -48,7 +48,7 @@ The [`wandb.sweep`](../../ref/python/sweep) function returns the sweep ID. The s Use the W&B CLI to initialize a sweep. Provide the name of your configuration file. Optionally provide the name of the project for the `project` flag. If the project is not specified, the W&B Run is put in an "Uncategorized" project. -Use the [`wandb sweep`](../../ref/cli/wandb-sweep.md) command to initialize a sweep. The proceeding code example initializes a sweep for a `sweeps_demo` project and uses a `config.yaml` file for the configuration. +Use the [`wandb sweep`](../../ref/cli/wandb-sweep/) command to initialize a sweep. The proceeding code example initializes a sweep for a `sweeps_demo` project and uses a `config.yaml` file for the configuration. ```bash wandb sweep --project sweeps_demo config.yaml diff --git a/content/guides/models/sweeps/local-controller.md b/content/guides/models/sweeps/local-controller.md index bc7641fc1..aa9481d12 100644 --- a/content/guides/models/sweeps/local-controller.md +++ b/content/guides/models/sweeps/local-controller.md @@ -22,7 +22,7 @@ Before you get start, you must install the W&B SDK(`wandb`). Type the following pip install wandb sweeps ``` -The following examples assume you already have a configuration file and a training loop defined in a python script or Jupyter Notebook. For more information about how to define a configuration file, see [Define sweep configuration](./define-sweep-configuration.md). +The following examples assume you already have a configuration file and a training loop defined in a python script or Jupyter Notebook. For more information about how to define a configuration file, see [Define sweep configuration](./define-sweep-configuration/). ### Run the local controller from the command line @@ -47,14 +47,14 @@ Next, initialize the sweep: wandb sweep config.yaml ``` -After you initialized the sweep, start a controller with [`wandb controller`](../../ref/python/controller.md): +After you initialized the sweep, start a controller with [`wandb controller`](../../ref/python/controller/): ```bash # wandb sweep command will print a sweep_id wandb controller {user}/{entity}/{sweep_id} ``` -Once you have specified you want to use a local controller, start one or more Sweep agents to execute the sweep. Start a W&B Sweep similar to how you normally would. See [Start sweep agents](../../guides/sweeps/start-sweep-agents.md), for more information. +Once you have specified you want to use a local controller, start one or more Sweep agents to execute the sweep. Start a W&B Sweep similar to how you normally would. See [Start sweep agents](../../guides/sweeps/start-sweep-agents/), for more information. ```bash wandb sweep sweep_ID @@ -64,7 +64,7 @@ wandb sweep sweep_ID The following code snippets demonstrate how to specify and use a local controller with the W&B Python SDK. -The simplest way to use a controller with the Python SDK is to pass the sweep ID to the [`wandb.controller`](../../ref/python/controller.md) method. Next, use the return objects `run` method to start the sweep job: +The simplest way to use a controller with the Python SDK is to pass the sweep ID to the [`wandb.controller`](../../ref/python/controller/) method. 
Next, use the returned object's `run` method to start the sweep job:
```python
sweep = wandb.controller(sweep_id)
diff --git a/content/guides/models/sweeps/parallelize-agents.md b/content/guides/models/sweeps/parallelize-agents.md
index 48c516eb9..e19348cc6 100644
--- a/content/guides/models/sweeps/parallelize-agents.md
+++ b/content/guides/models/sweeps/parallelize-agents.md
@@ -9,7 +9,7 @@ weight: 6
---
-Parallelize your W&B Sweep agents on a multi-core or multi-GPU machine. Before you get started, ensure you have initialized your W&B Sweep. For more information on how to initialize a W&B Sweep, see [Initialize sweeps](./initialize-sweeps.md).
+Parallelize your W&B Sweep agents on a multi-core or multi-GPU machine. Before you get started, ensure you have initialized your W&B Sweep. For more information on how to initialize a W&B Sweep, see [Initialize sweeps](./initialize-sweeps/).
### Parallelize on a multi-CPU machine
@@ -18,7 +18,7 @@ Depending on your use case, explore the proceeding tabs to learn how to parallel
{{< tabpane text=true >}}
{{% tab header="CLI" %}}
-Use the [`wandb agent`](../../ref/cli/wandb-agent.md) command to parallelize your W&B Sweep agent across multiple CPUs with the terminal. Provide the sweep ID that was returned when you [initialized the sweep](./initialize-sweeps.md).
+Use the [`wandb agent`](../../ref/cli/wandb-agent/) command to parallelize your W&B Sweep agent across multiple CPUs with the terminal. Provide the sweep ID that was returned when you [initialized the sweep](./initialize-sweeps/).
1. Open more than one terminal window on your local machine.
2. Copy and paste the code snippet below and replace `sweep_id` with your sweep ID:
@@ -28,7 +28,7 @@
wandb agent sweep_id
```
{{% /tab %}}
{{% tab header="Jupyter Notebook" %}}
-Use the W&B Python SDK library to parallelize your W&B Sweep agent across multiple CPUs within Jupyter Notebooks. Ensure you have the sweep ID that was returned when you [initialized the sweep](./initialize-sweeps.md). In addition, provide the name of the function the sweep will execute for the `function` parameter:
+Use the W&B Python SDK library to parallelize your W&B Sweep agent across multiple CPUs within Jupyter Notebooks. Ensure you have the sweep ID that was returned when you [initialized the sweep](./initialize-sweeps/). In addition, provide the name of the function the sweep will execute for the `function` parameter:
1. Open more than one Jupyter Notebook.
2. Copy and paste the W&B Sweep ID on multiple Jupyter Notebooks to parallelize a W&B Sweep. For example, you can paste the following code snippet on multiple Jupyter notebooks to parallelize your sweep if you have the sweep ID stored in a variable called `sweep_id` and the name of the function is `function_name`:
@@ -46,7 +46,7 @@ wandb.agent(sweep_id=sweep_id, function=function_name)
Follow the procedure outlined to parallelize your W&B Sweep agent across multiple GPUs with a terminal using CUDA Toolkit:
1. Open more than one terminal window on your local machine.
-2. Specify the GPU instance to use with `CUDA_VISIBLE_DEVICES` when you start a W&B Sweep job ([`wandb agent`](../../ref/cli/wandb-agent.md)). Assign `CUDA_VISIBLE_DEVICES` an integer value corresponding to the GPU instance to use.
+2. Specify the GPU instance to use with `CUDA_VISIBLE_DEVICES` when you start a W&B Sweep job ([`wandb agent`](../../ref/cli/wandb-agent/)). Assign `CUDA_VISIBLE_DEVICES` an integer value corresponding to the GPU instance to use. 
For example, suppose you have two NVIDIA GPUs on your local machine. Open a terminal window and set `CUDA_VISIBLE_DEVICES` to `0` (`CUDA_VISIBLE_DEVICES=0`). Replace `sweep_ID` in the proceeding example with the W&B Sweep ID that is returned when you initialized a W&B Sweep: diff --git a/content/guides/models/sweeps/pause-resume-and-cancel-sweeps.md b/content/guides/models/sweeps/pause-resume-and-cancel-sweeps.md index a66ecaa82..53b47e7c5 100644 --- a/content/guides/models/sweeps/pause-resume-and-cancel-sweeps.md +++ b/content/guides/models/sweeps/pause-resume-and-cancel-sweeps.md @@ -46,7 +46,7 @@ Cancel a sweep to kill all running runs and stop running new runs. Use the `wand wandb sweep --cancel entity/project/sweep_ID ``` -For a full list of CLI command options, see the [wandb sweep](../../ref/cli/wandb-sweep.md) CLI Reference Guide. +For a full list of CLI command options, see the [wandb sweep](../../ref/cli/wandb-sweep/) CLI Reference Guide. ### Pause, resume, stop, and cancel a sweep across multiple agents @@ -64,4 +64,4 @@ Specify the `--resume` flag along with the Sweep ID to resume the Sweep across y wandb sweep --resume entity/project/sweep_ID ``` -For more information on how to parallelize W&B agents, see [Parallelize agents](./parallelize-agents.md). \ No newline at end of file +For more information on how to parallelize W&B agents, see [Parallelize agents](./parallelize-agents/). \ No newline at end of file diff --git a/content/guides/models/sweeps/start-sweep-agents.md b/content/guides/models/sweeps/start-sweep-agents.md index ac80446c7..f61acd555 100644 --- a/content/guides/models/sweeps/start-sweep-agents.md +++ b/content/guides/models/sweeps/start-sweep-agents.md @@ -25,7 +25,7 @@ Where: Provide the name of the function the W&B Sweep will execute if you start a W&B Sweep agent within a Jupyter Notebook or Python script. -The proceeding code snippets demonstrate how to start an agent with W&B. We assume you already have a configuration file and you have already initialized a W&B Sweep. For more information about how to define a configuration file, see [Define sweep configuration](./define-sweep-configuration.md). +The proceeding code snippets demonstrate how to start an agent with W&B. We assume you already have a configuration file and you have already initialized a W&B Sweep. For more information about how to define a configuration file, see [Define sweep configuration](./define-sweep-configuration/). {{< tabpane text=true >}} {{% tab header="CLI" %}} @@ -49,14 +49,14 @@ wandb.agent(sweep_id=sweep_id, function=function_name) ### Stop W&B agent {{% alert color="secondary" %}} -Random and Bayesian searches will run forever. You must stop the process from the command line, within your python script, or the [Sweeps UI](./visualize-sweep-results.md). +Random and Bayesian searches will run forever. You must stop the process from the command line, within your python script, or the [Sweeps UI](./visualize-sweep-results/). {{% /alert %}} -Optionally specify the number of W&B Runs a Sweep agent should try. The following code snippets demonstrate how to set a maximum number of [W&B Runs](../../ref/python/run.md) with the CLI and within a Jupyter Notebook, Python script. +Optionally specify the number of W&B Runs a Sweep agent should try. The following code snippets demonstrate how to set a maximum number of [W&B Runs](../../ref/python/run/) with the CLI and within a Jupyter Notebook, Python script. 
{{< tabpane text=true >}} {{% tab header="Python script or notebook" %}} -First, initialize your sweep. For more information, see [Initialize sweeps](./initialize-sweeps.md). +First, initialize your sweep. For more information, see [Initialize sweeps](./initialize-sweeps/). ``` sweep_id = wandb.sweep(sweep_config) @@ -74,7 +74,7 @@ If you start a new run after the sweep agent has finished, within the same scrip {{% /alert %}} {{% /tab %}} {{% tab header="CLI" %}} -First, initialize your sweep with the [`wandb sweep`](../../ref/cli/wandb-sweep.md) command. For more information, see [Initialize sweeps](./initialize-sweeps.md). +First, initialize your sweep with the [`wandb sweep`](../../ref/cli/wandb-sweep/) command. For more information, see [Initialize sweeps](./initialize-sweeps/). ``` wandb sweep config.yaml diff --git a/content/guides/models/sweeps/troubleshoot-sweeps.md b/content/guides/models/sweeps/troubleshoot-sweeps.md index 765a700e9..c429dc399 100644 --- a/content/guides/models/sweeps/troubleshoot-sweeps.md +++ b/content/guides/models/sweeps/troubleshoot-sweeps.md @@ -60,7 +60,7 @@ Navigate to your CLI and initialize a W&B Sweep with wandb sweep: wandb sweep config.yaml ``` -Make a note of the W&B Sweep ID that is returned. Next, start the Sweep job with [`wandb agent`](../../ref/cli/wandb-agent.md) with the CLI instead of the Python SDK ([`wandb.agent`](../../ref/python/agent.md)). Replace `sweep_ID` in the code snippet below with the Sweep ID that was returned in the previous step: +Make a note of the W&B Sweep ID that is returned. Next, start the Sweep job with [`wandb agent`](../../ref/cli/wandb-agent/) with the CLI instead of the Python SDK ([`wandb.agent`](../../ref/python/agent/)). Replace `sweep_ID` in the code snippet below with the Sweep ID that was returned in the previous step: ```shell wandb agent sweep_ID @@ -75,4 +75,4 @@ wandb: ERROR Error while calling W&B API: anaconda 400 error: {"code": 400, "message": "TypeError: bad operand type for unary -: 'NoneType'"} ``` -Within your YAML file or nested dictionary you specify a key named "metric" to optimize. Ensure that you log (`wandb.log`) this metric. In addition, ensure you use the _exact_ metric name that you defined the sweep to optimize within your Python script or Jupyter Notebook. For more information about configuration files, see [Define sweep configuration](./define-sweep-configuration.md). \ No newline at end of file +Within your YAML file or nested dictionary you specify a key named "metric" to optimize. Ensure that you log (`wandb.log`) this metric. In addition, ensure you use the _exact_ metric name that you defined the sweep to optimize within your Python script or Jupyter Notebook. For more information about configuration files, see [Define sweep configuration](./define-sweep-configuration/). \ No newline at end of file diff --git a/content/guides/models/sweeps/useful-resources.md b/content/guides/models/sweeps/useful-resources.md index b0fa3fb71..840291481 100644 --- a/content/guides/models/sweeps/useful-resources.md +++ b/content/guides/models/sweeps/useful-resources.md @@ -37,4 +37,4 @@ The following how-to-guide demonstrates how to solve real-world problems with W& ### Sweep GitHub repository -W&B advocates open source and welcome contributions from the community. Find the GitHub repository at [https://github.com/wandb/sweeps](https://github.com/wandb/sweeps). 
For information on how to contribute to the W&B open source repo, see the W&B GitHub [Contribution guidelines](https://github.com/wandb/wandb/blob/master/CONTRIBUTING.md).
\ No newline at end of file
+W&B advocates open source and welcomes contributions from the community. Find the GitHub repository at [https://github.com/wandb/sweeps](https://github.com/wandb/sweeps). For information on how to contribute to the W&B open source repo, see the W&B GitHub [Contribution guidelines](https://github.com/wandb/wandb/blob/master/CONTRIBUTING.md).
\ No newline at end of file
diff --git a/content/guides/models/sweeps/visualize-sweep-results.md b/content/guides/models/sweeps/visualize-sweep-results.md
index 92764a284..9ef5f32d2 100644
--- a/content/guides/models/sweeps/visualize-sweep-results.md
+++ b/content/guides/models/sweeps/visualize-sweep-results.md
@@ -8,23 +8,23 @@ title: Visualize sweep results
weight: 7
---
-Visualize the results of your W&B Sweeps with the W&B App UI. Navigate to the W&B App UI at [https://wandb.ai/home](https://wandb.ai/home). Choose the project that you specified when you initialized a W&B Sweep. You will be redirected to your project [workspace](../track/workspaces.md). Select the **Sweep icon** on the left panel (broom icon). From the [Sweep UI](./visualize-sweep-results.md), select the name of your Sweep from the list.
+Visualize the results of your W&B Sweeps with the W&B App UI. Navigate to the W&B App UI at [https://wandb.ai/home](https://wandb.ai/home). Choose the project that you specified when you initialized a W&B Sweep. You will be redirected to your project [workspace](../track/workspaces/). Select the **Sweep icon** on the left panel (broom icon). From the [Sweep UI](./visualize-sweep-results/), select the name of your Sweep from the list.
By default, W&B will automatically create a parallel coordinates plot, a parameter importance plot, and a scatter plot when you start a W&B Sweep job.
{{< img src="/images/sweeps/navigation_sweeps_ui.gif" alt="Animation that shows how to navigate to the Sweep UI interface and view autogenerated plots." >}}
-Parallel coordinates charts summarize the relationship between large numbers of hyperparameters and model metrics at a glance. For more information on parallel coordinates plots, see [Parallel coordinates](../app/features/panels/parallel-coordinates.md).
+Parallel coordinates charts summarize the relationship between large numbers of hyperparameters and model metrics at a glance. For more information on parallel coordinates plots, see [Parallel coordinates](../app/features/panels/parallel-coordinates/).
{{< img src="/images/sweeps/example_parallel_coordiantes_plot.png" alt="Example parallel coordinates plot." >}}
-The scatter plot(left) compares the W&B Runs that were generated during the Sweep. For more information about scatter plots, see [Scatter Plots](../app/features/panels/scatter-plot.md).
+The scatter plot (left) compares the W&B Runs that were generated during the Sweep. For more information about scatter plots, see [Scatter Plots](../app/features/panels/scatter-plot/).
-The parameter importance plot(right) lists the hyperparameters that were the best predictors of, and highly correlated to desirable values of your metrics. For more information parameter importance plots, see [Parameter Importance](../app/features/panels/parameter-importance.md).
+The parameter importance plot (right) lists the hyperparameters that were the best predictors of, and highly correlated to desirable values of your metrics. 
For more information about parameter importance plots, see [Parameter Importance](../app/features/panels/parameter-importance/).
{{< img src="/images/sweeps/scatter_and_parameter_importance.png" alt="Example scatter plot (left) and parameter importance plot (right)." >}}
You can alter the dependent and independent values (x and y axes) that are automatically used. Within each panel there is a pencil icon called **Edit panel**. Choose **Edit panel**. A modal appears. Within the modal, you can alter the behavior of the graph.
-For more information on all default W&B visualization options, see [Panels](../app/features/panels/intro.md). See the [Data Visualization docs](../tables/intro.md) for information on how to create plots from W&B Runs that are not part of a W&B Sweep.
\ No newline at end of file
+For more information on all default W&B visualization options, see [Panels](../app/features/panels/intro/). See the [Data Visualization docs](../tables/intro/) for information on how to create plots from W&B Runs that are not part of a W&B Sweep.
\ No newline at end of file
diff --git a/content/guides/models/sweeps/walkthrough.md b/content/guides/models/sweeps/walkthrough.md
index de2f6eaa1..3db58b511 100644
--- a/content/guides/models/sweeps/walkthrough.md
+++ b/content/guides/models/sweeps/walkthrough.md
@@ -57,7 +57,7 @@ The following sections break down and explains each step in the code sample.
## Set up your training code
Define a training function that takes in hyperparameter values from `wandb.config` and uses them to train a model and return metrics.
-Optionally provide the name of the project where you want the output of the W&B Run to be stored (project parameter in [`wandb.init`](../../ref/python/init.md)). If the project is not specified, the run is put in an "Uncategorized" project.
+Optionally provide the name of the project where you want the output of the W&B Run to be stored (project parameter in [`wandb.init`](../../ref/python/init/)). If the project is not specified, the run is put in an "Uncategorized" project.
{{% alert %}}
Both the sweep and the run must be in the same project. Therefore, the name you provide when you initialize W&B must match the name of the project you provide when you initialize a sweep.
@@ -77,7 +77,7 @@ def main():
```
## Define the search space with a sweep configuration
-Within a dictionary, specify what hyperparameters you want to sweep over and. For more information about configuration options, see [Define sweep configuration](./define-sweep-configuration.md).
+Within a dictionary, specify the hyperparameters you want to sweep over. For more information about configuration options, see [Define sweep configuration](./define-sweep-configuration/).
The following example demonstrates a sweep configuration that uses a random search (`'method':'random'`). The sweep randomly selects a set of values listed in the configuration for the batch size, epoch, and the learning rate.
@@ -98,7 +98,7 @@ sweep_configuration = {
## Initialize the Sweep
-W&B uses a _Sweep Controller_ to manage sweeps on the cloud (standard), locally (local) across one or more machines. For more information about Sweep Controllers, see [Search and stop algorithms locally](./local-controller.md).
+W&B uses a _Sweep Controller_ to manage sweeps on the cloud (standard) or locally (local) across one or more machines. For more information about Sweep Controllers, see [Search and stop algorithms locally](./local-controller/). 
A sweep identification number is returned when you initialize a sweep: @@ -106,11 +106,11 @@ A sweep identification number is returned when you initialize a sweep: sweep_id = wandb.sweep(sweep=sweep_configuration, project="my-first-sweep") ``` -For more information about initializing sweeps, see [Initialize sweeps](./initialize-sweeps.md). +For more information about initializing sweeps, see [Initialize sweeps](./initialize-sweeps/). ## Start the Sweep -Use the [`wandb.agent`](../../ref/python/agent.md) API call to start a sweep. +Use the [`wandb.agent`](../../ref/python/agent/) API call to start a sweep. ```python wandb.agent(sweep_id, function=main, count=10) @@ -118,11 +118,11 @@ wandb.agent(sweep_id, function=main, count=10) ## Visualize results (optional) -Open your project to see your live results in the W&B App dashboard. With just a few clicks, construct rich, interactive charts like [parallel coordinates plots](../app/features/panels/parallel-coordinates.md),[ parameter importance analyzes](../app/features/panels/parameter-importance.md), and [more](../app/features/panels/intro.md). +Open your project to see your live results in the W&B App dashboard. With just a few clicks, construct rich, interactive charts like [parallel coordinates plots](../app/features/panels/parallel-coordinates/),[ parameter importance analyzes](../app/features/panels/parameter-importance/), and [more](../app/features/panels/intro/). {{< img src="/images/sweeps/quickstart_dashboard_example.png" alt="Sweeps Dashboard example" >}} -For more information about how to visualize results, see [Visualize sweep results](./visualize-sweep-results.md). For an example dashboard, see this sample [Sweeps Project](https://wandb.ai/anmolmann/pytorch-cnn-fashion/sweeps/pmqye6u3). +For more information about how to visualize results, see [Visualize sweep results](./visualize-sweep-results/). For an example dashboard, see this sample [Sweeps Project](https://wandb.ai/anmolmann/pytorch-cnn-fashion/sweeps/pmqye6u3). ## Stop the agent (optional) diff --git a/content/guides/models/track/_index.md b/content/guides/models/track/_index.md index 71f623f1e..4ec4dc975 100644 --- a/content/guides/models/track/_index.md +++ b/content/guides/models/track/_index.md @@ -12,20 +12,20 @@ cascade: --- {{< cta-button productLink="https://wandb.ai/stacey/deep-drive/workspace?workspace=user-lavanyashukla" colabLink="https://colab.research.google.com/github/wandb/examples/blob/master/colabs/intro/Intro_to_Weights_%26_Biases.ipynb" >}} -Track machine learning experiments with a few lines of code. You can then review the results in an [interactive dashboard](../track/workspaces.md) or export your data to Python for programmatic access using our [Public API](../../ref/python/public-api/README.md). +Track machine learning experiments with a few lines of code. You can then review the results in an [interactive dashboard](../track/workspaces/) or export your data to Python for programmatic access using our [Public API](../../ref/python/public-api/README/). -Utilize W&B Integrations if you use popular frameworks such as [PyTorch](../integrations/pytorch.md), [Keras](../integrations/keras.md), or [Scikit](../integrations/scikit.md). See our [Integration guides](../integrations/intro.md) for a for a full list of integrations and information on how to add W&B to your code. +Utilize W&B Integrations if you use popular frameworks such as [PyTorch](../integrations/pytorch/), [Keras](../integrations/keras/), or [Scikit](../integrations/scikit/). 
See our [Integration guides](../integrations/intro/) for a full list of integrations and information on how to add W&B to your code.
{{< img src="/images/experiments/experiments_landing_page.png" alt="" >}}
-The image above shows an example dashboard where you can view and compare metrics across multiple [runs](../runs/intro.md).
+The image above shows an example dashboard where you can view and compare metrics across multiple [runs](../runs/intro/).
## How it works
Track a machine learning experiment with a few lines of code:
-1. Create a [W&B run](../runs/intro.md).
-2. Store a dictionary of hyperparameters, such as learning rate or model type, into your configuration ([`wandb.config`](./config.md)).
-3. Log metrics ([`wandb.log()`](./log/intro.md)) over time in a training loop, such as accuracy and loss.
+1. Create a [W&B run](../runs/intro/).
+2. Store a dictionary of hyperparameters, such as learning rate or model type, into your configuration ([`wandb.config`](./config/)).
+3. Log metrics ([`wandb.log()`](./log/intro/)) over time in a training loop, such as accuracy and loss.
4. Save outputs of a run, like the model weights or a table of predictions.
The following pseudocode demonstrates a common W&B Experiment tracking workflow:
@@ -53,10 +53,10 @@ wandb.log_artifact(model)
Depending on your use case, explore the following resources to get started with W&B Experiments:
-* Read the [W&B Quickstart](../../quickstart.md) for a step-by-step outline of the W&B Python SDK commands you could use to create, track, and use a dataset artifact.
+* Read the [W&B Quickstart](../../quickstart/) for a step-by-step outline of the W&B Python SDK commands you could use to create, track, and use a dataset artifact.
* Explore this chapter to learn how to:
* Create an experiment
* Configure experiments
* Log data from experiments
* View results from experiments
-* Explore the [W&B Python Library](../../ref/python/README.md) within the [W&B API Reference Guide](../../ref/README.md). \ No newline at end of file
+* Explore the [W&B Python Library](../../ref/python/README/) within the [W&B API Reference Guide](../../ref/README/). \ No newline at end of file
diff --git a/content/guides/models/track/config.md b/content/guides/models/track/config.md
index 1ccdf0014..4e5028a9f 100644
--- a/content/guides/models/track/config.md
+++ b/content/guides/models/track/config.md
@@ -179,9 +179,9 @@ wandb.config.update({"lr": 0.1, "channels": 16})
```
### Set the configuration after your Run has finished
-Use the [W&B Public API](../../ref/python/public-api/README.md) to update your config (or anything else about from a complete Run) after your Run. This is particularly useful if you forgot to log a value during a Run.
+Use the [W&B Public API](../../ref/python/public-api/README/) to update your config (or anything else about a completed Run) after your Run. This is particularly useful if you forgot to log a value during a Run.
-Provide your `entity`, `project name`, and the `Run ID` to update your configuration after a Run has finished. Find these values directly from the Run object itself `wandb.run` or from the [W&B App UI](../track/workspaces.md):
+Provide your `entity`, `project name`, and the `Run ID` to update your configuration after a Run has finished. 
Find these values directly from the Run object itself `wandb.run` or from the [W&B App UI](../track/workspaces/): ```python api = wandb.Api() diff --git a/content/guides/models/track/environment-variables.md b/content/guides/models/track/environment-variables.md index 09d8a9fcc..86d5ad36e 100644 --- a/content/guides/models/track/environment-variables.md +++ b/content/guides/models/track/environment-variables.md @@ -37,7 +37,7 @@ Use these optional environment variables to do things like set up authentication | --------------------------- | ---------- | | **WANDB_ANONYMOUS** | Set this to `allow`, `never`, or `must` to let users create anonymous runs with secret urls. | | **WANDB_API_KEY** | Sets the authentication key associated with your account. You can find your key on [your settings page](https://app.wandb.ai/settings). This must be set if `wandb login` hasn't been run on the remote machine. | -| **WANDB_BASE_URL** | If you're using [wandb/local](../hosting/intro.md) you should set this environment variable to `http://YOUR_IP:YOUR_PORT` | +| **WANDB_BASE_URL** | If you're using [wandb/local](../hosting/intro/) you should set this environment variable to `http://YOUR_IP:YOUR_PORT` | | **WANDB_CACHE_DIR** | This defaults to \~/.cache/wandb, you can override this location with this environment variable | | **WANDB_CONFIG_DIR** | This defaults to \~/.config/wandb, you can override this location with this environment variable | | **WANDB_CONFIG_PATHS** | Comma separated list of yaml files to load into wandb.config. See [config](./config.md#file-based-configs). | @@ -51,14 +51,14 @@ Use these optional environment variables to do things like set up authentication | **WANDB_HOST** | Set this to the hostname you want to see in the wandb interface if you don't want to use the system provided hostname | | **WANDB_IGNORE_GLOBS** | Set this to a comma separated list of file globs to ignore. These files will not be synced to the cloud. | | **WANDB_JOB_NAME** | Specify a name for any jobs created by `wandb`. | -| **WANDB_JOB_TYPE** | Specify the job type, like "training" or "evaluation" to indicate different types of runs. See [grouping](../runs/grouping.md) for more info. | +| **WANDB_JOB_TYPE** | Specify the job type, like "training" or "evaluation" to indicate different types of runs. See [grouping](../runs/grouping/) for more info. | | **WANDB_MODE** | If you set this to "offline" wandb will save your run metadata locally and not sync to the server. If you set this to `disabled` wandb will turn off completely. | | **WANDB_NAME** | The human-readable name of your run. If not set it will be randomly generated for you | | **WANDB_NOTEBOOK_NAME** | If you're running in jupyter you can set the name of the notebook with this variable. We attempt to auto detect this. | | **WANDB_NOTES** | Longer notes about your run. Markdown is allowed and you can edit this later in the UI. | | **WANDB_PROJECT** | The project associated with your run. This can also be set with `wandb init`, but the environmental variable will override the value. | | **WANDB_RESUME** | By default this is set to _never_. If set to _auto_ wandb will automatically resume failed runs. If set to _must_ forces the run to exist on startup. If you want to always generate your own unique ids, set this to _allow_ and always set **WANDB_RUN_ID**. | -| **WANDB_RUN_GROUP** | Specify the experiment name to automatically group runs together. See [grouping](../runs/grouping.md) for more info. 
| +| **WANDB_RUN_GROUP** | Specify the experiment name to automatically group runs together. See [grouping](../runs/grouping/) for more info. | | **WANDB_RUN_ID** | Set this to a globally unique string (per project) corresponding to a single run of your script. It must be no longer than 64 characters. All non-word characters will be converted to dashes. This can be used to resume an existing run in cases of failure. | | **WANDB_SILENT** | Set this to **true** to silence wandb log statements. If this is set all logs will be written to **WANDB_DIR**/debug.log | | **WANDB_SHOW_RUN** | Set this to **true** to automatically open a browser with the run url if your operating system supports it. | diff --git a/content/guides/models/track/jupyter.md b/content/guides/models/track/jupyter.md index 6b90df9df..17e45e7e8 100644 --- a/content/guides/models/track/jupyter.md +++ b/content/guides/models/track/jupyter.md @@ -14,7 +14,7 @@ Use W&B with Jupyter to get interactive visualizations without leaving your note ## Use cases for W&B with Jupyter notebooks 1. **Iterative experimentation**: Run and re-run experiments, tweaking parameters, and have all the runs you do saved automatically to W&B without having to take manual notes along the way. -2. **Code saving**: When reproducing a model, it's hard to know which cells in a notebook ran, and in which order. Turn on code saving on your [settings page](../app/settings-page/intro.md) to save a record of cell execution for each experiment. +2. **Code saving**: When reproducing a model, it's hard to know which cells in a notebook ran, and in which order. Turn on code saving on your [settings page](../app/settings-page/intro/) to save a record of cell execution for each experiment. 3. **Custom analysis**: Once runs are logged to W&B, it's easy to get a dataframe from the API and do custom analysis, then log those results to W&B to save and share in reports. ## Getting started in a notebook @@ -80,7 +80,7 @@ wandb.run ``` {{% alert %}} -Want to know more about what you can do with W&B? Check out our [guide to logging data and media](log/intro.md), learn [how to integrate us with your favorite ML toolkits](../integrations/intro.md), or just dive straight into the [reference docs](../../ref/python/README.md) or our [repo of examples](https://github.com/wandb/examples). +Want to know more about what you can do with W&B? Check out our [guide to logging data and media](log/intro/), learn [how to integrate us with your favorite ML toolkits](../integrations/intro/), or just dive straight into the [reference docs](../../ref/python/README/) or our [repo of examples](https://github.com/wandb/examples). {{% /alert %}} ## Additional Jupyter features in W&B diff --git a/content/guides/models/track/launch.md b/content/guides/models/track/launch.md index 03c49c847..0b0a98632 100644 --- a/content/guides/models/track/launch.md +++ b/content/guides/models/track/launch.md @@ -8,7 +8,7 @@ weight: 1 title: Create an experiment --- -Use the W&B Python SDK to track machine learning experiments. You can then review the results in an interactive dashboard or export your data to Python for programmatic access with the [W&B Public API](../../ref/python/public-api/README.md). +Use the W&B Python SDK to track machine learning experiments. You can then review the results in an interactive dashboard or export your data to Python for programmatic access with the [W&B Public API](../../ref/python/public-api/README/). 
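As a concrete illustration of that programmatic access, a minimal sketch of pulling a finished run back into Python with the Public API might look like the following; the entity, project, and run ID are placeholders.

```python
import wandb

api = wandb.Api()

# Fetch a run by its "<entity>/<project>/<run_id>" path (placeholder values here).
run = api.run("my-entity/cat-classification/abc123")

print(run.config)        # the hyperparameters stored in wandb.config
print(run.summary)       # final summary metrics for the run
history = run.history()  # per-step logged metrics as a pandas DataFrame
print(history.head())
```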
This guide describes how to use W&B building blocks to create a W&B Experiment. @@ -22,7 +22,7 @@ Create a W&B Experiment in four steps: 4. [Log an artifact to W&B](#log-an-artifact-to-wb) ### Initialize a W&B run -At the beginning of your script call, the [`wandb.init()`](../../ref/python/init.md) API to generate a background process to sync and log data as a W&B Run. +At the beginning of your script call, the [`wandb.init()`](../../ref/python/init/) API to generate a background process to sync and log data as a W&B Run. The proceeding code snippet demonstrates how to create a new W&B project named `“cat-classification”`. A note `“My first experiment”` was added to help identify this run. Tags `“baseline”` and `“paper1”` are included to remind us that this run is a baseline experiment intended for a future paper publication. @@ -37,7 +37,7 @@ run = wandb.init( tags=["baseline", "paper1"], ) ``` -A [Run](../../ref/python/run.md) object is returned when you initialize W&B with `wandb.init()`. Additionally, W&B creates a local directory where all logs and files are saved and streamed asynchronously to a W&B server. +A [Run](../../ref/python/run/) object is returned when you initialize W&B with `wandb.init()`. Additionally, W&B creates a local directory where all logs and files are saved and streamed asynchronously to a W&B server. {{% alert %}} Note: Runs are added to pre-existing projects if that project already exists when you call wandb.init(). For example, if you already have a project called `“cat-classification”`, that project will continue to exist and not be deleted. Instead, a new run is added to that project. @@ -50,10 +50,10 @@ Save a dictionary of hyperparameters such as learning rate or model type. The mo #  2. Capture a dictionary of hyperparameters wandb.config = {"epochs": 100, "learning_rate": 0.001, "batch_size": 128} ``` -For more information on how to configure an experiment, see [Configure Experiments](./config.md). +For more information on how to configure an experiment, see [Configure Experiments](./config/). ### Log metrics inside your training loop -Log metrics during each `for` loop (epoch), the accuracy and loss values are computed and logged to W&B with [`wandb.log()`](../../ref/python/log.md). By default, when you call wandb.log it appends a new step to the history object and updates the summary object. +Log metrics during each `for` loop (epoch), the accuracy and loss values are computed and logged to W&B with [`wandb.log()`](../../ref/python/log/). By default, when you call wandb.log it appends a new step to the history object and updates the summary object. The following code example shows how to log metrics with `wandb.log`. @@ -72,14 +72,14 @@ for epoch in range(wandb.config.epochs): # model performance wandb.log({"accuracy": accuracy, "loss": loss}) ``` -For more information on different data types you can log with W&B, see [Log Data During Experiments](./log/intro.md). +For more information on different data types you can log with W&B, see [Log Data During Experiments](./log/intro/). ### Log an artifact to W&B Optionally log a W&B Artifact. Artifacts make it easy to version datasets and models. ```python wandb.log_artifact(model) ``` -For more information about Artifacts, see the [Artifacts Chapter](../artifacts/intro.md). For more information about versioning models, see [Model Management](../model_registry/intro.md). +For more information about Artifacts, see the [Artifacts Chapter](../artifacts/intro/). 
For more information about versioning models, see [Model Management](../model_registry/intro/). ### Putting it all together @@ -113,11 +113,11 @@ wandb.save("model.onnx") ``` ## Next steps: Visualize your experiment -Use the W&B Dashboard as a central place to organize and visualize results from your machine learning models. With just a few clicks, construct rich, interactive charts like [parallel coordinates plots](../app/features/panels/parallel-coordinates.md),[ parameter importance analyzes](../app/features/panels/parameter-importance.md), and [more](../app/features/panels/intro.md). +Use the W&B Dashboard as a central place to organize and visualize results from your machine learning models. With just a few clicks, construct rich, interactive charts like [parallel coordinates plots](../app/features/panels/parallel-coordinates/),[ parameter importance analyzes](../app/features/panels/parameter-importance/), and [more](../app/features/panels/intro/). {{< img src="/images/sweeps/quickstart_dashboard_example.png" alt="Quickstart Sweeps Dashboard example" >}} -For more information on how to view experiments and specific runs, see [Visualize results from experiments](../track/workspaces.md). +For more information on how to view experiments and specific runs, see [Visualize results from experiments](../track/workspaces/). ## Best practices @@ -145,4 +145,4 @@ wandb.init( ) ``` -For more information about available parameters when defining a W&B Experiment, see the [`wandb.init`](../../ref/python/init.md) API docs in the [API Reference Guide](../../ref/python/README.md). \ No newline at end of file +For more information about available parameters when defining a W&B Experiment, see the [`wandb.init`](../../ref/python/init/) API docs in the [API Reference Guide](../../ref/python/README/). \ No newline at end of file diff --git a/content/guides/models/track/limits.md b/content/guides/models/track/limits.md index 00da7c5e4..347e198f0 100644 --- a/content/guides/models/track/limits.md +++ b/content/guides/models/track/limits.md @@ -117,7 +117,7 @@ for step in range(1000000): ) # Commit batched, per-step metrics together ``` - + {{% alert %}} W&B continues to accept your logged data but pages may load more slowly if you exceed guidelines. @@ -154,7 +154,7 @@ with f as open("large_config.json", "r"): For faster loading times, keep the total number of runs in a single project under 10,000. Large run counts can slow down project workspaces and runs table operations, especially when grouping is enabled or runs have a large count of distinct metrics. -If you find that you or your team are frequently accessing the same set of runs (for example, recent runs), consider [bulk moving _other_ runs](../runs/manage-runs.md) to a new project used as an archive, leaving a smaller set of runs in your working project. +If you find that you or your team are frequently accessing the same set of runs (for example, recent runs), consider [bulk moving _other_ runs](../runs/manage-runs/) to a new project used as an archive, leaving a smaller set of runs in your working project. ### Section count @@ -202,7 +202,7 @@ The preceding table describes rate limit HTTP headers: ### Rate limits on metric logging API -The `wandb.log` calls in your script utilize a metrics logging API to log your training data to W&B. This API is engaged through either online or [offline syncing](../../ref/cli/wandb-sync.md). In either case, it imposes a rate limit quota limit in a rolling time window. 
This includes limits on total request size and request rate, where latter refers to the number of requests in a time duration. +The `wandb.log` calls in your script utilize a metrics logging API to log your training data to W&B. This API is engaged through either online or [offline syncing](../../ref/cli/wandb-sync/). In either case, it imposes a rate limit quota limit in a rolling time window. This includes limits on total request size and request rate, where latter refers to the number of requests in a time duration. W&B applies rate limits per W&B project. So if you have 3 projects in a team, each project has its own rate limit quota. Users on [Teams and Enterprise plans](https://wandb.ai/site/pricing) have higher rate limits than those on the Free plan. @@ -221,7 +221,7 @@ if epoch % 5 == 0: # Log metrics every 5 epochs wandb.log({"acc": accuracy, "loss": loss}) ``` -- Manual data syncing: W&B store your run data locally if you are rate limited. You can manually sync your data with the command `wandb sync `. For more details, see the [`wandb sync`](../../ref/cli/wandb-sync.md) reference. +- Manual data syncing: W&B store your run data locally if you are rate limited. You can manually sync your data with the command `wandb sync `. For more details, see the [`wandb sync`](../../ref/cli/wandb-sync/) reference. ### Rate limits on GraphQL API diff --git a/content/guides/models/track/log/_index.md b/content/guides/models/track/log/_index.md index 6c171ac60..ac433afca 100644 --- a/content/guides/models/track/log/_index.md +++ b/content/guides/models/track/log/_index.md @@ -11,13 +11,13 @@ cascade: - url: guides/track/log/:filename --- -Log a dictionary of metrics, media, or custom objects to a step with the W&B Python SDK. W&B collects the key-value pairs during each step and stores them in one unified dictionary each time you log data with `wandb.log()`. Data logged from your script is saved locally to your machine in a directory called `wandb`, then synced to the W&B cloud or your [private server](../../hosting/intro.md). +Log a dictionary of metrics, media, or custom objects to a step with the W&B Python SDK. W&B collects the key-value pairs during each step and stores them in one unified dictionary each time you log data with `wandb.log()`. Data logged from your script is saved locally to your machine in a directory called `wandb`, then synced to the W&B cloud or your [private server](../../hosting/intro/). {{% alert %}} Key-value pairs are stored in one unified dictionary only if you pass the same value for each step. W&B writes all of the collected keys and values to memory if you log a different value for `step`. {{% /alert %}} -Each call to `wandb.log` is a new `step` by default. W&B uses steps as the default x-axis when it creates charts and panels. You can optionally create and use a custom x-axis or capture a custom summary metric. For more information, see [Customize log axes](./customize-logging-axes.md). +Each call to `wandb.log` is a new `step` by default. W&B uses steps as the default x-axis when it creates charts and panels. You can optionally create and use a custom x-axis or capture a custom summary metric. For more information, see [Customize log axes](./customize-logging-axes/). ## How does W&B work? Read the following sections in this order if you are a first-time user of W&B and you are interested in training, tracking, and visualizing machine learning models and experiments: -1. Learn about [runs](./runs/intro/), W&B's basic unit of computation. -2. 
Create and track machine learning experiments with [Experiments](./track/intro/). -3. Discover W&B's flexible and lightweight building block for dataset and model versioning with [Artifacts](./artifacts/intro/). -4. Automate hyperparameter search and explore the space of possible models with [Sweeps](./sweeps/intro/). -5. Manage the model lifecycle from training to production with [Model Registry](./model_registry/intro/). -6. Visualize predictions across model versions with our [Data Visualization](./tables/intro/) guide. -7. Organize runs, embed and automate visualizations, describe your findings, and share updates with collaborators with [Reports](./reports/intro/). +1. Learn about [runs](./runs/), W&B's basic unit of computation. +2. Create and track machine learning experiments with [Experiments](./track/). +3. Discover W&B's flexible and lightweight building block for dataset and model versioning with [Artifacts](./artifacts/). +4. Automate hyperparameter search and explore the space of possible models with [Sweeps](./sweeps/). +5. Manage the model lifecycle from training to production with [Model Registry](./model_registry/). +6. Visualize predictions across model versions with our [Data Visualization](./tables/) guide. +7. Organize runs, embed and automate visualizations, describe your findings, and share updates with collaborators with [Reports](./reports/). diff --git a/content/guides/core/artifacts/_index.md b/content/guides/core/artifacts/_index.md index f57931856..151083f34 100644 --- a/content/guides/core/artifacts/_index.md +++ b/content/guides/core/artifacts/_index.md @@ -15,10 +15,10 @@ weight: 1 {{< cta-button productLink="https://wandb.ai/wandb/arttest/artifacts/model/iv3_trained/5334ab69740f9dda4fed/lineage" colabLink="https://colab.research.google.com/github/wandb/examples/blob/master/colabs/wandb-artifacts/Artifact_fundamentals.ipynb" >}} -Use W&B Artifacts to track and version data as the inputs and outputs of your [W&B Runs](../runs/intro/). For example, a model training run might take in a dataset as input and produce a trained model as output. You can log hyperparameters, metadatra, and metrics to a run, and you can use an artifact to log, track, and version the dataset used to train the model as input and another artifact for the resulting model checkpoints as output. +Use W&B Artifacts to track and version data as the inputs and outputs of your [W&B Runs](../runs/). For example, a model training run might take in a dataset as input and produce a trained model as output. You can log hyperparameters, metadatra, and metrics to a run, and you can use an artifact to log, track, and version the dataset used to train the model as input and another artifact for the resulting model checkpoints as output. ## Use cases -You can use artifacts throughout your entire ML workflow as inputs and outputs of [runs](../runs/intro/). You can use datasets, models, or even other artifacts as inputs for processing. +You can use artifacts throughout your entire ML workflow as inputs and outputs of [runs](../runs/). You can use datasets, models, or even other artifacts as inputs for processing. 
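As a rough sketch of that input/output pattern (the project, file paths, and artifact name below are placeholders), one run can log a dataset artifact as its output and a later run can declare the same artifact as its input:

```python
import wandb

# Run 1: produce a dataset artifact as the run's output
with wandb.init(project="artifact-example", job_type="upload-data") as run:
    dataset = wandb.Artifact(name="my-dataset", type="dataset")
    dataset.add_file("data/train.csv")  # placeholder local file
    run.log_artifact(dataset)

# Run 2: consume the same dataset artifact as the run's input
with wandb.init(project="artifact-example", job_type="train") as run:
    dataset = run.use_artifact("my-dataset:latest")
    data_dir = dataset.download()
    # ... train a model using the files under data_dir ...
```

Declaring the artifact with `use_artifact()` is what records the lineage edge between the two runs.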
{{< img src="/images/artifacts/artifacts_landing_page2.png" >}} @@ -26,7 +26,7 @@ You can use artifacts throughout your entire ML workflow as inputs and outputs o |------------------------|-----------------------------|------------------------------| | Model Training | Dataset (training and validation data) | Trained Model | | Dataset Pre-Processing | Dataset (raw data) | Dataset (pre-processed data) | -| Model Evaluation | Model + Dataset (test data) | [W&B Table](../tables/intro/) | +| Model Evaluation | Model + Dataset (test data) | [W&B Table](../tables/) | | Model Optimization | Model | Optimized Model | @@ -37,7 +37,7 @@ The proceeding code snippets are meant to be run in order. ## Create an artifact Create an artifact with four lines of code: -1. Create a [W&B run](../runs/intro/). +1. Create a [W&B run](../runs/). 2. Create an artifact object with the [`wandb.Artifact`](../../ref/python/artifact/) API. 3. Add one or more files, such as a model file or dataset, to your artifact object. 4. Log your artifact to W&B. @@ -83,5 +83,5 @@ You can pass a custom path into the `root` [parameter](../../ref/python/artifact ## Next steps * Learn how to [version](./create-a-new-artifact-version/), [update](./update-an-artifact/), or [delete](./delete-artifacts/) artifacts. * Learn how to trigger downstream workflows in response to changes to your artifacts with [artifact automation](./project-scoped-automations/). -* Learn about the [model registry](../model_registry/intro/), a space that houses trained models. +* Learn about the [model registry](../model_registry/), a space that houses trained models. * Explore the [Python SDK](../../ref/python/artifact/) and [CLI](../../ref/cli/wandb-artifact/README/) reference guides. \ No newline at end of file diff --git a/content/guides/core/artifacts/artifacts-walkthrough.md b/content/guides/core/artifacts/artifacts-walkthrough.md index fad453afb..6727a4849 100644 --- a/content/guides/core/artifacts/artifacts-walkthrough.md +++ b/content/guides/core/artifacts/artifacts-walkthrough.md @@ -5,7 +5,7 @@ description: >- displayed_sidebar: default title: "Tutorial: Create, track, and use a dataset artifact" --- -This walkthrough demonstrates how to create, track, and use a dataset artifact from [W&B Runs](../runs/intro/). +This walkthrough demonstrates how to create, track, and use a dataset artifact from [W&B Runs](../runs/). ## 1. Log into W&B diff --git a/content/guides/core/artifacts/create-a-new-artifact-version.md b/content/guides/core/artifacts/create-a-new-artifact-version.md index 6e1799938..79930c6bf 100644 --- a/content/guides/core/artifacts/create-a-new-artifact-version.md +++ b/content/guides/core/artifacts/create-a-new-artifact-version.md @@ -9,7 +9,7 @@ title: Create an artifact version weight: 6 --- -Create a new artifact version with a single [run](../runs/intro/) or collaboratively with distributed runs. You can optionally create a new artifact version from a previous version known as an [incremental artifact](#create-a-new-artifact-version-from-an-existing-version). +Create a new artifact version with a single [run](../runs/) or collaboratively with distributed runs. You can optionally create a new artifact version from a previous version known as an [incremental artifact](#create-a-new-artifact-version-from-an-existing-version). {{% alert %}} We recommend that you create an incremental artifact when you need to apply changes to a subset of files in an artifact, where the size of the original artifact is significantly larger. 
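A minimal sketch of that incremental workflow, assuming an existing `my-dataset` artifact and the `new_draft()` helper on a fetched artifact version (file paths are placeholders):

```python
import wandb

with wandb.init(project="artifact-example", job_type="update-data") as run:
    # Fetch the current version and start a draft based on it
    dataset = run.use_artifact("my-dataset:latest")
    draft = dataset.new_draft()

    # Change only the entries that need to change
    draft.add_file("data/new_samples.csv")   # placeholder file to add
    draft.remove("data/stale_samples.csv")   # placeholder entry to drop

    # Logging the draft creates the next version of my-dataset
    run.log_artifact(draft)
```

Because artifact entries are content-addressed, files carried over from the previous version are not re-uploaded.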
diff --git a/content/guides/core/artifacts/manage-data/storage.md b/content/guides/core/artifacts/manage-data/storage.md index 5e139a120..f9c93b093 100644 --- a/content/guides/core/artifacts/manage-data/storage.md +++ b/content/guides/core/artifacts/manage-data/storage.md @@ -9,7 +9,7 @@ title: Manage artifact storage and memory allocation W&B stores artifact files in a private Google Cloud Storage bucket located in the United States by default. All files are encrypted at rest and in transit. -For sensitive files, we recommend you set up [Private Hosting](../hosting/intro/) or use [reference artifacts](./track-external-files/). +For sensitive files, we recommend you set up [Private Hosting](../hosting/) or use [reference artifacts](./track-external-files/). During training, W&B locally saves logs, artifacts, and configuration files in the following local directories: diff --git a/content/guides/core/reports/_index.md b/content/guides/core/reports/_index.md index e90ed2746..fab5f591c 100644 --- a/content/guides/core/reports/_index.md +++ b/content/guides/core/reports/_index.md @@ -12,7 +12,7 @@ cascade: --- -{{< cta-button productLink="https://wandb.ai/stacey/deep-drive/reports/The-View-from-the-Driver-s-Seat--Vmlldzo1MTg5NQ?utm_source=fully_connected&utm_medium=blog&utm_campaign=view+from+the+drivers+seat" colabLink="https://colab.research.google.com/github/wandb/examples/blob/master/colabs/intro/Report_API_Quickstart.ipynb" >}} +{{< cta-button productLink="https://wandb.ai/stacey/deep-drive/reports/The-View-from-the-Driver-s-Seat--Vmlldzo1MTg5NQ?utm_source=fully_connected&utm_medium=blog&utm_campaign=view+from+the+drivers+seat" colabLink="https://colab.research.google.com/github/wandb/examples/blob/master/colabs/Report_API_Quickstart.ipynb" >}} Use W&B Reports to: - Organize Runs. diff --git a/content/guides/core/reports/clone-and-export-reports.md b/content/guides/core/reports/clone-and-export-reports.md index 6af1c9502..c377499e3 100644 --- a/content/guides/core/reports/clone-and-export-reports.md +++ b/content/guides/core/reports/clone-and-export-reports.md @@ -26,7 +26,7 @@ Clone a report to reuse a project's template and format. Cloned projects are vis {{% tab header="Python SDK" value="python"%}} Load a Report from a URL to use it as a template. diff --git a/content/guides/core/reports/create-a-report.md b/content/guides/core/reports/create-a-report.md index 21bcd69cd..3ad6bbed1 100644 --- a/content/guides/core/reports/create-a-report.md +++ b/content/guides/core/reports/create-a-report.md @@ -11,7 +11,7 @@ weight: 10 Create a report interactively with the W&B App UI or programmatically with the W&B Python SDK. {{% alert %}} -See this [Google Colab for an example](https://colab.research.google.com/github/wandb/examples/blob/master/colabs/intro/Report_API_Quickstart.ipynb). +See this [Google Colab for an example](https://colab.research.google.com/github/wandb/examples/blob/master/colabs/Report_API_Quickstart.ipynb). {{% /alert %}} {{< tabpane text=true >}} diff --git a/content/guides/core/reports/cross-project-reports.md b/content/guides/core/reports/cross-project-reports.md index fc395c7fa..886f76614 100644 --- a/content/guides/core/reports/cross-project-reports.md +++ b/content/guides/core/reports/cross-project-reports.md @@ -24,7 +24,7 @@ Share a view-only link to a report that is in a private project or team project. 
{{< img src="/images/reports/magic-links.gif" alt="" >}} -View-only report links add a secret access token to the URL, so anyone who opens the link can view the page. Anyone can use the magic link to view the report without logging in first. For customers on [W&B Local](../hosting/intro/) private cloud installations, these links remain behind your firewall, so only members of your team with access to your private instance _and_ access to the view-only link can view the report. +View-only report links add a secret access token to the URL, so anyone who opens the link can view the page. Anyone can use the magic link to view the report without logging in first. For customers on [W&B Local](../hosting/) private cloud installations, these links remain behind your firewall, so only members of your team with access to your private instance _and_ access to the view-only link can view the report. In **view-only mode**, someone who is not logged in can see the charts and mouse over to see tooltips of values, zoom in and out on charts, and scroll through columns in the table. When in view mode, they cannot create new charts or new table queries to explore the data. View-only visitors to the report link won't be able to click a run to get to the run page. Also, the view-only visitors would not be able to see the share modal but instead would see a tooltip on hover which says: `Sharing not available for view only access`. diff --git a/content/guides/core/tables/_index.md b/content/guides/core/tables/_index.md index 0f3f8487c..65e0684f4 100644 --- a/content/guides/core/tables/_index.md +++ b/content/guides/core/tables/_index.md @@ -32,7 +32,7 @@ A Table is a two-dimensional grid of data where each column has a single type of Log a table with a few lines of code: -- [`wandb.init()`](../../ref/python/init/): Create a [run](../runs/intro/) to track results. +- [`wandb.init()`](../../ref/python/init/): Create a [run](../runs/) to track results. - [`wandb.Table()`](../../ref/python/data-types/table/): Create a new table object. - `columns`: Set the column names. - `data`: Set the contents of the table. diff --git a/content/guides/core/tables/tables-walkthrough.md b/content/guides/core/tables/tables-walkthrough.md index 44bd84595..e0c8d80be 100644 --- a/content/guides/core/tables/tables-walkthrough.md +++ b/content/guides/core/tables/tables-walkthrough.md @@ -18,7 +18,7 @@ Log a table with W&B. You can either construct a new table or pass a Pandas Data {{< tabpane text=true >}} {{% tab header="Construct a table" value="construct" %}} To construct and log a new Table, you will use: -- [`wandb.init()`](../../ref/python/init/): Create a [run](../runs/intro/) to track results. +- [`wandb.init()`](../../ref/python/init/): Create a [run](../runs/) to track results. - [`wandb.Table()`](../../ref/python/data-types/table/): Create a new table object. - `columns`: Set the column names. - `data`: Set the contents of each row. diff --git a/content/guides/hosting/iam/_index.md b/content/guides/hosting/iam/_index.md index 14843a292..c5a02660e 100644 --- a/content/guides/hosting/iam/_index.md +++ b/content/guides/hosting/iam/_index.md @@ -35,4 +35,4 @@ For more information, see [Add and manage teams](./manage-organization.md#add-an A *Project* is a subscope within a team, that maps to an actual AI project with specific intended outcomes. You may have more than one project within a team. Each project has a visibility mode which determines who can access it. 
-Every project is comprised of [Workspaces](../../track/workspaces/) and [Reports](../../reports/intro/), and is linked to relevant [Artifacts](../../artifacts/intro/), [Sweeps](../../sweeps/intro/), [Launch Jobs](../../launch/intro/) and [Automations](../../artifacts/project-scoped-automations/). \ No newline at end of file +Every project is comprised of [Workspaces](../../track/workspaces/) and [Reports](../../reports/), and is linked to relevant [Artifacts](../../artifacts/), [Sweeps](../../sweeps/), [Launch Jobs](../../launch/) and [Automations](../../artifacts/project-scoped-automations/). \ No newline at end of file diff --git a/content/guides/integrations/fastai/v1.md b/content/guides/integrations/fastai/v1.md index dfc919a19..aeb8e1d53 100644 --- a/content/guides/integrations/fastai/v1.md +++ b/content/guides/integrations/fastai/v1.md @@ -8,7 +8,7 @@ title: fastai v1 {{% alert %}} This documentation is for fastai v1. -If you use the current version of fastai, you should refer to [fastai page](../intro/). +If you use the current version of fastai, you should refer to [fastai page](../). {{% /alert %}} For scripts using fastai v1, we have a callback that can automatically log model topology, losses, metrics, weights, gradients, sample predictions and best trained model. diff --git a/content/guides/integrations/huggingface.md b/content/guides/integrations/huggingface.md index a59db0465..efb133cc4 100644 --- a/content/guides/integrations/huggingface.md +++ b/content/guides/integrations/huggingface.md @@ -151,7 +151,7 @@ Using TensorFlow? Just swap the PyTorch `Trainer` for the TensorFlow `TFTrainer` ### 4. Turn on model checkpointing -Using Weights & Biases' [Artifacts](../artifacts/intro/), you can store up to 100GB of models and datasets for free and then use the Weights & Biases [Model Registry](../model_registry/intro/) to register models to prepare them for staging or deployment in your production environment. +Using Weights & Biases' [Artifacts](../artifacts/), you can store up to 100GB of models and datasets for free and then use the Weights & Biases [Model Registry](../model_registry/) to register models to prepare them for staging or deployment in your production environment. Logging your Hugging Face model checkpoints to Artifacts can be done by setting the `WANDB_LOG_MODEL` environment variable to one of `end` or `checkpoint` or `false`: @@ -200,9 +200,9 @@ However, If you pass a [`run_name`](https://huggingface.co/docs/transformers/mai {{% /alert %}} #### W&B Model Registry -Once you have logged your checkpoints to Artifacts, you can then register your best model checkpoints and centralize them across your team using the Weights & Biases **[Model Registry](../model_registry/intro/)**. Here you can organize your best models by task, manage model lifecycle, facilitate easy tracking and auditing throughout the ML lifecyle, and [automate](/guides/artifacts/project-scoped-automations/#create-a-webhook-automation) downstream actions with webhooks or jobs. +Once you have logged your checkpoints to Artifacts, you can then register your best model checkpoints and centralize them across your team using the Weights & Biases **[Model Registry](../model_registry/)**. Here you can organize your best models by task, manage model lifecycle, facilitate easy tracking and auditing throughout the ML lifecyle, and [automate](/guides/artifacts/project-scoped-automations/#create-a-webhook-automation) downstream actions with webhooks or jobs. 
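As a rough sketch of the checkpoint-logging setup described above — the model, dataset, and run name are placeholders, and the snippet assumes `transformers` with its built-in W&B callback:

```python
import os

# Log checkpoints to W&B Artifacts ("end", "checkpoint", or "false")
os.environ["WANDB_LOG_MODEL"] = "checkpoint"

from transformers import Trainer, TrainingArguments

args = TrainingArguments(
    output_dir="outputs",
    report_to="wandb",         # send Trainer logs to W&B
    run_name="bert-baseline",  # placeholder W&B run name
    num_train_epochs=3,
    logging_steps=50,
    save_strategy="epoch",
)

trainer = Trainer(
    model=model,                  # placeholder: your Hugging Face model
    args=args,
    train_dataset=train_dataset,  # placeholder tokenized dataset
)
trainer.train()
```

With `WANDB_LOG_MODEL=checkpoint`, each saved checkpoint is uploaded as an artifact version that can later be linked to the registry.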
-See the [Model Registry](../model_registry/intro/) documentation for how to link a model Artifact to the Model Registry. +See the [Model Registry](../model_registry/) documentation for how to link a model Artifact to the Model Registry. ### 5. Visualise evaluation outputs during training @@ -238,7 +238,7 @@ Once you have logged your training results you can explore your results dynamica ### How do I save the best model? If `load_best_model_at_end=True` is set in the `TrainingArguments` that are passed to the `Trainer`, then W&B will save the best performing model checkpoint to Artifacts. -If you'd like to centralize all your best model versions across your team to organize them by ML task, stage them for production, bookmark them for further evaluation, or kick off downstream Model CI/CD processes then ensure you're saving your model checkpoints to Artifacts. Once logged to Artifacts, these checkpoints can then be promoted to the [Model Registry](../model_registry/intro/). +If you'd like to centralize all your best model versions across your team to organize them by ML task, stage them for production, bookmark them for further evaluation, or kick off downstream Model CI/CD processes then ensure you're saving your model checkpoints to Artifacts. Once logged to Artifacts, these checkpoints can then be promoted to the [Model Registry](../model_registry/). ### How do I load a saved model? diff --git a/content/guides/integrations/hydra.md b/content/guides/integrations/hydra.md index f19fc0b18..2495df5c7 100644 --- a/content/guides/integrations/hydra.md +++ b/content/guides/integrations/hydra.md @@ -57,7 +57,7 @@ $ export WANDB_START_METHOD=thread ## Optimize Hyperparameters -[W&B Sweeps](../../sweeps/intro/) is a highly scalable hyperparameter search platform, which provides interesting insights and visualization about W&B experiments with minimal requirements code real-estate. Sweeps integrates seamlessly with Hydra projects with no-coding requirements. The only thing needed is a configuration file describing the various parameters to sweep over as normal. +[W&B Sweeps](../../sweeps/) is a highly scalable hyperparameter search platform, which provides interesting insights and visualization about W&B experiments with minimal requirements code real-estate. Sweeps integrates seamlessly with Hydra projects with no-coding requirements. The only thing needed is a configuration file describing the various parameters to sweep over as normal. A simple example `sweep.yaml` file would be: diff --git a/content/guides/integrations/keras.md b/content/guides/integrations/keras.md index 824ea016c..c0dabe031 100644 --- a/content/guides/integrations/keras.md +++ b/content/guides/integrations/keras.md @@ -6,7 +6,7 @@ menu: title: Keras weight: 160 --- -{{< cta-button colabLink="https://colab.research.google.com/github/wandb/examples/blob/master/colabs/intro/Intro_to_Weights_%26_Biases_keras.ipynb" >}} +{{< cta-button colabLink="https://colab.research.google.com/github/wandb/examples/blob/master/colabs/Intro_to_Weights_%26_Biases_keras.ipynb" >}} ## Keras callbacks diff --git a/content/guides/integrations/lightgbm.md b/content/guides/integrations/lightgbm.md index 2bd3be48f..24c19c48d 100644 --- a/content/guides/integrations/lightgbm.md +++ b/content/guides/integrations/lightgbm.md @@ -29,7 +29,7 @@ Looking for working code examples? 
Check out [our repository of examples on GitH ## Tuning your hyperparameters with Sweeps -Attaining the maximum performance out of models requires tuning hyperparameters, like tree depth and learning rate. Weights & Biases includes [Sweeps](../sweeps/intro/), a powerful toolkit for configuring, orchestrating, and analyzing large hyperparameter testing experiments. +Attaining the maximum performance out of models requires tuning hyperparameters, like tree depth and learning rate. Weights & Biases includes [Sweeps](../sweeps/), a powerful toolkit for configuring, orchestrating, and analyzing large hyperparameter testing experiments. To learn more about these tools and see an example of how to use Sweeps with XGBoost, check out this interactive Colab notebook. diff --git a/content/guides/integrations/metaflow.md b/content/guides/integrations/metaflow.md index a08778129..4918cd705 100644 --- a/content/guides/integrations/metaflow.md +++ b/content/guides/integrations/metaflow.md @@ -125,7 +125,7 @@ class WandbExampleFlow(FlowSpec): ## Access your data programmatically -You can access the information we've captured in three ways: inside the original Python process being logged using the [`wandb` client library](../../../ref/python/README/), with the [web app UI](../../track/workspaces/), or programmatically using [our Public API](../../../ref/python/public-api/README/). `Parameter`s are saved to W&B's [`config`](../../track/config/) and can be found in the [Overview tab](../../runs/intro.md#overview-tab). `datasets`, `models`, and `others` are saved to [W&B Artifacts](../../artifacts/intro/) and can be found in the [Artifacts tab](../../runs/intro.md#artifacts-tab). Base python types are saved to W&B's [`summary`](../../track/log/intro/) dict and can be found in the Overview tab. See our [guide to the Public API](../../track/public-api-guide/) for details on using the API to get this information programmatically from outside . +You can access the information we've captured in three ways: inside the original Python process being logged using the [`wandb` client library](../../../ref/python/README/), with the [web app UI](../../track/workspaces/), or programmatically using [our Public API](../../../ref/python/public-api/README/). `Parameter`s are saved to W&B's [`config`](../../track/config/) and can be found in the [Overview tab](../../runs/intro.md#overview-tab). `datasets`, `models`, and `others` are saved to [W&B Artifacts](../../artifacts/) and can be found in the [Artifacts tab](../../runs/intro.md#artifacts-tab). Base python types are saved to W&B's [`summary`](../../track/log/) dict and can be found in the Overview tab. See our [guide to the Public API](../../track/public-api-guide/) for details on using the API to get this information programmatically from outside . ### Cheat sheet diff --git a/content/guides/integrations/openai-api.md b/content/guides/integrations/openai-api.md index 7d186b6b4..1ae94a853 100644 --- a/content/guides/integrations/openai-api.md +++ b/content/guides/integrations/openai-api.md @@ -68,7 +68,7 @@ response = openai.ChatCompletion.create(**chat_request_kwargs) ### 3. View your OpenAI API inputs and responses -Click on the W&B [run](../../runs/intro/) link generated by `autolog` in **step 1**. This redirects you to your project workspace in the W&B App. +Click on the W&B [run](../../runs/) link generated by `autolog` in **step 1**. This redirects you to your project workspace in the W&B App. 
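For context, a minimal sketch of what the earlier steps might look like together; it assumes the `wandb.integration.openai` autologger described in step 1 and the pre-1.0 `openai.ChatCompletion` interface shown above, with a placeholder project name and prompt:

```python
import openai
from wandb.integration.openai import autolog

# Step 1: start autologging OpenAI API calls to a W&B project
autolog({"project": "openai-autolog-demo"})  # placeholder project name

# Step 2: make a chat completion request; inputs and responses are logged
chat_request_kwargs = dict(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "Say hello to W&B."}],
)
response = openai.ChatCompletion.create(**chat_request_kwargs)

# Stop autologging when finished
autolog.disable()
```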
Select a run you created to view the trace table, trace timeline and the model architecture of the OpenAI LLM used. diff --git a/content/guides/integrations/openai-fine-tuning.md b/content/guides/integrations/openai-fine-tuning.md index 64dcf850c..0705bf2d4 100644 --- a/content/guides/integrations/openai-fine-tuning.md +++ b/content/guides/integrations/openai-fine-tuning.md @@ -107,7 +107,7 @@ The datasets are visualized as W&B Tables, which allows you to explore, search, OpenAI gives you an id of the fine-tuned model. Since we don't have access to the model weights, the `WandbLogger` creates a `model_metadata.json` file with all the details (hyperparameters, data file ids, etc.) of the model along with the `fine_tuned_model`` id and is logged as a W&B Artifact. -This model (metadata) artifact can further be linked to a model in the [W&B Model Registry](../../model_registry/intro/). +This model (metadata) artifact can further be linked to a model in the [W&B Model Registry](../../model_registry/). {{< img src="/images/integrations/openai_model_metadata.png" alt="" >}} diff --git a/content/guides/integrations/pytorch.md b/content/guides/integrations/pytorch.md index 938db62d6..7955ac41c 100644 --- a/content/guides/integrations/pytorch.md +++ b/content/guides/integrations/pytorch.md @@ -6,7 +6,7 @@ menu: title: PyTorch weight: 300 --- -{{< cta-button colabLink="https://colab.research.google.com/github/wandb/examples/blob/master/colabs/intro/Intro_to_Weights_%26_Biases.ipynb" >}} +{{< cta-button colabLink="https://colab.research.google.com/github/wandb/examples/blob/master/colabs/Intro_to_Weights_%26_Biases.ipynb" >}} PyTorch is one of the most popular frameworks for deep learning in Python, especially among researchers. W&B provides first class support for PyTorch, from logging gradients to profiling your code on the CPU and GPU. @@ -72,7 +72,7 @@ wandb.log({"mnist_predictions": my_table}) {{< img src="/images/integrations/pytorch_example_table.png" alt="The code above generates a table like this one. This model's looking good!" >}} -For more on logging and visualizing datasets and models, check out our [guide to W&B Tables](../tables/intro/). +For more on logging and visualizing datasets and models, check out our [guide to W&B Tables](../tables/). ## Profile PyTorch code diff --git a/content/guides/integrations/spacy.md b/content/guides/integrations/spacy.md index 26c603b6c..34c92a1f2 100644 --- a/content/guides/integrations/spacy.md +++ b/content/guides/integrations/spacy.md @@ -52,7 +52,7 @@ model_log_interval = 1000 | ---------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | | `project_name` | `str`. The name of the W&B Project. The project will be created automatically if it doesn’t exist yet. | | `remove_config_values` | `List[str]` . A list of values to exclude from the config before it is uploaded to W&B. `[]` by default. | -| `model_log_interval` | `Optional int`. `None` by default. If set, [model versioning](../model_registry/intro/) with [Artifacts](../artifacts/intro/)will be enabled. Pass in the number of steps to wait between logging model checkpoints. `None` by default. | +| `model_log_interval` | `Optional int`. `None` by default. If set, [model versioning](../model_registry/) with [Artifacts](../artifacts/)will be enabled. 
Pass in the number of steps to wait between logging model checkpoints. `None` by default. | | `log_dataset_dir` | `Optional str`. If passed a path, the dataset will be uploaded as an Artifact at the beginning of training. `None` by default. | | `entity` | `Optional str` . If passed, the run will be created in the specified entity | | `run_name` | `Optional str` . If specified, the run will be created with the specified name. | @@ -89,4 +89,4 @@ python -m spacy train \ {{< /tabpane >}} -When training begins, a link to your training run's [W&B page](../runs/intro/) will be output which will take you to this run's experiment tracking [dashboard](../track/workspaces/) in the Weights & Biases web UI. \ No newline at end of file +When training begins, a link to your training run's [W&B page](../runs/) will be output which will take you to this run's experiment tracking [dashboard](../track/workspaces/) in the Weights & Biases web UI. \ No newline at end of file diff --git a/content/guides/integrations/torchtune.md b/content/guides/integrations/torchtune.md index 1b15c4279..3c6bf1b18 100644 --- a/content/guides/integrations/torchtune.md +++ b/content/guides/integrations/torchtune.md @@ -123,7 +123,7 @@ This is a fast evolving library, the current metrics are subject to change. If y The torchtune library supports various [checkpoint formats](https://pytorch.org/torchtune/stable/deep_dives/checkpointer.html). Depending on the origin of the model you are using, you should switch to the appropriate [checkpointer class](https://pytorch.org/torchtune/stable/deep_dives/checkpointer.html). -If you want to save the model checkpoints to [W&B Artifacts](../artifacts/intro/), the simplest solution is to override the `save_checkpoint` functions inside the corresponding recipe. +If you want to save the model checkpoints to [W&B Artifacts](../artifacts/), the simplest solution is to override the `save_checkpoint` functions inside the corresponding recipe. Here is an example of how you can override the `save_checkpoint` function to save the model checkpoints to W&B Artifacts. diff --git a/content/guides/integrations/ultralytics.md b/content/guides/integrations/ultralytics.md index 1f8b37cd9..95e09d42a 100644 --- a/content/guides/integrations/ultralytics.md +++ b/content/guides/integrations/ultralytics.md @@ -53,7 +53,7 @@ from wandb.integration.ultralytics import add_wandb_callback from ultralytics import YOLO ``` -Initialize the `YOLO` model of your choice, and invoke the `add_wandb_callback` function on it before performing inference with the model. This ensures that when you perform training, fine-tuning, validation, or inference, it automatically saves the experiment logs and the images, overlaid with both ground-truth and the respective prediction results using the [interactive overlays for computer vision tasks](../track/log/media#image-overlays-in-tables) on W&B along with additional insights in a [`wandb.Table`](../tables/intro/). +Initialize the `YOLO` model of your choice, and invoke the `add_wandb_callback` function on it before performing inference with the model. This ensures that when you perform training, fine-tuning, validation, or inference, it automatically saves the experiment logs and the images, overlaid with both ground-truth and the respective prediction results using the [interactive overlays for computer vision tasks](../track/log/media#image-overlays-in-tables) on W&B along with additional insights in a [`wandb.Table`](../tables/). 
```python # Initialize YOLO Model @@ -76,7 +76,7 @@ Here's how experiments tracked using W&B for an Ultralytics training or fine-tun
YOLO Fine-tuning Experiments
-Here's how epoch-wise validation results are visualized using a [W&B Table](../tables/intro/): +Here's how epoch-wise validation results are visualized using a [W&B Table](../tables/):
WandB Validation Visualization Table
@@ -108,14 +108,14 @@ Download a few images to test the integration on. You can use still images, vide !wget https://raw.githubusercontent.com/wandb/examples/ultralytics/colabs/ultralytics/assets/img5.png ``` -Next, initialize a W&B [run](../runs/intro/) using `wandb.init`. +Next, initialize a W&B [run](../runs/) using `wandb.init`. ```python # Initialize W&B run wandb.init(project="ultralytics", job_type="inference") ``` -Next, initialize your desired `YOLO` model and invoke the `add_wandb_callback` function on it before you perform inference with the model. This ensures that when you perform inference, it automatically logs the images overlaid with your [interactive overlays for computer vision tasks](../track/log/media#image-overlays-in-tables) along with additional insights in a [`wandb.Table`](../tables/intro/). +Next, initialize your desired `YOLO` model and invoke the `add_wandb_callback` function on it before you perform inference with the model. This ensures that when you perform inference, it automatically logs the images overlaid with your [interactive overlays for computer vision tasks](../track/log/media#image-overlays-in-tables) along with additional insights in a [`wandb.Table`](../tables/). ```python # Initialize YOLO Model diff --git a/content/guides/integrations/xgboost.md b/content/guides/integrations/xgboost.md index db25c1a5b..a11830044 100644 --- a/content/guides/integrations/xgboost.md +++ b/content/guides/integrations/xgboost.md @@ -63,7 +63,7 @@ For additional examples, check out the [repository of examples on GitHub](https: ## Tune your hyperparameters with Sweeps -Attaining the maximum performance out of models requires tuning hyperparameters, like tree depth and learning rate. Weights & Biases includes [Sweeps](../sweeps/intro/), a powerful toolkit for configuring, orchestrating, and analyzing large hyperparameter testing experiments. +Attaining the maximum performance out of models requires tuning hyperparameters, like tree depth and learning rate. Weights & Biases includes [Sweeps](../sweeps/), a powerful toolkit for configuring, orchestrating, and analyzing large hyperparameter testing experiments. {{< cta-button colabLink="https://colab.research.google.com/github/wandb/examples/blob/master/colabs/boosting/Using_W%26B_Sweeps_with_XGBoost.ipynb" >}} diff --git a/content/guides/integrations/yolov5.md b/content/guides/integrations/yolov5.md index f58f82488..4bcf09307 100644 --- a/content/guides/integrations/yolov5.md +++ b/content/guides/integrations/yolov5.md @@ -18,7 +18,7 @@ All W&B logging features are compatible with data-parallel multi-GPU training, s {{% /alert %}} ## Track core experiments -Simply by installing `wandb`, you'll activate the built-in W&B [logging features](../track/log/intro/): system metrics, model metrics, and media logged to interactive [Dashboards](../track/workspaces/). +Simply by installing `wandb`, you'll activate the built-in W&B [logging features](../track/log/): system metrics, model metrics, and media logged to interactive [Dashboards](../track/workspaces/). ```python pip install wandb @@ -34,9 +34,9 @@ Just follow the links printed to the standard out by wandb. By passing a few simple command line arguments to YOLO, you can take advantage of even more W&B features. -* Passing a number to `--save_period` will turn on [model versioning](../model_registry/intro/). At the end of every `save_period` epochs, the model weights will be saved to W&B. The best-performing model on the validation set will be tagged automatically. 
+* Passing a number to `--save_period` will turn on [model versioning](../model_registry/). At the end of every `save_period` epochs, the model weights will be saved to W&B. The best-performing model on the validation set will be tagged automatically. * Turning on the `--upload_dataset` flag will also upload the dataset for data versioning. -* Passing a number to `--bbox_interval` will turn on [data visualization](../intro/). At the end of every `bbox_interval` epochs, the outputs of the model on the validation set will be uploaded to W&B. +* Passing a number to `--bbox_interval` will turn on [data visualization](../). At the end of every `bbox_interval` epochs, the outputs of the model on the validation set will be uploaded to W&B. {{< tabpane text=true >}} {{% tab header="Model Versioning Only" value="modelversioning" %}} diff --git a/content/guides/models/_index.md b/content/guides/models/_index.md index 9cc46ee0b..b856436b8 100644 --- a/content/guides/models/_index.md +++ b/content/guides/models/_index.md @@ -13,9 +13,9 @@ W&B Models is the system of record for ML Practitioners who want to organize the With W&B Models, you can: -- Track and visualize all [ML experiments](./track/intro/). -- Optimize and fine-tune models at scale with [hyperparameter sweeps](./sweeps/intro/). -- [Maintain a centralized hub of all models](./model_registry/intro/), with a seamless handoff point to devops and deployment +- Track and visualize all [ML experiments](./track/). +- Optimize and fine-tune models at scale with [hyperparameter sweeps](./sweeps/). +- [Maintain a centralized hub of all models](./model_registry/), with a seamless handoff point to devops and deployment - Configure custom automations that trigger key workflows for [model CI/CD](./model_registry/model-registry-automations/). diff --git a/content/guides/models/app/features/cascade-settings.md b/content/guides/models/app/features/cascade-settings.md index 2bfe3cb52..8bf3574ad 100644 --- a/content/guides/models/app/features/cascade-settings.md +++ b/content/guides/models/app/features/cascade-settings.md @@ -29,7 +29,7 @@ Configure a workspaces layout to define the overall structure of the workspace. {{< img src="/images/app_ui/workspace_layout_settings.png" alt="" >}} -The workspace layout options page shows whether the workspace generates panels automatically or manually. To adjust a workspace's panel generation mode, refer to [Panels](panels/intro/). +The workspace layout options page shows whether the workspace generates panels automatically or manually. To adjust a workspace's panel generation mode, refer to [Panels](panels/). This table describes each workspace layout option. diff --git a/content/guides/models/app/features/custom-charts/walkthrough.md b/content/guides/models/app/features/custom-charts/walkthrough.md index 148dbc566..665bafc18 100644 --- a/content/guides/models/app/features/custom-charts/walkthrough.md +++ b/content/guides/models/app/features/custom-charts/walkthrough.md @@ -12,7 +12,7 @@ Use custom charts to control the data you're loading in to a panel and its visua ## 1. Log data to W&B -First, log data in your script. Use [wandb.config](../../../../guides/track/config/) for single points set at the beginning of training, like hyperparameters. Use [wandb.log()](../../../../guides/track/log/intro/) for multiple points over time, and log custom 2D arrays with `wandb.Table()`. We recommend logging up to 10,000 data points per logged key. +First, log data in your script. 
Use [wandb.config](../../../../guides/track/config/) for single points set at the beginning of training, like hyperparameters. Use [wandb.log()](../../../../guides/track/log/) for multiple points over time, and log custom 2D arrays with `wandb.Table()`. We recommend logging up to 10,000 data points per logged key. ```python # Logging a custom table of data diff --git a/content/guides/models/app/features/panels/_index.md b/content/guides/models/app/features/panels/_index.md index f378c7755..4372d60af 100644 --- a/content/guides/models/app/features/panels/_index.md +++ b/content/guides/models/app/features/panels/_index.md @@ -58,7 +58,7 @@ To add a custom panel to your workspace: 1. Select the type of panel you’d like to create. 1. Follow the prompts to configure the panel. -To learn more about the options for each type of panel, refer to the relevant section below, such as [Line plots](line-plot/intro/) or [Bar plots](bar-plot/). +To learn more about the options for each type of panel, refer to the relevant section below, such as [Line plots](line-plot/) or [Bar plots](bar-plot/). ## Manage panels diff --git a/content/guides/models/app/features/panels/parallel-coordinates.md b/content/guides/models/app/features/panels/parallel-coordinates.md index cda54e963..c00b2f8eb 100644 --- a/content/guides/models/app/features/panels/parallel-coordinates.md +++ b/content/guides/models/app/features/panels/parallel-coordinates.md @@ -12,7 +12,7 @@ Parallel coordinates charts summarize the relationship between large numbers of {{< img src="/images/app_ui/parallel_coordinates.gif" alt="" >}} -* **Axes**: Different hyperparameters from [`wandb.config`](../../../../guides/track/config/) and metrics from [`wandb.log`](../../../../guides/track/log/intro/). +* **Axes**: Different hyperparameters from [`wandb.config`](../../../../guides/track/config/) and metrics from [`wandb.log`](../../../../guides/track/log/). * **Lines**: Each line represents a single run. Mouse over a line to see a tooltip with details about the run. All lines that match the current filters will be shown, but if you turn off the eye, lines will be grayed out. ## Panel Settings diff --git a/content/guides/models/automations/model-registry-automations.md b/content/guides/models/automations/model-registry-automations.md index 6bb542ff1..e0c320c0b 100644 --- a/content/guides/models/automations/model-registry-automations.md +++ b/content/guides/models/automations/model-registry-automations.md @@ -44,7 +44,7 @@ To use a secret in your webhook, you must first add that secret to your team's s {{% alert %}} * Only W&B Admins can create, edit, or delete a secret. * Skip this section if the external server you send HTTP POST requests to does not use secrets. -* Secrets are also available if you use [W&B Server](../hosting/intro/) in an Azure, GCP, or AWS deployment. Connect with your W&B account team to discuss how you can use secrets in W&B if you use a different deployment type. +* Secrets are also available if you use [W&B Server](../hosting/) in an Azure, GCP, or AWS deployment. Connect with your W&B account team to discuss how you can use secrets in W&B if you use a different deployment type. 
{{% /alert %}} There are two types of secrets W&B suggests that you create when you use a webhook automation: diff --git a/content/guides/models/automations/project-scoped-automations.md b/content/guides/models/automations/project-scoped-automations.md index b360ea1e5..5e2429ad9 100644 --- a/content/guides/models/automations/project-scoped-automations.md +++ b/content/guides/models/automations/project-scoped-automations.md @@ -14,7 +14,7 @@ Create an automation that triggers when an artifact is changed. Use artifact aut {{% alert %}} Artifact automations are scoped to a project. This means that only events within a project will trigger an artifact automation. -This is in contrast to automations created in the W&B Model Registry. Automations created in the model registry are in scope of the Model Registry. They are triggered when events are performed on model versions linked to the [Model Registry](../model_registry/intro/). For information on how to create an automations for model versions, see the [Automations for Model CI/CD](../model_registry/model-registry-automations/) page in the [Model Registry chapter](../model_registry/intro/). +This is in contrast to automations created in the W&B Model Registry. Automations created in the model registry are in scope of the Model Registry. They are triggered when events are performed on model versions linked to the [Model Registry](../model_registry/). For information on how to create an automations for model versions, see the [Automations for Model CI/CD](../model_registry/model-registry-automations/) page in the [Model Registry chapter](../model_registry/). {{% /alert %}} @@ -42,7 +42,7 @@ To use a secret in your webhook, you must first add that secret to your team's s {{% alert %}} * Only W&B Admins can create, edit, or delete a secret. * Skip this section if the external server you send HTTP POST requests to does not use secrets. -* Secrets are also available if you use [W&B Server](../hosting/intro/) in an Azure, GCP, or AWS deployment. Connect with your W&B account team to discuss how you can use secrets in W&B if you use a different deployment type. +* Secrets are also available if you use [W&B Server](../hosting/) in an Azure, GCP, or AWS deployment. Connect with your W&B account team to discuss how you can use secrets in W&B if you use a different deployment type. {{% /alert %}} diff --git a/content/guides/models/registry/_index.md b/content/guides/models/registry/_index.md index f5fbf8d29..04ce3826a 100644 --- a/content/guides/models/registry/_index.md +++ b/content/guides/models/registry/_index.md @@ -17,9 +17,9 @@ W&B Registry is now in public preview. Visit [this](#enable-wb-registry) section {{% /alert %}} -W&B Registry is a curated central repository of [artifact](../artifacts/intro/) versions within your organization. Users who [have permission](./configure_registry/) within your organization can [download](./download_use_artifact/), share, and collaboratively manage the lifecycle of all artifacts, regardless of the team that user belongs to. +W&B Registry is a curated central repository of [artifact](../artifacts/) versions within your organization. Users who [have permission](./configure_registry/) within your organization can [download](./download_use_artifact/), share, and collaboratively manage the lifecycle of all artifacts, regardless of the team that user belongs to. 
-You can use the Registry to [track artifact versions](./link_version/), audit the history of an artifact's usage and changes, ensure governance and compliance of your artifacts, and [automate downstream processes such as model CI/CD](../automations/intro/). +You can use the Registry to [track artifact versions](./link_version/), audit the history of an artifact's usage and changes, ensure governance and compliance of your artifacts, and [automate downstream processes such as model CI/CD](../automations/). In summary, use W&B Registry to: diff --git a/content/guides/models/registry/download_use_artifact.md b/content/guides/models/registry/download_use_artifact.md index bad5d00fd..d3f357eb8 100644 --- a/content/guides/models/registry/download_use_artifact.md +++ b/content/guides/models/registry/download_use_artifact.md @@ -52,7 +52,7 @@ fetched_artifact = run.use_artifact(artifact_or_name = artifact_name) download_path = fetched_artifact.download() ``` -The `.use_artifact()` method both creates a [run](../runs/intro/) and marks the artifact you download as the input to that run. +The `.use_artifact()` method both creates a [run](../runs/) and marks the artifact you download as the input to that run. Marking an artifact as the input to a run enables W&B to track the lineage of that artifact. If you do not want to create a run, you can use the `wandb.Api()` object to access the artifact: diff --git a/content/guides/models/registry/link_version.md b/content/guides/models/registry/link_version.md index 989d89a17..e39e3d778 100644 --- a/content/guides/models/registry/link_version.md +++ b/content/guides/models/registry/link_version.md @@ -106,7 +106,7 @@ If you want to link an artifact version to the Model registry or the Dataset reg diff --git a/content/guides/models/registry/model_registry/_index.md b/content/guides/models/registry/model_registry/_index.md index 06f8c2511..eb9f3c525 100644 --- a/content/guides/models/registry/model_registry/_index.md +++ b/content/guides/models/registry/model_registry/_index.md @@ -12,7 +12,7 @@ cascade: --- {{% alert %}} -W&B will no longer support W&B Model Registry after 2024. Users are encouraged to instead use [W&B Registry](../registry/intro/) for linking and sharing their model artifacts versions. W&B Registry broadens the capabilities of the legacy W&B Model Registry. For more information about W&B Registry, see the [Registry docs](../registry/intro/). +W&B will no longer support W&B Model Registry after 2024. Users are encouraged to instead use [W&B Registry](../registry/) for linking and sharing their model artifacts versions. W&B Registry broadens the capabilities of the legacy W&B Model Registry. For more information about W&B Registry, see the [Registry docs](../registry/). W&B will migrate existing model artifacts linked to the legacy Model Registry to the new W&B Registry in the Fall or early Winter of 2024. See [Migrating from legacy Model Registry](../registry/model_registry_eol/) for information about the migration process. 
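To round out the download guide above, here is a minimal sketch of the run-free `wandb.Api()` route it mentions; the artifact path is a placeholder:

```python
import wandb

api = wandb.Api()

# Fetch a specific artifact version without creating a run
artifact = api.artifact("my-entity/my-project/my-dataset:v3")  # placeholder path
local_dir = artifact.download()

print("artifact files downloaded to:", local_dir)
```

Note that fetching an artifact this way does not record lineage, since no run is created.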
diff --git a/content/guides/models/registry/model_registry/consume-models.md b/content/guides/models/registry/model_registry/consume-models.md index bcac916d9..977694f04 100644 --- a/content/guides/models/registry/model_registry/consume-models.md +++ b/content/guides/models/registry/model_registry/consume-models.md @@ -62,7 +62,7 @@ downloaded_model_path = run.use_model(name=f"{entity/project/model_artifact_name {{% alert title="Planned deprecation for W&B Model Registry in 2024" %}} The proceeding tabs demonstrate how to consume model artifacts using the soon to be deprecated Model Registry. -Use the W&B Registry to track, organize and consume model artifacts. For more information see the [Registry docs](../registry/intro/). +Use the W&B Registry to track, organize and consume model artifacts. For more information see the [Registry docs](../registry/). {{% /alert %}} {{< tabpane text=true >}} diff --git a/content/guides/models/registry/model_registry/link-model-version.md b/content/guides/models/registry/model_registry/link-model-version.md index 17daf5a6a..4abf96bf6 100644 --- a/content/guides/models/registry/model_registry/link-model-version.md +++ b/content/guides/models/registry/model_registry/link-model-version.md @@ -14,7 +14,7 @@ Link a model version to a registered model with the W&B App or programmatically ## Programmatically link a model -Use the [`link_model`](../../ref/python/run.md#link_model) method to programmatically log model files to a W&B run and link it to the [W&B Model Registry](./intro/). +Use the [`link_model`](../../ref/python/run.md#link_model) method to programmatically log model files to a W&B run and link it to the [W&B Model Registry](./). Ensure to replace other the values enclosed in `<>` with your own: diff --git a/content/guides/models/registry/model_registry/walkthrough.md b/content/guides/models/registry/model_registry/walkthrough.md index dd2e44277..82c1d93b1 100644 --- a/content/guides/models/registry/model_registry/walkthrough.md +++ b/content/guides/models/registry/model_registry/walkthrough.md @@ -68,7 +68,7 @@ def generate_raw_data(train_size=6000): (x_train, y_train), (x_eval, y_eval) = generate_raw_data() ``` -Next, upload the dataset to W&B. To do this, create an [artifact](../artifacts/intro/) object and add the dataset to that artifact. +Next, upload the dataset to W&B. To do this, create an [artifact](../artifacts/) object and add the dataset to that artifact. ```python project = "model-registry-dev" @@ -210,7 +210,7 @@ model.save(path) ## Log and link a model to the Model Registry -Use the [`link_model`](../../ref/python/run.md#link_model) API to log model one ore more files to a W&B run and link it to the [W&B Model Registry](./intro/). +Use the [`link_model`](../../ref/python/run.md#link_model) API to log model one ore more files to a W&B run and link it to the [W&B Model Registry](./). ```python path = "./model.h5" diff --git a/content/guides/models/registry/model_registry_eol.md b/content/guides/models/registry/model_registry_eol.md index 8176d5163..03be1f3bf 100644 --- a/content/guides/models/registry/model_registry_eol.md +++ b/content/guides/models/registry/model_registry_eol.md @@ -7,7 +7,7 @@ title: Migrate from legacy Model Registry weight: 8 --- -W&B will transition assets from the legacy [W&B Model Registry](../model_registry/intro/) to the new [W&B Registry](./intro/). This migration will be fully managed and triggered by W&B, requiring no intervention from users. 
The process is designed to be as seamless as possible, with minimal disruption to existing workflows. +W&B will transition assets from the legacy [W&B Model Registry](../model_registry/) to the new [W&B Registry](./). This migration will be fully managed and triggered by W&B, requiring no intervention from users. The process is designed to be as seamless as possible, with minimal disruption to existing workflows. The transition will take place once the new W&B Registry includes all the functionalities currently available in the Model Registry. W&B will attempt to preserve current workflows, codebases, and references. @@ -83,7 +83,7 @@ Users are encouraged to explore the new features and capabilities available in t Support is available if you are interested in trying the W&B Registry early, or if you are a new user who prefers to start with Registry rather than the legacy W&B Model Registry. Contact support@wandb.com or your Sales MLE to enable this functionality. Note that any early migration will be into a BETA version. The BETA version of W&B Registry might not have all the functionality or features of the legacy Model Registry. -For more details and to learn about the full range of features in the W&B Registry, visit the [W&B Registry Guide](./intro/). +For more details and to learn about the full range of features in the W&B Registry, visit the [W&B Registry Guide](./). ## FAQs diff --git a/content/guides/models/sweeps/useful-resources.md b/content/guides/models/sweeps/useful-resources.md index 840291481..ac88bd8fd 100644 --- a/content/guides/models/sweeps/useful-resources.md +++ b/content/guides/models/sweeps/useful-resources.md @@ -19,7 +19,7 @@ The following W&B Reports demonstrate examples of projects that explore hyperpar * Description: Developing the baseline and exploring submissions to the Drought Watch benchmark. * [Tuning Safety Penalties in Reinforcement Learning](https://wandb.ai/safelife/benchmark-sweeps/reports/Tuning-Safety-Penalties-in-Reinforcement-Learning---VmlldzoyNjQyODM) * Description: We examine agents trained with different side effect penalties on three different tasks: pattern creation, pattern removal, and navigation. * [Meaning and Noise in Hyperparameter Search with W&B](https://wandb.ai/stacey/pytorch_intro/reports/Meaning-and-Noise-in-Hyperparameter-Search--Vmlldzo0Mzk5MQ) [Stacey Svetlichnaya](https://wandb.ai/stacey) * Description: How do we distinguish signal from pareidolia (imaginary patterns)? This article showcases what is possible with W&B and aims to inspire further exploration. * [Who is Them? Text Disambiguation with Transformers](https://wandb.ai/stacey/winograd/reports/Who-is-Them-Text-Disambiguation-with-Transformers--VmlldzoxMDU1NTc) * Description: Using Hugging Face to explore models for natural language understanding diff --git a/content/guides/models/sweeps/visualize-sweep-results.md b/content/guides/models/sweeps/visualize-sweep-results.md index 9ef5f32d2..154d54862 100644 --- a/content/guides/models/sweeps/visualize-sweep-results.md +++ b/content/guides/models/sweeps/visualize-sweep-results.md @@ -27,4 +27,4 @@ The parameter importance plot(right) lists the hyperparameters that were the bes You can alter the dependent and independent values (x and y axis) that are automatically used.
Within each panel there is a pencil icon called **Edit panel**. Choose **Edit panel**. A modal will appear. Within the modal, you can alter the behavior of the graph. -For more information on all default W&B visualization options, see [Panels](../app/features/panels/intro/). See the [Data Visualization docs](../tables/intro/) for information on how to create plots from W&B Runs that are not part of a W&B Sweep. \ No newline at end of file +For more information on all default W&B visualization options, see [Panels](../app/features/panels/). See the [Data Visualization docs](../tables/) for information on how to create plots from W&B Runs that are not part of a W&B Sweep. \ No newline at end of file diff --git a/content/guides/models/sweeps/walkthrough.md b/content/guides/models/sweeps/walkthrough.md index 3db58b511..fcd6b0e39 100644 --- a/content/guides/models/sweeps/walkthrough.md +++ b/content/guides/models/sweeps/walkthrough.md @@ -118,7 +118,7 @@ wandb.agent(sweep_id, function=main, count=10) ## Visualize results (optional) -Open your project to see your live results in the W&B App dashboard. With just a few clicks, construct rich, interactive charts like [parallel coordinates plots](../app/features/panels/parallel-coordinates/),[ parameter importance analyzes](../app/features/panels/parameter-importance/), and [more](../app/features/panels/intro/). +Open your project to see your live results in the W&B App dashboard. With just a few clicks, construct rich, interactive charts like [parallel coordinates plots](../app/features/panels/parallel-coordinates/), [parameter importance analyses](../app/features/panels/parameter-importance/), and [more](../app/features/panels/). {{< img src="/images/sweeps/quickstart_dashboard_example.png" alt="Sweeps Dashboard example" >}} diff --git a/content/guides/models/track/_index.md b/content/guides/models/track/_index.md index 4ec4dc975..67997fa8d 100644 --- a/content/guides/models/track/_index.md +++ b/content/guides/models/track/_index.md @@ -10,22 +10,22 @@ weight: 1 cascade: - url: guides/track/:filename --- -{{< cta-button productLink="https://wandb.ai/stacey/deep-drive/workspace?workspace=user-lavanyashukla" colabLink="https://colab.research.google.com/github/wandb/examples/blob/master/colabs/intro/Intro_to_Weights_%26_Biases.ipynb" >}} +{{< cta-button productLink="https://wandb.ai/stacey/deep-drive/workspace?workspace=user-lavanyashukla" colabLink="https://colab.research.google.com/github/wandb/examples/blob/master/colabs/Intro_to_Weights_%26_Biases.ipynb" >}} Track machine learning experiments with a few lines of code. You can then review the results in an [interactive dashboard](../track/workspaces/) or export your data to Python for programmatic access using our [Public API](../../ref/python/public-api/README/). -Utilize W&B Integrations if you use popular frameworks such as [PyTorch](../integrations/pytorch/), [Keras](../integrations/keras/), or [Scikit](../integrations/scikit/). See our [Integration guides](../integrations/intro/) for a for a full list of integrations and information on how to add W&B to your code. +Utilize W&B Integrations if you use popular frameworks such as [PyTorch](../integrations/pytorch/), [Keras](../integrations/keras/), or [Scikit](../integrations/scikit/). See our [Integration guides](../integrations/) for a full list of integrations and information on how to add W&B to your code.
{{< img src="/images/experiments/experiments_landing_page.png" alt="" >}} -The image above shows an example dashboard where you can view and compare metrics across multiple [runs](../runs/intro/). +The image above shows an example dashboard where you can view and compare metrics across multiple [runs](../runs/). ## How it works Track a machine learning experiment with a few lines of code: -1. Create a [W&B run](../runs/intro/). +1. Create a [W&B run](../runs/). 2. Store a dictionary of hyperparameters, such as learning rate or model type, into your configuration ([`wandb.config`](./config/)). -3. Log metrics ([`wandb.log()`](./log/intro/)) over time in a training loop, such as accuracy and loss. +3. Log metrics ([`wandb.log()`](./log/)) over time in a training loop, such as accuracy and loss. 4. Save outputs of a run, like the model weights or a table of predictions. The proceeding pseudocode demonstrates a common W&B Experiment tracking workflow: diff --git a/content/guides/models/track/environment-variables.md b/content/guides/models/track/environment-variables.md index 86d5ad36e..d1a92c9af 100644 --- a/content/guides/models/track/environment-variables.md +++ b/content/guides/models/track/environment-variables.md @@ -37,7 +37,7 @@ Use these optional environment variables to do things like set up authentication | --------------------------- | ---------- | | **WANDB_ANONYMOUS** | Set this to `allow`, `never`, or `must` to let users create anonymous runs with secret urls. | | **WANDB_API_KEY** | Sets the authentication key associated with your account. You can find your key on [your settings page](https://app.wandb.ai/settings). This must be set if `wandb login` hasn't been run on the remote machine. | -| **WANDB_BASE_URL** | If you're using [wandb/local](../hosting/intro/) you should set this environment variable to `http://YOUR_IP:YOUR_PORT` | +| **WANDB_BASE_URL** | If you're using [wandb/local](../hosting/) you should set this environment variable to `http://YOUR_IP:YOUR_PORT` | | **WANDB_CACHE_DIR** | This defaults to \~/.cache/wandb, you can override this location with this environment variable | | **WANDB_CONFIG_DIR** | This defaults to \~/.config/wandb, you can override this location with this environment variable | | **WANDB_CONFIG_PATHS** | Comma separated list of yaml files to load into wandb.config. See [config](./config.md#file-based-configs). | diff --git a/content/guides/models/track/jupyter.md b/content/guides/models/track/jupyter.md index 17e45e7e8..5fa3158b8 100644 --- a/content/guides/models/track/jupyter.md +++ b/content/guides/models/track/jupyter.md @@ -14,7 +14,7 @@ Use W&B with Jupyter to get interactive visualizations without leaving your note ## Use cases for W&B with Jupyter notebooks 1. **Iterative experimentation**: Run and re-run experiments, tweaking parameters, and have all the runs you do saved automatically to W&B without having to take manual notes along the way. -2. **Code saving**: When reproducing a model, it's hard to know which cells in a notebook ran, and in which order. Turn on code saving on your [settings page](../app/settings-page/intro/) to save a record of cell execution for each experiment. +2. **Code saving**: When reproducing a model, it's hard to know which cells in a notebook ran, and in which order. Turn on code saving on your [settings page](../app/settings-page/) to save a record of cell execution for each experiment. 3. 
**Custom analysis**: Once runs are logged to W&B, it's easy to get a dataframe from the API and do custom analysis, then log those results to W&B to save and share in reports. ## Getting started in a notebook @@ -80,7 +80,7 @@ wandb.run ``` {{% alert %}} -Want to know more about what you can do with W&B? Check out our [guide to logging data and media](log/intro/), learn [how to integrate us with your favorite ML toolkits](../integrations/intro/), or just dive straight into the [reference docs](../../ref/python/README/) or our [repo of examples](https://github.com/wandb/examples). +Want to know more about what you can do with W&B? Check out our [guide to logging data and media](log/), learn [how to integrate us with your favorite ML toolkits](../integrations/), or just dive straight into the [reference docs](../../ref/python/README/) or our [repo of examples](https://github.com/wandb/examples). {{% /alert %}} ## Additional Jupyter features in W&B diff --git a/content/guides/models/track/launch.md b/content/guides/models/track/launch.md index 0b0a98632..513f399ff 100644 --- a/content/guides/models/track/launch.md +++ b/content/guides/models/track/launch.md @@ -72,14 +72,14 @@ for epoch in range(wandb.config.epochs): # model performance wandb.log({"accuracy": accuracy, "loss": loss}) ``` -For more information on different data types you can log with W&B, see [Log Data During Experiments](./log/intro/). +For more information on different data types you can log with W&B, see [Log Data During Experiments](./log/). ### Log an artifact to W&B Optionally log a W&B Artifact. Artifacts make it easy to version datasets and models. ```python wandb.log_artifact(model) ``` -For more information about Artifacts, see the [Artifacts Chapter](../artifacts/intro/). For more information about versioning models, see [Model Management](../model_registry/intro/). +For more information about Artifacts, see the [Artifacts Chapter](../artifacts/). For more information about versioning models, see [Model Management](../model_registry/). ### Putting it all together @@ -113,7 +113,7 @@ wandb.save("model.onnx") ``` ## Next steps: Visualize your experiment -Use the W&B Dashboard as a central place to organize and visualize results from your machine learning models. With just a few clicks, construct rich, interactive charts like [parallel coordinates plots](../app/features/panels/parallel-coordinates/),[ parameter importance analyzes](../app/features/panels/parameter-importance/), and [more](../app/features/panels/intro/). +Use the W&B Dashboard as a central place to organize and visualize results from your machine learning models. With just a few clicks, construct rich, interactive charts like [parallel coordinates plots](../app/features/panels/parallel-coordinates/),[ parameter importance analyzes](../app/features/panels/parameter-importance/), and [more](../app/features/panels/). {{< img src="/images/sweeps/quickstart_dashboard_example.png" alt="Quickstart Sweeps Dashboard example" >}} diff --git a/content/guides/models/track/log/_index.md b/content/guides/models/track/log/_index.md index ac433afca..d411fc927 100644 --- a/content/guides/models/track/log/_index.md +++ b/content/guides/models/track/log/_index.md @@ -11,7 +11,7 @@ cascade: - url: guides/track/log/:filename --- -Log a dictionary of metrics, media, or custom objects to a step with the W&B Python SDK. W&B collects the key-value pairs during each step and stores them in one unified dictionary each time you log data with `wandb.log()`. 
Data logged from your script is saved locally to your machine in a directory called `wandb`, then synced to the W&B cloud or your [private server](../../hosting/intro/). +Log a dictionary of metrics, media, or custom objects to a step with the W&B Python SDK. W&B collects the key-value pairs during each step and stores them in one unified dictionary each time you log data with `wandb.log()`. Data logged from your script is saved locally to your machine in a directory called `wandb`, then synced to the W&B cloud or your [private server](../../hosting/). {{% alert %}} Key-value pairs are stored in one unified dictionary only if you pass the same value for each step. W&B writes all of the collected keys and values to memory if you log a different value for `step`. @@ -46,8 +46,8 @@ wandb.log({'accuracy': 0.8}) W&B automatically logs the following information during a W&B Experiment: -* **System metrics**: CPU and GPU utilization, network, etc. These are shown in the System tab on the [run page](../../runs/intro/). For the GPU, these are fetched with [`nvidia-smi`](https://developer.nvidia.com/nvidia-system-management-interface). -* **Command line**: The stdout and stderr are picked up and show in the logs tab on the [run page.](../../runs/intro/) +* **System metrics**: CPU and GPU utilization, network, etc. These are shown in the System tab on the [run page](../../runs/). For the GPU, these are fetched with [`nvidia-smi`](https://developer.nvidia.com/nvidia-system-management-interface). +* **Command line**: The stdout and stderr are picked up and show in the logs tab on the [run page.](../../runs/) Turn on [Code Saving](http://wandb.me/code-save-colab) in your account's [Settings page](https://wandb.ai/settings) to log: @@ -81,4 +81,4 @@ wandb.log({"loss": 0.314, "epoch": 5, 1. **Compare the best accuracy**: To compare the best value of a metric across runs, set the summary value for that metric. By default, summary is set to the last value you logged for each key. This is useful in the table in the UI, where you can sort and filter runs based on their summary metrics, to help compare runs in a table or bar chart based on their _best_ accuracy, instead of final accuracy. For example: `wandb.run.summary["best_accuracy"] = best_accuracy` 2. **Multiple metrics on one chart**: Log multiple metrics in the same call to `wandb.log`, like this: `wandb.log({"acc'": 0.9, "loss": 0.1})` and they will both be available to plot against in the UI 3. **Custom x-axis**: Add a custom x-axis to the same log call to visualize your metrics against a different axis in the W&B dashboard. For example: `wandb.log({'acc': 0.9, 'epoch': 3, 'batch': 117})`. To set the default x-axis for a given metric use [Run.define_metric()](../../../ref/python/run.md#define_metric) -4. **Log rich media and charts**: `wandb.log` supports the logging of a wide variety of data types, from [media like images and videos](./media/) to [tables](./log-tables/) and [charts](../../app/features/custom-charts/intro/). \ No newline at end of file +4. **Log rich media and charts**: `wandb.log` supports the logging of a wide variety of data types, from [media like images and videos](./media/) to [tables](./log-tables/) and [charts](../../app/features/custom-charts/). 
\ No newline at end of file diff --git a/content/guides/models/track/log/log-models.md b/content/guides/models/track/log/log-models.md index 7fd9d2bac..aa7be5ede 100644 --- a/content/guides/models/track/log/log-models.md +++ b/content/guides/models/track/log/log-models.md @@ -14,18 +14,18 @@ The following guide describes how to log models to a W&B run and interact with t {{% alert %}} The following APIs are useful for tracking models as a part of your experiment tracking workflow. Use the APIs listed on this page to log models to a run, and to access metrics, tables, media, and other objects. -W&B suggests that you use [W&B Artifacts](../../artifacts/intro/) if you want to: +W&B suggests that you use [W&B Artifacts](../../artifacts/) if you want to: - Create and keep track of different versions of serialized data besides models, such as datasets, prompts, and more. - Explore [lineage graphs](../../artifacts/explore-and-traverse-an-artifact-graph/) of a model or any other objects tracked in W&B. - Interact with the model artifacts these methods created, such as [updating properties](../../artifacts/update-an-artifact/) (metadata, aliases, and descriptions) -For more information on W&B Artifacts and advanced versioning use cases, see the [Artifacts](../../artifacts/intro/) documentation. +For more information on W&B Artifacts and advanced versioning use cases, see the [Artifacts](../../artifacts/) documentation. {{% /alert %}} ## Log a model to a run Use the [`log_model`](../../../ref/python/run.md#log_model) to log a model artifact that contains content within a directory you specify. The [`log_model`](../../../ref/python/run.md#log_model) method also marks the resulting model artifact as an output of the W&B run. -You can track a model's dependencies and the model's associations if you mark the model as the input or output of a W&B run. View the lineage of the model within the W&B App UI. See the [Explore and traverse artifact graphs](../../artifacts/explore-and-traverse-an-artifact-graph/) page within the [Artifacts](../../artifacts/intro/) chapter for more information. +You can track a model's dependencies and the model's associations if you mark the model as the input or output of a W&B run. View the lineage of the model within the W&B App UI. See the [Explore and traverse artifact graphs](../../artifacts/explore-and-traverse-an-artifact-graph/) page within the [Artifacts](../../artifacts/) chapter for more information. Provide the path where your model files are saved to the `path` parameter. The path can be a local file, directory, or [reference URI](../../artifacts/track-external-files.md#amazon-s3--gcs--azure-blob-storage-references) to an external bucket such as `s3://bucket/path`. @@ -160,17 +160,17 @@ See [`use_model`](../../../ref/python/run.md#use_model) in the API Reference gui The [`link_model`](../../../ref/python/run.md#link_model) method is currently only compatible with the legacy W&B Model Registry, which will soon be deprecated. To learn how to link a model artifact to the new edition of model registry, visit the Registry [docs](../../registry/link_version/). {{% /alert %}} -Use the [`link_model`](../../../ref/python/run.md#link_model) method to log model files to a W&B run and link it to the [W&B Model Registry](../../model_registry/intro/). If no registered model exists, W&B will create a new one for you with the name you provide for the `registered_model_name` parameter. 
+Use the [`link_model`](../../../ref/python/run.md#link_model) method to log model files to a W&B run and link it to the [W&B Model Registry](../../model_registry/). If no registered model exists, W&B will create a new one for you with the name you provide for the `registered_model_name` parameter. {{% alert %}} You can think of linking a model as similar to 'bookmarking' or 'publishing' a model to a centralized team repository of models that other members of your team can view and consume. -Note that when you link a model, that model is not duplicated in the [Model Registry](../../model_registry/intro/). That model is also not moved out of the project and intro the registry. A linked model is a pointer to the original model in your project. +Note that when you link a model, that model is not duplicated in the [Model Registry](../../model_registry/). That model is also not moved out of the project and into the registry. A linked model is a pointer to the original model in your project. -Use the [Model Registry](../../model_registry/intro/) to organize your best models by task, manage model lifecycle, facilitate easy tracking and auditing throughout the ML lifecyle, and [automate](../../model_registry/model-registry-automations/) downstream actions with webhooks or jobs. +Use the [Model Registry](../../model_registry/) to organize your best models by task, manage model lifecycle, facilitate easy tracking and auditing throughout the ML lifecycle, and [automate](../../model_registry/model-registry-automations/) downstream actions with webhooks or jobs. {{% /alert %}} -A *Registered Model* is a collection or folder of linked model versions in the [Model Registry](../../model_registry/intro/). Registered models typically represent candidate models for a single modeling use case or task. +A *Registered Model* is a collection or folder of linked model versions in the [Model Registry](../../model_registry/). Registered models typically represent candidate models for a single modeling use case or task. The following code snippet shows how to link a model with the [`link_model`](../../../ref/python/run.md#link_model) API. Be sure to replace the values enclosed in `<>` with your own: diff --git a/content/guides/models/track/log/log-tables.md b/content/guides/models/track/log/log-tables.md index 14de0d920..403665cb1 100644 --- a/content/guides/models/track/log/log-tables.md +++ b/content/guides/models/track/log/log-tables.md @@ -146,7 +146,7 @@ with wandb.init() as run: my_table = run.use_artifact("run--:").get("") ``` -For more information on Artifacts, see the [Artifacts Chapter](../../artifacts/intro/) in the Developer Guide. +For more information on Artifacts, see the [Artifacts Chapter](../../artifacts/) in the Developer Guide. ### Visualize tables diff --git a/content/guides/models/track/log/media.md b/content/guides/models/track/log/media.md index 4aac6c71c..c1ac05cce 100644 --- a/content/guides/models/track/log/media.md +++ b/content/guides/models/track/log/media.md @@ -306,7 +306,7 @@ wandb.log( -If histograms are in your summary they will appear on the Overview tab of the [Run Page](../../runs/intro/). If they are in your history, we plot a heatmap of bins over time on the Charts tab. +If histograms are in your summary they will appear on the Overview tab of the [Run Page](../../runs/). If they are in your history, we plot a heatmap of bins over time on the Charts tab.
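To make the histogram note above concrete, a small sketch follows; the metric names and random data are illustrative only:

```python
import numpy as np
import wandb

run = wandb.init(project="histogram-demo")  # hypothetical project name

for step in range(10):
    grads = np.random.randn(1000)  # stand-in for real per-step values
    # Histograms logged repeatedly land in history and render as a heatmap of bins over time
    run.log({"gradients": wandb.Histogram(grads)})

# A histogram in the summary appears on the run's Overview tab; precomputed bins also work
run.summary["final_gradients"] = wandb.Histogram(np_histogram=np.histogram(grads, bins=64))
run.finish()
```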
## 3D visualizations @@ -620,7 +620,7 @@ wandb.log({"video": wandb.Video(numpy_array_or_path_to_video, fps=4, format="gif If a numpy array is supplied we assume the dimensions are, in order: time, channels, width, height. By default we create a 4 fps gif image ([`ffmpeg`](https://www.ffmpeg.org) and the [`moviepy`](https://pypi.org/project/moviepy/) python library are required when passing numpy objects). Supported formats are `"gif"`, `"mp4"`, `"webm"`, and `"ogg"`. If you pass a string to `wandb.Video` we assert the file exists and is a supported format before uploading to wandb. Passing a `BytesIO` object will create a temporary file with the specified format as the extension. -On the W&B [Run](../../runs/intro/) and [Project](../../track/project-page/) Pages, you will see your videos in the Media section. +On the W&B [Run](../../runs/) and [Project](../../track/project-page/) Pages, you will see your videos in the Media section. {{% /tab %}} {{% tab header="Text" %}} Use `wandb.Table` to log text in tables to show up in the UI. By default, the column headers are `["Input", "Output", "Expected"]`. To ensure optimal UI performance, the default maximum number of rows is set to 10,000. However, users can explicitly override the maximum with `wandb.Table.MAX_ROWS = {DESIRED_MAX}`. diff --git a/content/guides/models/track/log/working-with-csv.md b/content/guides/models/track/log/working-with-csv.md index 3ebc381c9..be04de642 100644 --- a/content/guides/models/track/log/working-with-csv.md +++ b/content/guides/models/track/log/working-with-csv.md @@ -43,7 +43,7 @@ iris_table_artifact.add(iris_table, "iris_table") # Log the raw csv file within an artifact to preserve our data iris_table_artifact.add_file("iris.csv") ``` -For more information about W&B Artifacts, see the [Artifacts chapter](../../artifacts/intro/). +For more information about W&B Artifacts, see the [Artifacts chapter](../../artifacts/). 4. Lastly, start a new W&B Run to track and log to W&B with `wandb.init`: @@ -107,7 +107,7 @@ In some cases, you might have your experiment details in a CSV file. Common deta * A name for the experiment run * Initial [notes](../../runs/intro.md#add-a-note-to-a-run) * [Tags](../../runs/tags/) to differentiate the experiments -* Configurations needed for your experiment (with the added benefit of being able to utilize our [Sweeps Hyperparameter Tuning](../../sweeps/intro/)). +* Configurations needed for your experiment (with the added benefit of being able to utilize our [Sweeps Hyperparameter Tuning](../../sweeps/)). | Experiment | Model Name | Notes | Tags | Num Layers | Final Train Acc | Final Val Acc | Training Losses | | ------------ | ---------------- | ------------------------------------------------ | ------------- | ---------- | --------------- | ------------- | ------------------------------------- | diff --git a/content/guides/models/track/project-page.md b/content/guides/models/track/project-page.md index b19860abf..0bd85c808 100644 --- a/content/guides/models/track/project-page.md +++ b/content/guides/models/track/project-page.md @@ -173,13 +173,13 @@ See all the snapshots of results in one place, and share findings with your team ## Sweeps tab -Start a new [sweep](../sweeps/intro/) from your project. +Start a new [sweep](../sweeps/) from your project. 
{{< img src="/images/app_ui/sweeps-tab.png" alt="" >}} ## Artifacts tab -View all the [artifacts](../artifacts/intro/) associated with a project, from training datasets and [fine-tuned models](../model_registry/intro/) to [tables of metrics and media](../tables/tables-walkthrough/). +View all the [artifacts](../artifacts/) associated with a project, from training datasets and [fine-tuned models](../model_registry/) to [tables of metrics and media](../tables/tables-walkthrough/). ### Overview panel diff --git a/content/guides/models/track/runs/_index.md b/content/guides/models/track/runs/_index.md index 11a5db336..def1e0662 100644 --- a/content/guides/models/track/runs/_index.md +++ b/content/guides/models/track/runs/_index.md @@ -19,7 +19,7 @@ Common patterns for initiating a run include, but are not limited to: * Training a model * Changing a hyperparameter and conducting a new experiment * Conducting a new machine learning experiment with a different model -* Logging data or a model as a [W&B Artifact](../artifacts/intro/) +* Logging data or a model as a [W&B Artifact](../artifacts/) * [Downloading a W&B Artifact](../artifacts/download-and-use-an-artifact/) @@ -114,7 +114,7 @@ Note that W&B captures the simulated training loop within a single run called `j {{< img src="/images/runs/run_log_example_2.png" alt="" >}} -As another example, during a [sweep](../sweeps/intro/), W&B explores a hyperparameter search space that you specify. W&B implements each new hyperparameter combination that the sweep creates as a unique run. +As another example, during a [sweep](../sweeps/), W&B explores a hyperparameter search space that you specify. W&B implements each new hyperparameter combination that the sweep creates as a unique run. ## Initialize a run @@ -341,7 +341,7 @@ W&B stores the proceeding information below the overview section: * **Artifact Outputs**: Artifact outputs produced by the run. * **Config**: List of config parameters saved with [`wandb.config`](../../guides/track/config/). -* **Summary**: List of summary parameters saved with [`wandb.log()`](../../guides/track/log/intro/). By default, W&B sets this value to the last value logged. +* **Summary**: List of summary parameters saved with [`wandb.log()`](../../guides/track/log/). By default, W&B sets this value to the last value logged. {{< img src="/images/app_ui/wandb_run_overview_page.png" alt="W&B Dashboard run overview tab" >}} @@ -381,7 +381,7 @@ Use the **Files tab** to view files associated with a specific run such as model View an example files tab [here](https://app.wandb.ai/stacey/deep-drive/runs/pr0os44x/files/media/images). ### Artifacts tab -The **Artifacts** tab lists the input and output [artifacts](../artifacts/intro/) for the specified run. +The **Artifacts** tab lists the input and output [artifacts](../artifacts/) for the specified run. {{< img src="/images/app_ui/artifacts_tab.png" alt="" >}} diff --git a/content/guides/models/track/runs/alert.md b/content/guides/models/track/runs/alert.md index 81f71b554..971f78645 100644 --- a/content/guides/models/track/runs/alert.md +++ b/content/guides/models/track/runs/alert.md @@ -21,7 +21,7 @@ And then see W&B Alerts messages in Slack (or your email): {{% alert %}} The following guide only applies to alerts in multi-tenant cloud. -If you're using [W&B Server](../hosting/intro/) in your Private Cloud or on W&B Dedicated Cloud, then please refer to [this documentation](../hosting/monitoring-usage/slack-alerts/) to setup Slack alerts. 
+If you're using [W&B Server](../hosting/) in your Private Cloud or on W&B Dedicated Cloud, then please refer to [this documentation](../hosting/monitoring-usage/slack-alerts/) to setup Slack alerts. {{% /alert %}} diff --git a/content/guides/models/track/runs/resuming.md b/content/guides/models/track/runs/resuming.md index e136f4db4..2577f0dff 100644 --- a/content/guides/models/track/runs/resuming.md +++ b/content/guides/models/track/runs/resuming.md @@ -119,7 +119,7 @@ If you can not share a filesystem, specify the `WANDB_RUN_ID` environment variab ## Resume preemptible Sweeps runs -Automatically requeue interrupted [sweep](../sweeps/intro/) runs. This is particularly useful if you run a sweep agent in a compute environment that is subject to preemption such as a SLURM job in a preemptible queue, an EC2 spot instance, or a Google Cloud preemptible VM. +Automatically requeue interrupted [sweep](../sweeps/) runs. This is particularly useful if you run a sweep agent in a compute environment that is subject to preemption such as a SLURM job in a preemptible queue, an EC2 spot instance, or a Google Cloud preemptible VM. Use the [`mark_preempting`](../../ref/python/run.md#mark_preempting) function to enable W&B to automatically requeue interrupted sweep runs. For example, the following code snippet diff --git a/content/guides/models/track/workspaces.md b/content/guides/models/track/workspaces.md index c0838513c..850dffc41 100644 --- a/content/guides/models/track/workspaces.md +++ b/content/guides/models/track/workspaces.md @@ -11,7 +11,7 @@ weight: 4 W&B workspace is your personal sandbox to customize charts and explore model results. A W&B workspace consists of *Tables* and *Panel sections*: * **Tables**: All runs logged to your project are listed in the project's table. Turn on and off runs, change colors, and expand the table to see notes, config, and summary metrics for each run. -* **Panel sections**: A section that contains one or more [panels](../app/features/panels/intro/). Create new panels, organize them, and export to reports to save snapshots of your workspace. +* **Panel sections**: A section that contains one or more [panels](../app/features/panels/). Create new panels, organize them, and export to reports to save snapshots of your workspace. {{< img src="/images/app_ui/workspace_table_and_panels.png" alt="" >}} diff --git a/content/guides/quickstart.md b/content/guides/quickstart.md index 6b7c85705..ff48a0570 100644 --- a/content/guides/quickstart.md +++ b/content/guides/quickstart.md @@ -48,7 +48,7 @@ Next, log in to W&B: wandb login ``` -Or if you are using [W&B Server](./guides/hosting/intro/) (including **Dedicated Cloud** or **Self-managed**): +Or if you are using [W&B Server](./guides/hosting/) (including **Dedicated Cloud** or **Self-managed**): ```bash wandb login --relogin --host=http://your-shared-local-host.com @@ -87,7 +87,7 @@ run = wandb.init( ``` -A [run](./guides/runs/intro/) is the basic building block of W&B. You will use them often to [track metrics](./guides/track/intro/), [create logs](./guides/artifacts/intro/), and more. +A [run](./guides/runs/) is the basic building block of W&B. You will use them often to [track metrics](./guides/track/), [create logs](./guides/artifacts/), and more. @@ -146,11 +146,11 @@ The image above (click to expand) shows the loss and accuracy that was tracked f Explore the rest of the W&B ecosystem. -1. 
Check out [W&B Integrations](guides/integrations/intro/) to learn how to integrate W&B with your ML framework such as PyTorch, ML library such as Hugging Face, or ML service such as SageMaker. -2. Organize runs, embed and automate visualizations, describe your findings, and share updates with collaborators with [W&B Reports](./guides/reports/intro/). -2. Create [W&B Artifacts](./guides/artifacts/intro/) to track datasets, models, dependencies, and results through each step of your machine learning pipeline. -3. Automate hyperparameter search and explore the space of possible models with [W&B Sweeps](./guides/sweeps/intro/). -4. Understand your datasets, visualize model predictions, and share insights in a [central dashboard](./guides/tables/intro/). +1. Check out [W&B Integrations](guides/integrations/) to learn how to integrate W&B with your ML framework such as PyTorch, ML library such as Hugging Face, or ML service such as SageMaker. +2. Organize runs, embed and automate visualizations, describe your findings, and share updates with collaborators with [W&B Reports](./guides/reports/). +2. Create [W&B Artifacts](./guides/artifacts/) to track datasets, models, dependencies, and results through each step of your machine learning pipeline. +3. Automate hyperparameter search and explore the space of possible models with [W&B Sweeps](./guides/sweeps/). +4. Understand your datasets, visualize model predictions, and share insights in a [central dashboard](./guides/tables/). 5. Navigate to W&B AI Academy and learn about LLMs, MLOps and W&B Models from hands-on [courses](https://wandb.me/courses). {{< img src="/images/quickstart/wandb_demo_experiments.gif" alt="" >}} \ No newline at end of file diff --git a/content/launch/integration-guides/dagster.md b/content/launch/integration-guides/dagster.md index d86922093..349e83948 100644 --- a/content/launch/integration-guides/dagster.md +++ b/content/launch/integration-guides/dagster.md @@ -9,9 +9,9 @@ url: guides/integrations/dagster --- Use Dagster and Weights and Biases (W&B) to orchestrate your MLOps pipelines and maintain ML assets. The integration with W&B makes it easy within Dagster to: -* Use and create [W&B Artifacts](../artifacts/intro/). -* Use and create Registered Models in [W&B Model Registry](../model_registry/intro/). -* Run training jobs on dedicated compute using [W&B Launch](../launch/intro/). +* Use and create [W&B Artifacts](../artifacts/). +* Use and create Registered Models in [W&B Model Registry](../model_registry/). +* Run training jobs on dedicated compute using [W&B Launch](../launch/). * Use the [wandb](../../ref/python/README/) client in ops and assets. The W&B Dagster integration provides a W&B-specific Dagster resource and IO Manager: @@ -25,7 +25,7 @@ The following guide demonstrates how to satisfy prerequisites to use W&B in Dags You will need the following resources to use Dagster within Weights and Biases: 1. **W&B API Key**. 2. **W&B entity (user or team)**: An entity is a username or team name where you send W&B Runs and Artifacts. Make sure to create your account or team entity in the W&B App UI before you log runs. If you do not specify an entity, the run will be sent to your default entity, which is usually your username. Change your default entity in your settings under **Project Defaults**. -3. **W&B project**: The name of the project where [W&B Runs](../runs/intro/) are stored. +3. **W&B project**: The name of the project where [W&B Runs](../runs/) are stored.
Find your W&B entity by checking the profile page for that user or team in the W&B App. You can use a pre-existing W&B project or create a new one. New projects can be created on the W&B App homepage or on user/team profile page. If a project does not exist it will be automatically created when you first use it. The proceeding instructions demonstrate how to get an API key: @@ -154,7 +154,7 @@ def create_dataset(): You can annotate your `@op`, `@asset` and `@multi_asset` with a metadata configuration in order to write Artifacts. Similarly you can also consume W&B Artifacts even if they were created outside Dagster. ## Write W&B Artifacts -Before continuing, we recommend you to have a good understanding of how to use W&B Artifacts. Consider reading the [Guide on Artifacts](../artifacts/intro/). +Before continuing, we recommend you to have a good understanding of how to use W&B Artifacts. Consider reading the [Guide on Artifacts](../artifacts/). Return an object from a Python function to write a W&B Artifact. The following objects are supported by W&B: * Python objects (int, dict, list…) @@ -849,7 +849,7 @@ The integration provides an importable `@op` called `run_launch_agent`. It start Agents are processes that poll launch queues and execute the jobs (or dispatch them to external services to be executed) in order. -Refer to the [reference documentation](../launch/intro/) for configuration +Refer to the [reference documentation](../launch/) for configuration You can also view useful descriptions for all properties in Launchpad. @@ -897,7 +897,7 @@ The integration provides an importable `@op` called `run_launch_job`. It execute A Launch job is assigned to a queue in order to be executed. You can create a queue or use the default one. Make sure you have an active agent listening to that queue. You can run an agent inside your Dagster instance but can also consider using a deployable agent in Kubernetes. -Refer to the [reference documentation](../launch/intro/) for configuration. +Refer to the [reference documentation](../launch/) for configuration. You can also view useful descriptions for all properties in Launchpad. diff --git a/content/launch/launch-terminology.md b/content/launch/launch-terminology.md index 3842d3152..21d0718a9 100644 --- a/content/launch/launch-terminology.md +++ b/content/launch/launch-terminology.md @@ -8,10 +8,10 @@ url: guides/launch/launch-terminology weight: 2 --- -With W&B Launch, you enqueue [jobs](#launch-job) onto [queues](#launch-queue) to create runs. Jobs are python scripts instrumented with W&B. Queues hold a list of jobs to execute on a [target resource](#target-resources). [Agents](#launch-agent) pull jobs from queues and execute the jobs on target resources. W&B tracks launch jobs similarly to how W&B tracks [runs](../runs/intro/). +With W&B Launch, you enqueue [jobs](#launch-job) onto [queues](#launch-queue) to create runs. Jobs are python scripts instrumented with W&B. Queues hold a list of jobs to execute on a [target resource](#target-resources). [Agents](#launch-agent) pull jobs from queues and execute the jobs on target resources. W&B tracks launch jobs similarly to how W&B tracks [runs](../runs/). ### Launch job -A launch job is a specific type of [W&B Artifact](../artifacts/intro/) that represents a task to complete. For example, common launch jobs include training a model or triggering a model evaluation. Job definitions include: +A launch job is a specific type of [W&B Artifact](../artifacts/) that represents a task to complete. 
For example, common launch jobs include training a model or triggering a model evaluation. Job definitions include: - Python code and other file assets, including at least one runnable entrypoint. - Information about the input (config parameter) and output (metrics logged). diff --git a/content/launch/sweeps-on-launch.md b/content/launch/sweeps-on-launch.md index e17460cca..a69f18858 100644 --- a/content/launch/sweeps-on-launch.md +++ b/content/launch/sweeps-on-launch.md @@ -9,11 +9,11 @@ url: guides/launch/sweeps-on-launch --- {{< cta-button colabLink="https://colab.research.google.com/drive/1WxLKaJlltThgZyhc7dcZhDQ6cjVQDfil#scrollTo=AFEzIxA6foC7" >}} -Create a hyperparameter tuning job ([sweeps](../sweeps/intro/)with W&B Launch. With sweeps on launch, a sweep scheduler is pushed to a Launch Queue with the specified hyperparameters to sweep over. The sweep scheduler starts as it is picked up by the agent, launching sweep runs onto the same queue with chosen hyperparameters. This continues until the sweep finishes or is stopped. +Create a hyperparameter tuning job ([sweeps](../sweeps/)) with W&B Launch. With sweeps on launch, a sweep scheduler is pushed to a Launch Queue with the specified hyperparameters to sweep over. The sweep scheduler starts as it is picked up by the agent, launching sweep runs onto the same queue with chosen hyperparameters. This continues until the sweep finishes or is stopped. You can use the default W&B Sweep scheduling engine or implement your own custom scheduler: -1. Standard sweep scheduler: Use the default W&B Sweep scheduling engine that controls [W&B Sweeps](../sweeps/intro/)The familiar `bayes`, `grid`, and `random` methods are available. +1. Standard sweep scheduler: Use the default W&B Sweep scheduling engine that controls [W&B Sweeps](../sweeps/). The familiar `bayes`, `grid`, and `random` methods are available. 2. Custom sweep scheduler: Configure the sweep scheduler to run as a job. This option enables full customization. An example of how to extend the standard sweep scheduler to include more logging can be found in the section below. {{% alert %}} @@ -108,7 +108,7 @@ For information on how to create a sweep configuration, see the [Define sweep co wandb launch-sweep --queue --entity --project ``` -For more information on W&B Sweeps, see the [Tune Hyperparameters](../sweeps/intro/)hapter. +For more information on W&B Sweeps, see the [Tune Hyperparameters](../sweeps/) chapter. {{% /tab %}} {{< /tabpane >}} diff --git a/content/launch/walkthrough.md b/content/launch/walkthrough.md index e8864899f..75873fb71 100644 --- a/content/launch/walkthrough.md +++ b/content/launch/walkthrough.md @@ -12,7 +12,7 @@ weight: 1 {{< cta-button colabLink="https://colab.research.google.com/drive/1wX0OSVxZJDHRsZaOaOEDx-lLUrO1hHgP" >}} -Easily scale training [runs](../runs/intro/) from your desktop to a compute resource like Amazon SageMaker, Kubernetes and more with W&B Launch. Once W&B Launch is configured, you can quickly run training scripts, model evaluation suites, prepare models for production inference, and more with a few clicks and commands. +Easily scale training [runs](../runs/) from your desktop to a compute resource like Amazon SageMaker, Kubernetes and more with W&B Launch. Once W&B Launch is configured, you can quickly run training scripts, model evaluation suites, prepare models for production inference, and more with a few clicks and commands.
## How it works diff --git a/content/ref/_index.md b/content/ref/_index.md index faf926d90..57452ad48 100644 --- a/content/ref/_index.md +++ b/content/ref/_index.md @@ -27,4 +27,4 @@ These docs are automatically generated from the [`wandb` library](https://github [Our examples repo](https://github.com/wandb/examples) has scripts and colabs to try W&B features, and see integrations with various libraries. -[Our developer guide](../guides/intro/) has guides, tutorials, and FAQs for the various W&B products. +[Our developer guide](../guides/) has guides, tutorials, and FAQs for the various W&B products. diff --git a/content/support/best_log_models_runs_sweep.md b/content/support/best_log_models_runs_sweep.md index f9898f120..4d734f406 100644 --- a/content/support/best_log_models_runs_sweep.md +++ b/content/support/best_log_models_runs_sweep.md @@ -6,7 +6,7 @@ tags: - artifacts - sweeps --- -One effective approach for logging models in a [sweep](../guides/sweeps/intro/) involves creating a model artifact for the sweep. Each version represents a different run from the sweep. Implement it as follows: +One effective approach for logging models in a [sweep](../guides/sweeps/) involves creating a model artifact for the sweep. Each version represents a different run from the sweep. Implement it as follows: ```python wandb.Artifact(name="sweep_name", type="model") diff --git a/content/support/deal_network_issues.md b/content/support/deal_network_issues.md index e0e88cb8f..25e2319ba 100644 --- a/content/support/deal_network_issues.md +++ b/content/support/deal_network_issues.md @@ -9,7 +9,7 @@ If you encounter SSL or network errors, such as `wandb: Network error (Connectio 1. Upgrade the SSL certificate. On an Ubuntu server, run `update-ca-certificates`. A valid SSL certificate is essential for syncing training logs to mitigate security risks. 2. If the network connection is unstable, operate in offline mode by setting the [optional environment variable](../guides/track/environment-variables.md#optional-environment-variables) `WANDB_MODE` to `offline`, and sync files later from a device with Internet access. -3. Consider using [W&B Private Hosting](../guides/hosting/intro/), which runs locally and avoids syncing to cloud servers. +3. Consider using [W&B Private Hosting](../guides/hosting/), which runs locally and avoids syncing to cloud servers. For the `SSL CERTIFICATE_VERIFY_FAILED` error, this issue might stem from a company firewall. Configure local CAs and execute: diff --git a/content/support/log_additional_metrics_run_completes.md b/content/support/log_additional_metrics_run_completes.md index 5ab069ebb..b0abe10a2 100644 --- a/content/support/log_additional_metrics_run_completes.md +++ b/content/support/log_additional_metrics_run_completes.md @@ -10,5 +10,5 @@ There are several ways to manage experiments. For complex workflows, use multiple runs and set the group parameters in [`wandb.init`](../guides/track/launch/) to a unique value for all processes within a single experiment. The [**Runs** tab](../guides/track/project-page.md#runs-tab) will group the table by group ID, ensuring that visualizations function properly. This approach enables concurrent experiments and training runs while logging results in one location. -For simpler workflows, call `wandb.init` with `resume=True` and `id=UNIQUE_ID`, then call `wandb.init` again with the same `id=UNIQUE_ID`. Log normally with [`wandb.log`](../guides/track/log/intro/) or `wandb.summary`, and the run values will update accordingly. 
+For simpler workflows, call `wandb.init` with `resume=True` and `id=UNIQUE_ID`, then call `wandb.init` again with the same `id=UNIQUE_ID`. Log normally with [`wandb.log`](../guides/track/log/) or `wandb.summary`, and the run values will update accordingly. diff --git a/content/tutorials/experiments.md b/content/tutorials/experiments.md index f21418b81..5e329a65a 100644 --- a/content/tutorials/experiments.md +++ b/content/tutorials/experiments.md @@ -7,7 +7,7 @@ weight: 1 --- {{< cta-button - colabLink="https://colab.research.google.com/github/wandb/examples/blob/master/colabs/intro/Intro_to_Weights_&_Biases.ipynb" + colabLink="https://colab.research.google.com/github/wandb/examples/blob/master/colabs/Intro_to_Weights_&_Biases.ipynb" >}} Use [W&B](https://wandb.ai/site) for machine learning experiment tracking, model checkpointing, collaboration with your team and more. See the full W&B Documentation [here](/). From 65323fccc50eb6f480a77d5284bba9bfe556fc4c Mon Sep 17 00:00:00 2001 From: johndmulhausen Date: Tue, 14 Jan 2025 13:30:29 -0500 Subject: [PATCH 4/5] Restore "intro" to colabs links --- content/guides/core/reports/_index.md | 8 ++++---- .../core/reports/clone-and-export-reports.md | 2 +- content/guides/core/reports/create-a-report.md | 2 +- content/guides/integrations/keras.md | 4 ++-- content/guides/integrations/pytorch.md | 14 +++++++------- 5 files changed, 15 insertions(+), 15 deletions(-) diff --git a/content/guides/core/reports/_index.md b/content/guides/core/reports/_index.md index fab5f591c..b1fe9d798 100644 --- a/content/guides/core/reports/_index.md +++ b/content/guides/core/reports/_index.md @@ -12,7 +12,7 @@ cascade: --- -{{< cta-button productLink="https://wandb.ai/stacey/deep-drive/reports/The-View-from-the-Driver-s-Seat--Vmlldzo1MTg5NQ?utm_source=fully_connected&utm_medium=blog&utm_campaign=view+from+the+drivers+seat" colabLink="https://colab.research.google.com/github/wandb/examples/blob/master/colabs/Report_API_Quickstart.ipynb" >}} +{{< cta-button productLink="https://wandb.ai/stacey/deep-drive/reports/The-View-from-the-Driver-s-Seat--Vmlldzo1MTg5NQ?utm_source=fully_connected&utm_medium=blog&utm_campaign=view+from+the+drivers+seat" colabLink="https://colab.research.google.com/github/wandb/examples/blob/master/colabs/intro/Report_API_Quickstart.ipynb" >}} Use W&B Reports to: - Organize Runs. @@ -42,12 +42,12 @@ Create a collaborative report with a few clicks. 6. Click **Publish to project**. 7. Click the **Share** button to share your report with collaborators. -See the [Create a report](./create-a-report/) page for more information on how to create reports interactively an programmatically with the W&B Python SDK. +See the [Create a report](./create-a-report.md) page for more information on how to create reports interactively an programmatically with the W&B Python SDK. ## How to get started Depending on your use case, explore the following resources to get started with W&B Reports: * Check out our [video demonstration](https://www.youtube.com/watch?v=2xeJIv_K_eI) to get an overview of W&B Reports. -* Explore the [Reports gallery](./reports-gallery/) for examples of live reports. -* Try the [Programmatic Workspaces](../../tutorials/workspaces/) tutorial to learn how to create and customize your workspace. +* Explore the [Reports gallery](./reports-gallery.md) for examples of live reports. +* Try the [Programmatic Workspaces](../../tutorials/workspaces.md) tutorial to learn how to create and customize your workspace. 
* Read curated Reports in [W&B Fully Connected](http://wandb.me/fc). \ No newline at end of file diff --git a/content/guides/core/reports/clone-and-export-reports.md b/content/guides/core/reports/clone-and-export-reports.md index c377499e3..6af1c9502 100644 --- a/content/guides/core/reports/clone-and-export-reports.md +++ b/content/guides/core/reports/clone-and-export-reports.md @@ -26,7 +26,7 @@ Clone a report to reuse a project's template and format. Cloned projects are vis {{% tab header="Python SDK" value="python"%}} Load a Report from a URL to use it as a template. diff --git a/content/guides/core/reports/create-a-report.md b/content/guides/core/reports/create-a-report.md index 3ad6bbed1..21bcd69cd 100644 --- a/content/guides/core/reports/create-a-report.md +++ b/content/guides/core/reports/create-a-report.md @@ -11,7 +11,7 @@ weight: 10 Create a report interactively with the W&B App UI or programmatically with the W&B Python SDK. {{% alert %}} -See this [Google Colab for an example](https://colab.research.google.com/github/wandb/examples/blob/master/colabs/Report_API_Quickstart.ipynb). +See this [Google Colab for an example](https://colab.research.google.com/github/wandb/examples/blob/master/colabs/intro/Report_API_Quickstart.ipynb). {{% /alert %}} {{< tabpane text=true >}} diff --git a/content/guides/integrations/keras.md b/content/guides/integrations/keras.md index c0dabe031..33039ede7 100644 --- a/content/guides/integrations/keras.md +++ b/content/guides/integrations/keras.md @@ -6,7 +6,7 @@ menu: title: Keras weight: 160 --- -{{< cta-button colabLink="https://colab.research.google.com/github/wandb/examples/blob/master/colabs/Intro_to_Weights_%26_Biases_keras.ipynb" >}} +{{< cta-button colabLink="https://colab.research.google.com/github/wandb/examples/blob/master/colabs/intro/Intro_to_Weights_%26_Biases_keras.ipynb" >}} ## Keras callbacks @@ -258,7 +258,7 @@ See our [example repo](https://github.com/wandb/examples) for scripts, including The `WandbCallback` class supports a wide variety of logging configuration options: specifying a metric to monitor, tracking of weights and gradients, logging of predictions on training_data and validation_data, and more. -Check out [the reference documentation for the `keras.WandbCallback`](../../ref/python/integrations/keras/wandbcallback/) for full details. +Check out [the reference documentation for the `keras.WandbCallback`](../../ref/python/integrations/keras/wandbcallback.md) for full details. The `WandbCallback` diff --git a/content/guides/integrations/pytorch.md b/content/guides/integrations/pytorch.md index 7955ac41c..5dcc51fb0 100644 --- a/content/guides/integrations/pytorch.md +++ b/content/guides/integrations/pytorch.md @@ -6,7 +6,7 @@ menu: title: PyTorch weight: 300 --- -{{< cta-button colabLink="https://colab.research.google.com/github/wandb/examples/blob/master/colabs/Intro_to_Weights_%26_Biases.ipynb" >}} +{{< cta-button colabLink="https://colab.research.google.com/github/wandb/examples/blob/master/colabs/intro/Intro_to_Weights_%26_Biases.ipynb" >}} PyTorch is one of the most popular frameworks for deep learning in Python, especially among researchers. W&B provides first class support for PyTorch, from logging gradients to profiling your code on the CPU and GPU. @@ -18,7 +18,7 @@ You can also see our [example repo](https://github.com/wandb/examples) for scrip ## Log gradients with `wandb.watch` -To automatically log gradients, you can call [`wandb.watch`](../../ref/python/watch/) and pass in your PyTorch model. 
+To automatically log gradients, you can call [`wandb.watch`](../../ref/python/watch.md) and pass in your PyTorch model. ```python import wandb @@ -40,7 +40,7 @@ for batch_idx, (data, target) in enumerate(train_loader): wandb.log({"loss": loss}) ``` -If you need to track multiple models in the same script, you can call `wandb.watch` on each model separately. Reference documentation for this function is [here](../../ref/python/watch/). +If you need to track multiple models in the same script, you can call `wandb.watch` on each model separately. Reference documentation for this function is [here](../../ref/python/watch.md). {{% alert color="secondary" %}} Gradients, metrics, and the graph won't be logged until `wandb.log` is called after a forward _and_ backward pass. @@ -48,14 +48,14 @@ Gradients, metrics, and the graph won't be logged until `wandb.log` is called af ## Log images and media -You can pass PyTorch `Tensors` with image data into [`wandb.Image`](../../ref/python/data-types/image/) and utilities from [`torchvision`](https://pytorch.org/vision/stable/index.html) will be used to convert them to images automatically: +You can pass PyTorch `Tensors` with image data into [`wandb.Image`](../../ref/python/data-types/image.md) and utilities from [`torchvision`](https://pytorch.org/vision/stable/index.html) will be used to convert them to images automatically: ```python images_t = ... # generate or load images as PyTorch Tensors wandb.log({"examples": [wandb.Image(im) for im in images_t]}) ``` -For more on logging rich media to W&B in PyTorch and other frameworks, check out our [media logging guide](../track/log/media/). +For more on logging rich media to W&B in PyTorch and other frameworks, check out our [media logging guide](../track/log/media.md). If you also want to include information alongside media, like your model's predictions or derived metrics, use a `wandb.Table`. @@ -72,13 +72,13 @@ wandb.log({"mnist_predictions": my_table}) {{< img src="/images/integrations/pytorch_example_table.png" alt="The code above generates a table like this one. This model's looking good!" >}} -For more on logging and visualizing datasets and models, check out our [guide to W&B Tables](../tables/). +For more on logging and visualizing datasets and models, check out our [guide to W&B Tables](../tables/intro.md). ## Profile PyTorch code {{< img src="/images/integrations/pytorch_example_dashboard.png" alt="View detailed traces of PyTorch code execution inside W&B dashboards." >}} -W&B integrates directly with [PyTorch Kineto](https://github.com/pytorch/kineto)'s [Tensorboard plugin](https://github.com/pytorch/kineto/blob/master/tb_plugin/README/) to provide tools for profiling PyTorch code, inspecting the details of CPU and GPU communication, and identifying bottlenecks and optimizations. +W&B integrates directly with [PyTorch Kineto](https://github.com/pytorch/kineto)'s [Tensorboard plugin](https://github.com/pytorch/kineto/blob/master/tb_plugin/README.md) to provide tools for profiling PyTorch code, inspecting the details of CPU and GPU communication, and identifying bottlenecks and optimizations. 
```python profile_dir = "path/to/run/tbprofile/" From b5ae2c489fe97b2ceb5e340a835c06611344facb Mon Sep 17 00:00:00 2001 From: johndmulhausen Date: Tue, 14 Jan 2025 13:30:32 -0500 Subject: [PATCH 5/5] Restore "intro" to colabs links - part 2 --- content/guides/core/reports/_index.md | 6 +++--- content/guides/integrations/keras.md | 2 +- content/guides/integrations/pytorch.md | 10 +++++----- 3 files changed, 9 insertions(+), 9 deletions(-) diff --git a/content/guides/core/reports/_index.md b/content/guides/core/reports/_index.md index b1fe9d798..e90ed2746 100644 --- a/content/guides/core/reports/_index.md +++ b/content/guides/core/reports/_index.md @@ -42,12 +42,12 @@ Create a collaborative report with a few clicks. 6. Click **Publish to project**. 7. Click the **Share** button to share your report with collaborators. -See the [Create a report](./create-a-report.md) page for more information on how to create reports interactively an programmatically with the W&B Python SDK. +See the [Create a report](./create-a-report/) page for more information on how to create reports interactively and programmatically with the W&B Python SDK. ## How to get started Depending on your use case, explore the following resources to get started with W&B Reports: * Check out our [video demonstration](https://www.youtube.com/watch?v=2xeJIv_K_eI) to get an overview of W&B Reports. -* Explore the [Reports gallery](./reports-gallery.md) for examples of live reports. -* Try the [Programmatic Workspaces](../../tutorials/workspaces.md) tutorial to learn how to create and customize your workspace. +* Explore the [Reports gallery](./reports-gallery/) for examples of live reports. +* Try the [Programmatic Workspaces](../../tutorials/workspaces/) tutorial to learn how to create and customize your workspace. * Read curated Reports in [W&B Fully Connected](http://wandb.me/fc). \ No newline at end of file diff --git a/content/guides/integrations/keras.md b/content/guides/integrations/keras.md index 33039ede7..824ea016c 100644 --- a/content/guides/integrations/keras.md +++ b/content/guides/integrations/keras.md @@ -258,7 +258,7 @@ See our [example repo](https://github.com/wandb/examples) for scripts, including The `WandbCallback` class supports a wide variety of logging configuration options: specifying a metric to monitor, tracking of weights and gradients, logging of predictions on training_data and validation_data, and more. -Check out [the reference documentation for the `keras.WandbCallback`](../../ref/python/integrations/keras/wandbcallback.md) for full details. +Check out [the reference documentation for the `keras.WandbCallback`](../../ref/python/integrations/keras/wandbcallback/) for full details. The `WandbCallback` diff --git a/content/guides/integrations/pytorch.md b/content/guides/integrations/pytorch.md index 5dcc51fb0..133feb2e9 100644 --- a/content/guides/integrations/pytorch.md +++ b/content/guides/integrations/pytorch.md @@ -18,7 +18,7 @@ You can also see our [example repo](https://github.com/wandb/examples) for scrip ## Log gradients with `wandb.watch` -To automatically log gradients, you can call [`wandb.watch`](../../ref/python/watch.md) and pass in your PyTorch model. +To automatically log gradients, you can call [`wandb.watch`](../../ref/python/watch/) and pass in your PyTorch model.
```python import wandb @@ -40,7 +40,7 @@ for batch_idx, (data, target) in enumerate(train_loader): wandb.log({"loss": loss}) ``` -If you need to track multiple models in the same script, you can call `wandb.watch` on each model separately. Reference documentation for this function is [here](../../ref/python/watch.md). +If you need to track multiple models in the same script, you can call `wandb.watch` on each model separately. Reference documentation for this function is [here](../../ref/python/watch/). {{% alert color="secondary" %}} Gradients, metrics, and the graph won't be logged until `wandb.log` is called after a forward _and_ backward pass. @@ -48,14 +48,14 @@ Gradients, metrics, and the graph won't be logged until `wandb.log` is called af ## Log images and media -You can pass PyTorch `Tensors` with image data into [`wandb.Image`](../../ref/python/data-types/image.md) and utilities from [`torchvision`](https://pytorch.org/vision/stable/index.html) will be used to convert them to images automatically: +You can pass PyTorch `Tensors` with image data into [`wandb.Image`](../../ref/python/data-types/image/) and utilities from [`torchvision`](https://pytorch.org/vision/stable/index.html) will be used to convert them to images automatically: ```python images_t = ... # generate or load images as PyTorch Tensors wandb.log({"examples": [wandb.Image(im) for im in images_t]}) ``` -For more on logging rich media to W&B in PyTorch and other frameworks, check out our [media logging guide](../track/log/media.md). +For more on logging rich media to W&B in PyTorch and other frameworks, check out our [media logging guide](../track/log/media/). If you also want to include information alongside media, like your model's predictions or derived metrics, use a `wandb.Table`. @@ -72,7 +72,7 @@ wandb.log({"mnist_predictions": my_table}) {{< img src="/images/integrations/pytorch_example_table.png" alt="The code above generates a table like this one. This model's looking good!" >}} -For more on logging and visualizing datasets and models, check out our [guide to W&B Tables](../tables/intro.md). +For more on logging and visualizing datasets and models, check out our [guide to W&B Tables](../tables/intro/). ## Profile PyTorch code
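As a rough sketch of the Kineto/TensorBoard profiling workflow that the patched page describes — not the page's canonical snippet — the trace files written by `torch.profiler` can be uploaded to W&B as an artifact. The `model`, `train_loader`, and project name below are hypothetical placeholders chosen only to make the sketch self-contained.

```python
import torch
from torch.profiler import ProfilerActivity, profile, schedule, tensorboard_trace_handler

import wandb

# Hypothetical placeholders -- substitute your own model, data, and training step.
model = torch.nn.Linear(10, 2)
train_loader = [(torch.randn(8, 10), torch.randint(0, 2, (8,))) for _ in range(8)]

run = wandb.init(project="pytorch-profiling")  # assumed project name
profile_dir = "path/to/run/tbprofile/"         # same placeholder path as above

with profile(
    activities=[ProfilerActivity.CPU],
    schedule=schedule(wait=1, warmup=1, active=3, repeat=1),
    on_trace_ready=tensorboard_trace_handler(profile_dir),
) as prof:
    for data, target in train_loader:
        loss = torch.nn.functional.cross_entropy(model(data), target)
        loss.backward()
        prof.step()  # advance the profiler schedule each iteration

# Upload the generated trace files so they can be inspected alongside the run.
trace_art = wandb.Artifact("trace", type="profile")
trace_art.add_dir(profile_dir)
run.log_artifact(trace_art)
run.finish()
```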