diff --git a/articles/ai-services/.openpublishing.redirection.ai-services.json b/articles/ai-services/.openpublishing.redirection.ai-services.json index 92e5cbf77d7..b3c45cdb501 100644 --- a/articles/ai-services/.openpublishing.redirection.ai-services.json +++ b/articles/ai-services/.openpublishing.redirection.ai-services.json @@ -315,6 +315,11 @@ "redirect_url": "/azure/ai-services/computer-vision/sdk/install-sdk", "redirect_document_id": true }, + { + "source_path_from_root": "/articles/ai-services/computer-vision/how-to/migrate-from-custom-vision.md", + "redirect_url": "/azure/ai-services/computer-vision/how-to/model-customization", + "redirect_document_id": false + }, { "source_path_from_root": "/articles/ai-services/document-intelligence/concept-document-intelligence-studio.md", "redirect_url": "/azure/ai-services/document-intelligence/studio-overview", diff --git a/articles/ai-services/computer-vision/concept-model-customization.md b/articles/ai-services/computer-vision/concept-model-customization.md index 51da5d0a25b..d932e480f80 100644 --- a/articles/ai-services/computer-vision/concept-model-customization.md +++ b/articles/ai-services/computer-vision/concept-model-customization.md @@ -14,6 +14,8 @@ ms.author: pafarley # Model customization (version 4.0 preview) +[!INCLUDE [model-customization-deprecation](includes/model-customization-deprecation.md)] + Model customization lets you train a specialized Image Analysis model for your own use case. Custom models can do either image classification (tags apply to the whole image) or object detection (tags apply to specific areas of the image). Once your custom model is created and trained, it belongs to your Vision resource, and you can call it using the [Analyze Image API](./how-to/call-analyze-image-40.md). Implement model customization quickly and easily by following a quickstart: diff --git a/articles/ai-services/computer-vision/concept-shelf-analysis.md b/articles/ai-services/computer-vision/concept-shelf-analysis.md index 3480bbf9c0d..82921784a91 100644 --- a/articles/ai-services/computer-vision/concept-shelf-analysis.md +++ b/articles/ai-services/computer-vision/concept-shelf-analysis.md @@ -15,6 +15,8 @@ ms.custom: build-2023, build-2023-dataai # Product Recognition (version 4.0 preview) +[!INCLUDE [model-customization-deprecation](includes/model-customization-deprecation.md)] + The Product Recognition APIs let you analyze photos of shelves in a retail store. You can detect the presence of products and get their bounding box coordinates. Use it in combination with model customization to train a model to identify your specific products. You can also compare Product Recognition results to your store's planogram document. Try out the capabilities of Product Recognition quickly and easily in your browser using Vision Studio. diff --git a/articles/ai-services/computer-vision/how-to/coco-verification.md b/articles/ai-services/computer-vision/how-to/coco-verification.md index 2f179562fba..b6d1085c454 100644 --- a/articles/ai-services/computer-vision/how-to/coco-verification.md +++ b/articles/ai-services/computer-vision/how-to/coco-verification.md @@ -16,6 +16,8 @@ ms.author: pafarley +[!INCLUDE [model-customization-deprecation](../includes/model-customization-deprecation.md)] + > [!TIP] > This article is based on the Jupyter notebook _check_coco_annotation.ipynb_. **[Open in GitHub](https://github.com/Azure-Samples/cognitive-service-vision-model-customization-python-samples/blob/main/docs/check_coco_annotation.ipynb)**. 
diff --git a/articles/ai-services/computer-vision/how-to/migrate-from-custom-vision.md b/articles/ai-services/computer-vision/how-to/migrate-from-custom-vision.md deleted file mode 100644 index 36c5743932e..00000000000 --- a/articles/ai-services/computer-vision/how-to/migrate-from-custom-vision.md +++ /dev/null @@ -1,330 +0,0 @@ ---- -title: "Migrate a Custom Vision project to Image Analysis 4.0" -titleSuffix: Azure AI services -description: Learn how to generate an annotation file from an old Custom Vision project, so you can train a custom Image Analysis model on previous training data. -#services: cognitive-services -author: PatrickFarley -manager: nitinme -ms.service: azure-ai-vision -ms.custom: devx-track-python -ms.topic: how-to -ms.date: 01/19/2024 -ms.author: pafarley ---- - -# Migrate a Custom Vision project to Image Analysis 4.0 preview - -You can migrate an existing Azure AI Custom Vision project to the new Image Analysis 4.0 system. [Custom Vision](../../custom-vision-service/overview.md) is a model customization service that existed before Image Analysis 4.0. - -This guide uses Python code to take all of the training data from an existing Custom Vision project (images and their label data) and convert it to a COCO file. You can then import the COCO file into Vision Studio to train a custom Image Analysis model. See [Create and train a custom model](model-customization.md) and go to the section on importing a COCO file—you can follow the guide from there to the end. - -## Prerequisites - -* An Azure subscription - [Create one for free](https://azure.microsoft.com/free/cognitive-services/) -* [Python 3.x](https://www.python.org/) -* A Custom Vision resource where an existing project is stored. -* An Azure Storage resource - [Create one](/azure/storage/common/storage-account-create?tabs=azure-portal) - -#### [Jupyter Notebook](#tab/notebook) - -This notebook exports your image data and annotations from the workspace of a Custom Vision Service project to your own COCO file in a storage blob, ready for training with Image Analysis Model Customization. You can run the code in this section using a custom Python script, or you can download and run the [Notebook](https://github.com/Azure-Samples/cognitive-service-vision-model-customization-python-samples/blob/main/docs/export_cvs_data_to_blob_storage.ipynb) on a compatible platform. - - - -> [!TIP] -> Contents of _export_cvs_data_to_blob_storage.ipynb_. **[Open in GitHub](https://github.com/Azure-Samples/cognitive-service-vision-model-customization-python-samples/blob/main/docs/export_cvs_data_to_blob_storage.ipynb)**. - - -## Install the python samples package - -Run the following command to install the required python samples package: - -```python -pip install cognitive-service-vision-model-customization-python-samples -``` - -## Authentication - -Next, provide the credentials of your Custom Vision project and your blob storage container. - -You need to fill in the correct parameter values. You need the following information: - -- The name of the Azure Storage account you want to use with your new custom model project -- The key for that storage account -- The name of the container you want to use in that storage account -- Your Custom Vision training key -- Your Custom Vision endpoint URL -- The project ID of your Custom Vision project - -The Azure Storage credentials can be found on that resource's page in the Azure portal. 
The Custom Vision credentials can be found in the Custom Vision project settings page on the [Custom Vision web portal](https://customvision.ai). - - -```python -azure_storage_account_name = '' -azure_storage_account_key = '' -azure_storage_container_name = '' - -custom_vision_training_key = '' -custom_vision_endpoint = '' -custom_vision_project_id = '' -``` - -## Run the migration - -When you run the migration code, the Custom Vision training images will be saved to a `{project_name}_{project_id}/images` folder in your specified Azure blob storage container, and the COCO file will be saved to `{project_name}_{project_id}/train.json` in that same container. Both tagged and untagged images will be exported, including any **Negative**-tagged images. - -> [!IMPORTANT] -> Image Analysis Model Customization does not currently support **multilabel** classification training, buy you can still export data from a Custom Vision multilabel classification project. - -```python -from cognitive_service_vision_model_customization_python_samples import export_data -import logging -logging.getLogger().setLevel(logging.INFO) -logging.getLogger('azure.core.pipeline.policies.http_logging_policy').setLevel(logging.WARNING) - -n_process = 8 -export_data(azure_storage_account_name, azure_storage_account_key, azure_storage_container_name, custom_vision_endpoint, custom_vision_training_key, custom_vision_project_id, n_process) -``` - - - -#### [Python](#tab/python) - -## Install libraries - -This script requires certain Python libraries. Install them in your project directory with the following command. - -```bash -pip install azure-storage-blob azure-cognitiveservices-vision-customvision cffi -``` - -## Prepare the migration script - -Create a new Python file—_export-cvs-data-to-coco.py_, for example. Then open it in a text editor and paste in the following contents. - -```python -from typing import List, Union -from azure.cognitiveservices.vision.customvision.training import CustomVisionTrainingClient -from azure.cognitiveservices.vision.customvision.training.models import Image, ImageTag, ImageRegion, Project -from msrest.authentication import ApiKeyCredentials -import argparse -import time -import json -import pathlib -import logging -from azure.storage.blob import ContainerClient, BlobClient -import multiprocessing - - -N_PROCESS = 8 - - -def get_file_name(sub_folder, image_id): - return f'{sub_folder}/images/{image_id}' - - -def blob_copy(params): - container_client, sub_folder, image = params - blob_client: BlobClient = container_client.get_blob_client(get_file_name(sub_folder, image.id)) - blob_client.start_copy_from_url(image.original_image_uri) - return blob_client - - -def wait_for_completion(blobs, time_out=5): - pendings = blobs - time_break = 0.5 - while pendings and time_out > 0: - pendings = [b for b in pendings if b.get_blob_properties().copy.status == 'pending'] - if pendings: - logging.info(f'{len(pendings)} pending copies. 
wait for {time_break} seconds.') - time.sleep(time_break) - time_out -= time_break - - -def copy_images_with_retry(pool, container_client, sub_folder, images: List, batch_id, n_retries=5): - retry_limit = n_retries - urls = [] - while images and n_retries > 0: - params = [(container_client, sub_folder, image) for image in images] - img_and_blobs = zip(images, pool.map(blob_copy, params)) - logging.info(f'Batch {batch_id}: Copied {len(images)} images.') - urls = urls or [b.url for _, b in img_and_blobs] - - wait_for_completion([b for _, b in img_and_blobs]) - images = [image for image, b in img_and_blobs if b.get_blob_properties().copy.status in ['failed', 'aborted']] - n_retries -= 1 - if images: - time.sleep(0.5 * (retry_limit - n_retries)) - - if images: - raise RuntimeError(f'Copy failed for some images in batch {batch_id}') - - return urls - - -class CocoOperator: - def __init__(self): - self._images = [] - self._annotations = [] - self._categories = [] - self._category_name_to_id = {} - - @property - def num_imges(self): - return len(self._images) - - @property - def num_categories(self): - return len(self._categories) - - @property - def num_annotations(self): - return len(self._annotations) - - def add_image(self, width, height, coco_url, file_name): - self._images.append( - { - 'id': len(self._images) + 1, - 'width': width, - 'height': height, - 'coco_url': coco_url, - 'file_name': file_name, - }) - - def add_annotation(self, image_id, category_id_or_name: Union[int, str], bbox: List[float] = None): - self._annotations.append({ - 'id': len(self._annotations) + 1, - 'image_id': image_id, - 'category_id': category_id_or_name if isinstance(category_id_or_name, int) else self._category_name_to_id[category_id_or_name]}) - - if bbox: - self._annotations[-1]['bbox'] = bbox - - def add_category(self, name): - self._categories.append({ - 'id': len(self._categories) + 1, - 'name': name - }) - - self._category_name_to_id[name] = len(self._categories) - - def to_json(self) -> str: - coco_dict = { - 'images': self._images, - 'categories': self._categories, - 'annotations': self._annotations, - } - - return json.dumps(coco_dict, ensure_ascii=False, indent=2) - - -def log_project_info(training_client: CustomVisionTrainingClient, project_id): - project: Project = training_client.get_project(project_id) - proj_settings = project.settings - project.settings = None - logging.info(f'Project info dict: {project.__dict__}') - logging.info(f'Project setting dict: {proj_settings.__dict__}') - logging.info(f'Project info: n tags: {len(training_client.get_tags(project_id))},' - f' n images: {training_client.get_image_count(project_id)} (tagged: {training_client.get_tagged_image_count(project_id)},' - f' untagged: {training_client.get_untagged_image_count(project_id)})') - - -def export_data(azure_storage_account_name, azure_storage_key, azure_storage_container_name, custom_vision_endpoint, custom_vision_training_key, custom_vision_project_id, n_process): - azure_storage_account_url = f"https://{azure_storage_account_name}.blob.core.windows.net" - container_client = ContainerClient(azure_storage_account_url, azure_storage_container_name, credential=azure_storage_key) - credentials = ApiKeyCredentials(in_headers={"Training-key": custom_vision_training_key}) - trainer = CustomVisionTrainingClient(custom_vision_endpoint, credentials) - - coco_operator = CocoOperator() - for tag in trainer.get_tags(custom_vision_project_id): - coco_operator.add_category(tag.name) - - skip = 0 - batch_id = 0 - project_name = 
trainer.get_project(custom_vision_project_id).name - log_project_info(trainer, custom_vision_project_id) - sub_folder = f'{project_name}_{custom_vision_project_id}' - with multiprocessing.Pool(n_process) as pool: - while True: - images: List[Image] = trainer.get_images(project_id=custom_vision_project_id, skip=skip) - if not images: - break - urls = copy_images_with_retry(pool, container_client, sub_folder, images, batch_id) - for i, image in enumerate(images): - coco_operator.add_image(image.width, image.height, urls[i], get_file_name(sub_folder, image.id)) - image_tags: List[ImageTag] = image.tags - image_regions: List[ImageRegion] = image.regions - if image_regions: - for img_region in image_regions: - coco_operator.add_annotation(coco_operator.num_imges, img_region.tag_name, [img_region.left, img_region.top, img_region.width, img_region.height]) - elif image_tags: - for img_tag in image_tags: - coco_operator.add_annotation(coco_operator.num_imges, img_tag.tag_name) - - skip += len(images) - batch_id += 1 - - coco_json_file_name = 'train.json' - local_json = pathlib.Path(coco_json_file_name) - local_json.write_text(coco_operator.to_json(), encoding='utf-8') - coco_json_blob_client: BlobClient = container_client.get_blob_client(f'{sub_folder}/{coco_json_file_name}') - if coco_json_blob_client.exists(): - logging.warning(f'coco json file exists in blob. Skipped uploading. If existing one is outdated, please manually upload your new coco json from ./train.json to {coco_json_blob_client.url}') - else: - coco_json_blob_client.upload_blob(local_json.read_bytes()) - logging.info(f'coco file train.json uploaded to {coco_json_blob_client.url}.') - - -def parse_args(): - parser = argparse.ArgumentParser('Export Custom Vision workspace data to blob storage.') - - parser.add_argument('--custom_vision_project_id', '-p', type=str, required=True, help='Custom Vision Project Id.') - parser.add_argument('--custom_vision_training_key', '-k', type=str, required=True, help='Custom Vision training key.') - parser.add_argument('--custom_vision_endpoint', '-e', type=str, required=True, help='Custom Vision endpoint.') - - parser.add_argument('--azure_storage_account_name', '-a', type=str, required=True, help='Azure storage account name.') - parser.add_argument('--azure_storage_account_key', '-t', type=str, required=True, help='Azure storage account key.') - parser.add_argument('--azure_storage_container_name', '-c', type=str, required=True, help='Azure storage container name.') - - parser.add_argument('--n_process', '-n', type=int, required=False, default=8, help='Number of processes used in exporting data.') - - return parser.parse_args() - - -def main(): - args = parse_args() - - export_data(args.azure_storage_account_name, args.azure_storage_account_key, args.azure_storage_container_name, - args.custom_vision_endpoint, args.custom_vision_training_key, args.custom_vision_project_id, args.n_process) - - -if __name__ == '__main__': - main() -``` - -## Run the script - -Run the script using the `python` command. - -```console -python export-cvs-data-to-coco.py -p -k -e -a -t -c -``` - -You need to fill in the correct parameter values. 
You need the following information: - -- The project ID of your Custom Vision project -- Your Custom Vision training key -- Your Custom Vision endpoint URL -- The name of the Azure Storage account you want to use with your new custom model project -- The key for that storage account -- The name of the container you want to use in that storage account - ---- - -## Use COCO file in a new project - -The script generates a COCO file and uploads it to the blob storage location you specified. You can now import it to your Model Customization project. See [Create and train a custom model](model-customization.md) and go to the section on selecting/importing a COCO file—you can follow the guide from there to the end. - -## Next steps - -* [Create and train a custom model](model-customization.md) diff --git a/articles/ai-services/computer-vision/how-to/model-customization.md b/articles/ai-services/computer-vision/how-to/model-customization.md index dbc914c2c79..7dfdca72104 100644 --- a/articles/ai-services/computer-vision/how-to/model-customization.md +++ b/articles/ai-services/computer-vision/how-to/model-customization.md @@ -14,6 +14,8 @@ ms.custom: devx-track-python # Create a custom Image Analysis model (preview) +[!INCLUDE [model-customization-deprecation](../includes/model-customization-deprecation.md)] + Image Analysis 4.0 allows you to train a custom model using your own training images. By manually labeling your images, you can train a model to apply custom tags to the images (image classification) or detect custom objects (object detection). Image Analysis 4.0 models are especially effective at few-shot learning, so you can get accurate models with less training data. This guide shows you how to create and train a custom image classification model. The few differences between training an image classification model and object detection model are noted. @@ -350,7 +352,7 @@ The prediction results appear in the right column. ## Prepare training data -The first thing you need to do is create a COCO file from your training data. You can create a COCO file by converting an old Custom Vision project using the [migration script](migrate-from-custom-vision.md). Or, you can create a COCO file from scratch using some other labeling tool. See the following specification: +The first thing you need to do is create a COCO file from your training data. See the following specification: [!INCLUDE [coco-files](../includes/coco-files.md)] diff --git a/articles/ai-services/computer-vision/how-to/shelf-analyze.md b/articles/ai-services/computer-vision/how-to/shelf-analyze.md index eb762d7766d..25a009d7abd 100644 --- a/articles/ai-services/computer-vision/how-to/shelf-analyze.md +++ b/articles/ai-services/computer-vision/how-to/shelf-analyze.md @@ -13,6 +13,8 @@ ms.custom: build-2023, build-2023-dataai # Shelf Product Recognition (preview): Analyze shelf images using pretrained model +[!INCLUDE [model-customization-deprecation](../includes/model-customization-deprecation.md)] + The fastest way to start using Product Recognition is to use the built-in pretrained AI models. With the Product Recognition API, you can upload a shelf image and get the locations of products and gaps. 
:::image type="content" source="../media/shelf/shelf-analysis-pretrained.png" alt-text="Photo of a retail shelf with products and gaps highlighted with rectangles."::: diff --git a/articles/ai-services/computer-vision/how-to/shelf-model-customization.md b/articles/ai-services/computer-vision/how-to/shelf-model-customization.md index 82f0badb785..27c0f33835a 100644 --- a/articles/ai-services/computer-vision/how-to/shelf-model-customization.md +++ b/articles/ai-services/computer-vision/how-to/shelf-model-customization.md @@ -14,6 +14,8 @@ ms.author: pafarley # Shelf product recognition - custom model (preview) +[!INCLUDE [model-customization-deprecation](../includes/model-customization-deprecation.md)] + You can train a custom model to recognize specific retail products for use in a Product Recognition scenario. The out-of-box [Analyze](shelf-analyze.md) operation doesn't differentiate between products, but you can build this capability into your app through custom labeling and training. :::image type="content" source="../media/shelf/shelf-analysis-custom.png" alt-text="Photo of a retail shelf with product names and gaps highlighted with rectangles.":::  diff --git a/articles/ai-services/computer-vision/how-to/shelf-modify-images.md b/articles/ai-services/computer-vision/how-to/shelf-modify-images.md index a8dfda9988e..23392bd3194 100644 --- a/articles/ai-services/computer-vision/how-to/shelf-modify-images.md +++ b/articles/ai-services/computer-vision/how-to/shelf-modify-images.md @@ -13,6 +13,8 @@ ms.custom: build-2023 # Shelf image composition (preview) +[!INCLUDE [model-customization-deprecation](../includes/model-customization-deprecation.md)] + Part of the Product Recognition workflow involves fixing and modifying the input images so the service can perform correctly. This guide shows you how to use the **Stitching API** to combine multiple images of the same physical shelf: this gives you a composite image of the entire retail shelf, even if it's only viewed partially by multiple different cameras. diff --git a/articles/ai-services/computer-vision/includes/model-customization-deprecation.md b/articles/ai-services/computer-vision/includes/model-customization-deprecation.md new file mode 100644 index 00000000000..32326f442b1 --- /dev/null +++ b/articles/ai-services/computer-vision/includes/model-customization-deprecation.md @@ -0,0 +1,16 @@ +--- +title: Model customization deprecation notice +titleSuffix: Azure AI services +#services: cognitive-services +author: PatrickFarley +manager: nitinme +ms.service: azure-ai-vision +ms.topic: include +ms.date: 09/10/2024 +ms.author: pafarley +--- + +> [!IMPORTANT] +> This feature is now deprecated. On January 10, 2025, the Azure AI Vision Product Recognition and model customization features will be retired: after this date, API calls to these services will fail. +> +> To keep your models running smoothly, transition to [Azure AI Custom Vision](/azure/ai-services/Custom-Vision-Service/overview), which is now generally available. Custom Vision offers similar functionality to these retiring features.
\ No newline at end of file diff --git a/articles/ai-services/computer-vision/overview-image-analysis.md b/articles/ai-services/computer-vision/overview-image-analysis.md index 004dacfd70f..a4d917b92c9 100644 --- a/articles/ai-services/computer-vision/overview-image-analysis.md +++ b/articles/ai-services/computer-vision/overview-image-analysis.md @@ -46,7 +46,7 @@ You can analyze images to provide insights about their visual features and chara | Name | Description | Concept page | |---|---|---| -|**Model customization** (v4.0 preview only)|You can create and train custom models to do image classification or object detection. Bring your own images, label them with custom tags, and Image Analysis trains a model customized for your use case.|[Model customization](./concept-model-customization.md)| +|**Model customization** (v4.0 preview only) (deprecated)|You can create and train custom models to do image classification or object detection. Bring your own images, label them with custom tags, and Image Analysis trains a model customized for your use case.|[Model customization](./concept-model-customization.md)| |**Read text from images** (v4.0 only)| Version 4.0 preview of Image Analysis offers the ability to extract readable text from images. Compared with the async Computer Vision 3.2 Read API, the new version offers the familiar Read OCR engine in a unified performance-enhanced synchronous API that makes it easy to get OCR along with other insights in a single API call. |[OCR for images](concept-ocr.md)| |**Detect people in images** (v4.0 only)|Version 4.0 of Image Analysis offers the ability to detect people appearing in images. The bounding box coordinates of each detected person are returned, along with a confidence score. |[People detection](concept-people-detection.md)| |**Generate image captions** | Generate a caption of an image in human-readable language, using complete sentences. Computer Vision's algorithms generate captions based on the objects identified in the image.

The version 4.0 image captioning model is a more advanced implementation and works with a wider range of input images. It's only available in certain geographic regions. See [Region availability](#region-availability).

Version 4.0 also lets you use dense captioning, which generates detailed captions for individual objects that are found in the image. The API returns the bounding box coordinates (in pixels) of each object found in the image, plus a caption. You can use this functionality to generate descriptions of separate parts of an image.

:::image type="content" source="Images/description.png" alt-text="Photo of cows with a simple description on the right.":::| [Generate image captions (v3.2)](concept-describing-images.md)
[(v4.0)](concept-describe-images-40.md)| @@ -64,7 +64,9 @@ You can analyze images to provide insights about their visual features and chara > [!TIP] > You can leverage the Read text and Object detection features of Image Analysis through the [Azure OpenAI](/azure/ai-services/openai/overview) service. The **GPT-4 Turbo with Vision** model lets you chat with an AI assistant that can analyze the images you share, and the Vision Enhancement option uses Image Analysis to give the AI assistant more details about the image (readable text and object locations). For more information, see the [GPT-4 Turbo with Vision quickstart](/azure/ai-services/openai/gpt-v-quickstart). -## Product Recognition (v4.0 preview only) +## Product Recognition (v4.0 preview only) (deprecated) + +[!INCLUDE [model-customization-deprecation](includes/model-customization-deprecation.md)] The Product Recognition APIs let you analyze photos of shelves in a retail store. You can detect the presence or absence of products and get their bounding box coordinates. Use it in combination with model customization to train a model to identify your specific products. You can also compare Product Recognition results to your store's planogram document. diff --git a/articles/ai-services/computer-vision/toc.yml b/articles/ai-services/computer-vision/toc.yml index 2c9ff6072d6..664bd5a6c85 100644 --- a/articles/ai-services/computer-vision/toc.yml +++ b/articles/ai-services/computer-vision/toc.yml @@ -156,8 +156,6 @@ items: items: - name: Create a custom model (preview) href: how-to/model-customization.md - - name: Migrate a Custom Vision project to Image Analysis (preview) - href: how-to/migrate-from-custom-vision.md - name: Verify a custom model COCO file (preview) href: how-to/coco-verification.md - name: Product Recognition (preview) diff --git a/articles/ai-services/computer-vision/whats-new.md b/articles/ai-services/computer-vision/whats-new.md index ef204f42aa6..2dd59077b78 100644 --- a/articles/ai-services/computer-vision/whats-new.md +++ b/articles/ai-services/computer-vision/whats-new.md @@ -18,6 +18,14 @@ ms.author: pafarley Learn what's new in Azure AI Vision. Check this page to stay up to date with new features, enhancements, fixes, and documentation updates. +## September 2024 + +### Model customization and Product Recognition deprecation + +On January 10, 2025, the Azure AI Vision Product Recognition and model customization features will be retired: after this date, API calls to these services will fail. + +To keep your models running smoothly, transition to [Azure AI Custom Vision](/azure/ai-services/Custom-Vision-Service/overview), which is now generally available. Custom Vision offers similar functionality to these retiring features. + ## August 2024 ### New detectable Face attributes diff --git a/articles/ai-services/disable-local-auth.md b/articles/ai-services/disable-local-auth.md index d8885cab60a..99719cb8072 100644 --- a/articles/ai-services/disable-local-auth.md +++ b/articles/ai-services/disable-local-auth.md @@ -15,7 +15,7 @@ ms.author: pafarley Azure AI Services provides Microsoft Entra authentication support for all resources. This feature provides you with seamless integration when you require centralized control and management of identities and resource credentials. Organizations can disable local authentication methods and enforce Microsoft Entra authentication instead. 
-You can disable local authentication using the Azure policy [Cognitive Services accounts should have local authentication methods disabled](https://ms.portal.azure.com/#view/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F71ef260a-8f18-47b7-abcb-62d0673d94dc). Set it at the subscription level or resource group level to enforce the policy for a group of services. +You can disable local authentication using the Azure policy **Azure AI Services resources should have key access disabled (disable local authentication)**. Set it at the subscription level or resource group level to enforce the policy for a group of services. If you're creating an account using Bicep / ARM template, you can set the property `disableLocalAuth` to `true` to disable local authentication. For more information, see [Microsoft.CognitiveServices accounts - Bicep, ARM template, & Terraform](/azure/templates/microsoft.cognitiveservices/accounts) diff --git a/articles/ai-services/document-intelligence/concept-custom-generative.md b/articles/ai-services/document-intelligence/concept-custom-generative.md index fede34e195d..18ca8d246b3 100644 --- a/articles/ai-services/document-intelligence/concept-custom-generative.md +++ b/articles/ai-services/document-intelligence/concept-custom-generative.md @@ -18,11 +18,7 @@ monikerRange: '>=doc-intel-4.0.0' > * Document Intelligence public preview releases provide early access to features that are in active development. Features, approaches, and processes may change, prior to General Availability (GA), based on user feedback. > * The public preview version of Document Intelligence client libraries default to REST API version [**2024-07-31-preview**](/rest/api/aiservices/operation-groups?view=rest-aiservices-2024-07-31-preview&preserve-view=true) and is currently only available in the following Azure regions. > * **East US** -> * **West US2** -> * **West Europe** > * **North Central US** -> -> * **The new custom generative model in AI Studio is only available in the North Central US region**: The document field extraction (custom generative AI) model utilizes generative AI to extract user-specified fields from documents across a wide variety of visual templates. The custom generative AI model combines the power of document understanding with Large Language Models (LLMs) and the rigor and schema from custom extraction capabilities to create a model with high accuracy in minutes. With this generative model type, you can start with a single document and go through the schema addition and model creation process with minimal labeling. The custom generative model allows developers and enterprises to easily automate data extraction workflows with greater accuracy and speed for any type of document. The custom generative AI model excels in extracting simple fields from documents without labeled samples. However, providing a few labeled samples improves the extraction accuracy for complex fields and user-defined fields like tables. You can use the [REST API](/rest/api/aiservices/operation-groups?view=rest-aiservices-2024-07-31-preview&preserve-view=true) or client libraries to submit a document for analysis with a model build and use the custom generative process. @@ -76,7 +72,7 @@ Field extraction custom generative model `2024-07-31-preview` version supports t ## Region support -Field extraction custom generative model `2024-07-31-preview` version is only available in `North Central US`.   
+Field extraction custom generative model `2024-07-31-preview` version is only available in `East US` and `North Central US`.   ## Input requirements diff --git a/articles/ai-services/document-intelligence/how-to-guides/build-train-custom-generative-model.md b/articles/ai-services/document-intelligence/how-to-guides/build-train-custom-generative-model.md index 9d803a582ed..39a21234662 100644 --- a/articles/ai-services/document-intelligence/how-to-guides/build-train-custom-generative-model.md +++ b/articles/ai-services/document-intelligence/how-to-guides/build-train-custom-generative-model.md @@ -65,7 +65,7 @@ Once you have your Azure blob storage containers, upload your training data to y ## Azure AI Studio -1. Navigate to the [Azure AI Studio](https://ai.azure.com/?tid=72f988bf-86f1-41af-91ab-2d7cd011db47). The first time you use the Studio, you need to [initialize your subscription and create a hub](../../../ai-studio/how-to/create-azure-ai-resource.md) before creating a project. Custom generative models are only available in North Central US in preview. Ensure your resource group is set to North Central US during hub creation. +1. Navigate to the [Azure AI Studio](https://ai.azure.com/?tid=72f988bf-86f1-41af-91ab-2d7cd011db47). The first time you use the Studio, you need to [initialize your subscription and create a hub](../../../ai-studio/how-to/create-azure-ai-resource.md) before creating a project. Custom generative models are only available in East US and North Central US in preview. Ensure your resource group is set to East US or North Central US during hub creation. 1. Select the Vision + Document tile. diff --git a/articles/ai-studio/.openpublishing.redirection.ai-studio.json b/articles/ai-studio/.openpublishing.redirection.ai-studio.json index d95b9a27ea5..7810bd05654 100644 --- a/articles/ai-studio/.openpublishing.redirection.ai-studio.json +++ b/articles/ai-studio/.openpublishing.redirection.ai-studio.json @@ -42,12 +42,12 @@ }, { "source_path_from_root": "/articles/ai-studio/tutorials/deploy-copilot-sdk.md", - "redirect_url": "/azure/ai-studio/tutorials/copilot-sdk-build-rag", + "redirect_url": "/azure/ai-studio/tutorials/copilot-sdk-create-resources", "redirect_document_id": false }, { "source_path_from_root": "/articles/ai-studio/tutorials/deploy-copilot-ai-studio.md", - "redirect_url": "/azure/ai-studio/tutorials/copilot-sdk-build-rag", + "redirect_url": "/azure/ai-studio/tutorials/copilot-sdk-create-resources", "redirect_document_id": false }, { diff --git a/articles/ai-studio/index.yml b/articles/ai-studio/index.yml index 9279c0e670a..5d363a9363a 100644 --- a/articles/ai-studio/index.yml +++ b/articles/ai-studio/index.yml @@ -67,8 +67,8 @@ landingContent: url: how-to/develop/sdk-overview.md - text: Work with AI Studio projects in VS Code url: how-to/develop/vscode.md - - text: Build your copilot using the prompt flow SDK - url: tutorials/copilot-sdk-build-rag.md + - text: Build a custom chat app with the prompt flow SDK + url: tutorials/copilot-sdk-create-resources.md - linkListType: concept links: - text: Connections for flows and indexing diff --git a/articles/ai-studio/quickstarts/get-started-code.md b/articles/ai-studio/quickstarts/get-started-code.md index 3943ebcfecc..7caea4bf73c 100644 --- a/articles/ai-studio/quickstarts/get-started-code.md +++ b/articles/ai-studio/quickstarts/get-started-code.md @@ -277,4 +277,4 @@ For more information on how to use prompt flow evaluators, including how to make ## Next step > [!div class="nextstepaction"] -> [Add data and use 
retrieval augmented generation (RAG) to build a copilot](../tutorials/copilot-sdk-build-rag.md) +> [Add data and use retrieval augmented generation (RAG) to build a custom chat app](../tutorials/copilot-sdk-create-resources.md) diff --git a/articles/ai-studio/tutorials/deploy-chat-web-app.md b/articles/ai-studio/tutorials/deploy-chat-web-app.md index 7873b0843df..373c091bb9f 100644 --- a/articles/ai-studio/tutorials/deploy-chat-web-app.md +++ b/articles/ai-studio/tutorials/deploy-chat-web-app.md @@ -157,4 +157,4 @@ If you delete the Cosmos DB resource but keep the chat history option enabled on ## Related content - [Get started building a chat app using the prompt flow SDK](../quickstarts/get-started-code.md) -- [Build your own copilot with the prompt flow SDK.](./copilot-sdk-build-rag.md). +- [Build a custom chat app with the prompt flow SDK](./copilot-sdk-create-resources.md) diff --git a/articles/machine-learning/how-to-manage-optimize-cost.md b/articles/machine-learning/how-to-manage-optimize-cost.md index 23a1b1e23a9..aa9b04a3a95 100644 --- a/articles/machine-learning/how-to-manage-optimize-cost.md +++ b/articles/machine-learning/how-to-manage-optimize-cost.md @@ -1,7 +1,7 @@ --- title: Manage and optimize costs titleSuffix: Azure Machine Learning -description: Learn tips to optimize your cost when building machine learning models in Azure Machine Learning +description: Use these tips to optimize your cost when you build machine learning models in Azure Machine Learning. ms.reviewer: None author: ssalgadodev ms.author: ssalgado @@ -9,15 +9,17 @@ ms.custom: subject-cost-optimization ms.service: azure-machine-learning ms.subservice: core ms.topic: how-to -ms.date: 08/06/2024 +ms.date: 09/05/2024 +#customer intent: As a data scientist or engineer, I want to optimize my costs when training machine learning models. --- # Manage and optimize Azure Machine Learning costs -This article shows you how to manage and optimize costs when training and deploying machine learning models to Azure Machine Learning. +This article shows you how to manage and optimize costs when you train and deploy machine learning models to Azure Machine Learning. Use the following tips to help you manage and optimize your compute resource costs. +- Use the Azure Machine Learning compute cluster - Configure your training clusters for autoscaling - Configure your managed online endpoints for autoscaling - Set quotas on your subscription and workspaces @@ -25,104 +27,123 @@ Use the following tips to help you manage and optimize your compute resource cos - Use low-priority virtual machines (VM) - Schedule compute instances to shut down and start up automatically - Use an Azure Reserved VM Instance -- Train locally - Parallelize training - Set data retention and deletion policies - Deploy resources to the same region -- Delete failed deployments if computes are created for them +- Delete failed deployments -For information on planning and monitoring costs, see the [plan to manage costs for Azure Machine Learning](concept-plan-manage-cost.md) guide. +For information on planning and monitoring costs, see [Plan to manage costs for Azure Machine Learning](concept-plan-manage-cost.md). > [!IMPORTANT] > Items marked (preview) in this article are currently in public preview. -> The preview version is provided without a service level agreement, and it's not recommended for production workloads. Certain features might not be supported or might have constrained capabilities. +> The preview version is provided without a service level agreement. 
We don't recommend preview versions for production workloads. Certain features might not be supported or might have constrained capabilities. > For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/). -## Use Azure Machine Learning compute cluster (AmlCompute) +## Use the Azure Machine Learning compute cluster -With constantly changing data, you need fast and streamlined model training and retraining to maintain accurate models. However, continuous training comes at a cost, especially for deep learning models on GPUs. +With constantly changing data, you need fast and streamlined model training and retraining to maintain accurate models. However, continuous training comes at a cost, especially for deep learning models on GPUs. -Azure Machine Learning users can use the managed Azure Machine Learning compute cluster, also called AmlCompute. AmlCompute supports various GPU and CPU options. The AmlCompute is internally hosted on behalf of your subscription by Azure Machine Learning. It provides the same enterprise grade security, compliance, and governance at Azure IaaS cloud scale. +Azure Machine Learning users can use the managed Azure Machine Learning compute cluster, also called *AmlCompute*. AmlCompute supports various GPU and CPU options. AmlCompute is hosted internally on behalf of your subscription by Azure Machine Learning. It provides the same enterprise-grade security, compliance, and governance at Azure IaaS cloud scale. -Because these compute pools are inside of Azure's IaaS infrastructure, you can deploy, scale, and manage your training with the same security and compliance requirements as the rest of your infrastructure. These deployments occur in your subscription and obey your governance rules. Learn more about [Azure Machine Learning compute](how-to-create-attach-compute-cluster.md). +Because these compute pools are inside of Azure's IaaS infrastructure, you can deploy, scale, and manage your training with the same security and compliance requirements as the rest of your infrastructure. These deployments occur in your subscription and obey your governance rules. For more information, see [Create an Azure Machine Learning compute cluster](how-to-create-attach-compute-cluster.md). ## Configure training clusters for autoscaling Autoscaling clusters based on the requirements of your workload helps reduce your costs so you only use what you need. -AmlCompute clusters are designed to scale dynamically based on your workload. The cluster can be scaled up to the maximum number of nodes you configure. As each job completes, the cluster releases nodes and scale to your configured minimum node count. +AmlCompute clusters are designed to scale dynamically based on your workload. The cluster can be scaled up to the maximum number of nodes you configure. As each job finishes, the cluster releases nodes and scales to your configured minimum node count. [!INCLUDE [min-nodes-note](includes/machine-learning-min-nodes.md)] You can also configure the amount of time the node is idle before scale down. By default, idle time before scale down is set to 120 seconds. -+ If you perform less iterative experimentation, reduce this time to save costs. -+ If you perform highly iterative dev/test experimentation, you might need to increase the time so you aren't paying for constant scaling up and down after each change to your training script or environment. 
+- If you perform less iterative experimentation, reduce this time to save costs. +- If you perform highly iterative dev/test experimentation, you might need to increase the time so that you don't pay for constant scaling up and down after each change to your training script or environment. -AmlCompute clusters can be configured for your changing workload requirements in Azure portal, using the [AmlCompute SDK class](/python/api/azure-ai-ml/azure.ai.ml.entities.amlcompute), [AmlCompute CLI](/cli/azure/ml/compute#az-ml-compute-create), with the [REST APIs](https://github.com/Azure/azure-rest-api-specs/tree/master/specification/machinelearningservices/resource-manager/Microsoft.MachineLearningServices/stable). +You can configure AmlCompute clusters for your changing workload requirements by using: -## Configure your managed online endpoints for autoscaling +- The Azure portal +- The [AmlCompute SDK class](/python/api/azure-ai-ml/azure.ai.ml.entities.amlcompute) (see the sketch after this list) +- [AmlCompute CLI](/cli/azure/ml/compute#az-ml-compute-create) +- [REST APIs](https://github.com/Azure/azure-rest-api-specs/tree/master/specification/machinelearningservices/resource-manager/Microsoft.MachineLearningServices/stable)
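+
+The following Python sketch shows the SDK route. It's a minimal example that assumes the v2 `azure-ai-ml` package; the workspace identifiers, cluster name, and VM size are placeholders to replace with your own values.
+
+```python
+from azure.identity import DefaultAzureCredential
+from azure.ai.ml import MLClient
+from azure.ai.ml.entities import AmlCompute
+
+# Connect to the workspace (placeholder identifiers).
+ml_client = MLClient(
+    DefaultAzureCredential(),
+    subscription_id="<subscription-id>",
+    resource_group_name="<resource-group>",
+    workspace_name="<workspace>",
+)
+
+# A cluster that scales to zero when idle, so you pay only while jobs run.
+cluster = AmlCompute(
+    name="cpu-cluster",
+    size="STANDARD_DS3_V2",
+    min_instances=0,                  # release all nodes when there's no work
+    max_instances=4,                  # cap on scale-out, and therefore on spend
+    idle_time_before_scale_down=120,  # seconds a node stays idle before release
+)
+ml_client.compute.begin_create_or_update(cluster).result()
+```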
-Autoscale automatically runs the right amount of resources to handle the load on your application. [Managed online endpoints](concept-endpoints-online.md) supports autoscaling through integration with the Azure Monitor autoscale feature. +## Configure managed online endpoints for autoscaling -Azure Monitor autoscaling supports a rich set of rules. You can configure metrics-based scaling (for instance, CPU utilization >70%), schedule-based scaling (for example, scaling rules for peak business hours), or a combination. For more information, see [Autoscale online endpoints](how-to-autoscale-endpoints.md). +Autoscale automatically runs the right amount of resources to handle the load on your application. Managed online endpoints support autoscaling through integration with the Azure Monitor autoscale feature. For more information, see [Online endpoints and deployments for real-time inference](concept-endpoints-online.md). + +Azure Monitor autoscaling supports a rich set of rules: + +- Metrics-based scaling, for instance, CPU utilization >70% +- Schedule-based scaling, for example, scaling rules for peak business hours +- A combination of the two + +For more information, see [Autoscale online endpoints](how-to-autoscale-endpoints.md). ## Set quotas on resources -AmlCompute comes with a [quota (or limit) configuration](how-to-manage-quotas.md#azure-machine-learning-compute). This quota is by VM family (for example, Dv2 series, NCv3 series) and varies by region for each subscription. Subscriptions start with small defaults to get you going, but use this setting to control the amount of Amlcompute resources available to be spun up in your subscription. +AmlCompute comes with a quota, or limit, configuration. This quota is by VM family, for example, Dv2 series or NCv3 series. The quota varies by region for each subscription. Subscriptions start with small defaults. Use this setting to control the amount of AmlCompute resources available to be spun up in your subscription. For more information, see [Azure Machine Learning Compute](how-to-manage-quotas.md#azure-machine-learning-compute). + +Also, you can configure workspace-level quota by VM family for each workspace within a subscription. This approach gives you more granular control on the costs that each workspace might incur and lets you restrict certain VM families. For more information, see [Workspace-level quotas](how-to-manage-quotas.md#workspace-level-quotas). -Also configure [workspace level quota by VM family](how-to-manage-quotas.md#workspace-level-quotas), for each workspace within a subscription. Doing so allows you to have more granular control on the costs that each workspace might potentially incur and restrict certain VM families. +To set quotas at the workspace level: -To set quotas at the workspace level, start in the [Azure portal](https://portal.azure.com). Select any workspace in your subscription, and select **Usages + quotas** in the left pane. Then select the **Configure quotas** tab to view the quotas. You need privileges at the subscription scope to set the quota, since it's a setting that affects multiple workspaces. +1. Open the [Azure portal](https://portal.azure.com) and then select any workspace in your subscription. +1. Select **Support + Troubleshooting** > **Usage + quotas** in the workspace menu. +1. Select **View quota** to view quotas in Azure Machine Learning studio. +1. From this page, you can find your subscription and region to set quotas. + +   Because this setting affects multiple workspaces, you need privileges at the subscription scope to set the quota. -## Set job autotermination policies +## Set job termination policies -In some cases, you should configure your training runs to limit their duration or terminate them early. For example, when you're using Azure Machine Learning's built-in hyperparameter tuning or automated machine learning. +In some cases, you should configure your training runs to limit their duration or terminate them early. For example, when you use Azure Machine Learning's built-in hyperparameter tuning or automated machine learning. Here are a few options that you have: -* Define a parameter called `max_run_duration_seconds` in your RunConfiguration to control the maximum duration a run can extend to on the compute you choose (either local or remote cloud compute). -* For [hyperparameter tuning](how-to-tune-hyperparameters.md#early-termination), define an early termination policy from a Bandit policy, a Median stopping policy, or a Truncation selection policy. To further control hyperparameter sweeps, use parameters such as `max_total_runs` or `max_duration_minutes`. -* For [automated machine learning](how-to-configure-auto-train.md#exit-criteria), set similar termination policies using the `enable_early_stopping` flag. Also use properties such as `iteration_timeout_minutes` and `experiment_timeout_minutes` to control the maximum duration of a job or for the entire experiment. -## Use low-priority VMs + +- Define a parameter called `max_run_duration_seconds` in your RunConfiguration to control the maximum duration a run can extend to on the compute you choose, either local or remote cloud compute. +- For *hyperparameter tuning*, define an early termination policy from a Bandit policy, a Median stopping policy, or a Truncation selection policy. To further control hyperparameter sweeps, use parameters such as `max_total_runs` or `max_duration_minutes` (see the sketch after this list). For more information, see [Specify early termination policy](how-to-tune-hyperparameters.md#early-termination). +- For automated machine learning, set similar termination policies using the `enable_early_stopping` flag. You can also use properties such as `iteration_timeout_minutes` and `experiment_timeout_minutes` to control the maximum duration of a job or for the entire experiment. For more information, see [Exit criteria](how-to-configure-auto-train.md#exit-criteria).
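+
+As a sketch of the hyperparameter tuning option, the following example uses the v2 `azure-ai-ml` SDK, where the equivalent sweep limits are named `max_total_trials` and `timeout` (in seconds) rather than `max_total_runs` and `max_duration_minutes`. The training script, environment, metric name, and compute target are placeholders.
+
+```python
+from azure.ai.ml import command
+from azure.ai.ml.sweep import BanditPolicy, Uniform
+
+# Base training job (placeholder script, environment, and compute).
+job = command(
+    code="./src",
+    command="python train.py --learning_rate ${{inputs.learning_rate}}",
+    inputs={"learning_rate": 0.01},
+    environment="AzureML-sklearn-1.0-ubuntu20.04-py38-cpu@latest",
+    compute="cpu-cluster",
+)
+
+# Sweep over the learning rate.
+sweep_job = job(learning_rate=Uniform(min_value=0.001, max_value=0.1)).sweep(
+    sampling_algorithm="random",
+    primary_metric="accuracy",
+    goal="Maximize",
+)
+
+# Stop trials that trail the best run by more than 10% at each evaluation.
+sweep_job.early_termination = BanditPolicy(
+    slack_factor=0.1, evaluation_interval=1, delay_evaluation=5
+)
+
+# Cap the number of trials and the overall sweep duration (in seconds).
+sweep_job.set_limits(max_total_trials=20, max_concurrent_trials=4, timeout=3600)
+
+# Submit with: ml_client.jobs.create_or_update(sweep_job)
+```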
+ +## Use low-priority virtual machines -Azure allows you to use excess unutilized capacity as Low-Priority VMs across virtual machine scale sets, Batch, and the Machine Learning service. These allocations are pre-emptible but come at a reduced price compared to dedicated VMs. In general, we recommend using Low-Priority VMs for Batch workloads. You should also use them where interruptions are recoverable either through resubmits (for Batch Inferencing) or through restarts (for deep learning training with checkpointing). +Azure allows you to use excess unused capacity as Low-Priority VMs across virtual machine scale sets, Batch, and the Machine Learning service. These allocations are preemptible but come at a reduced price compared to dedicated VMs. In general, we recommend that you use Low-Priority VMs for Batch workloads. You should also use them where interruptions are recoverable either through resubmits for Batch Inferencing or through restarts for deep learning training with checkpointing. -Low-Priority VMs have a single quota separate from the dedicated quota value, which is by VM family. Learn [more about AmlCompute quotas](how-to-manage-quotas.md). +Low-Priority VMs have a single quota separate from the dedicated quota value, which is by VM family. For more information about AmlCompute quotas, see [Manage and increase quotas](how-to-manage-quotas.md). - Low-Priority VMs don't work for compute instances, since they need to support interactive notebook experiences. +Low-Priority VMs don't work for compute instances, since they need to support interactive notebook experiences.
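+
+Requesting low-priority capacity is a one-parameter change on the cluster definition. This sketch reuses the `ml_client` from the earlier autoscaling example; the cluster name and VM size are placeholders.
+
+```python
+from azure.ai.ml.entities import AmlCompute
+
+# Same cluster shape as before, but on preemptible low-priority capacity.
+spot_cluster = AmlCompute(
+    name="lowpri-cluster",
+    size="Standard_NC6s_v3",  # GPU example; pick a size your quota allows
+    tier="low_priority",      # draws from the separate low-priority quota
+    min_instances=0,
+    max_instances=8,
+)
+ml_client.compute.begin_create_or_update(spot_cluster).result()
+```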
## Schedule compute instances When you create a [compute instance](concept-compute-instance.md), the VM stays on so it's available for your work. -* [Enable idle shutdown (preview)](how-to-create-compute-instance.md#configure-idle-shutdown) to save on cost when the VM has been idle for a specified time period. -* Or [set up a schedule](how-to-create-compute-instance.md#schedule-automatic-start-and-stop) to automatically start and stop the compute instance (preview) to save cost when you aren't planning to use it. + +- Enable idle shutdown (preview) to save on cost when the VM is idle for a specified time period. See [Configure idle shutdown](how-to-create-compute-instance.md#configure-idle-shutdown). +- Set up a schedule to automatically start and stop the compute instance (preview) when not in use to save cost. See [Schedule automatic start and stop](how-to-create-compute-instance.md#schedule-automatic-start-and-stop). ## Use reserved instances Another way to save money on compute resources is Azure Reserved VM Instance. With this offering, you commit to one-year or three-year terms. These discounts range up to 72% of the pay-as-you-go prices and are applied directly to your monthly Azure bill. -Azure Machine Learning Compute supports reserved instances inherently. If you purchase a one-year or three-year reserved instance, we'll automatically apply discount against your Azure Machine Learning managed compute. +Azure Machine Learning Compute supports reserved instances inherently. If you purchase a one-year or three-year reserved instance, we automatically apply the discount against your Azure Machine Learning managed compute. ## Parallelize training -One of the key methods of optimizing cost and performance is by parallelizing the workload with the help of a parallel component in Azure Machine Learning. A parallel component allows you to use many smaller nodes to execute the task in parallel, hence allowing you to scale horizontally. There's an overhead for parallelization. Depending on the workload and the degree of parallelism that can be achieved, this may or may not be an option. For more details, follow this link for [ParallelComponent](/python/api/azure-ai-ml/azure.ai.ml.entities.parallelcomponent) documentation. +One of the key methods to optimize cost and performance is to parallelize the workload with the help of a parallel component in Azure Machine Learning. A parallel component allows you to use many smaller nodes to run the task in parallel, which lets you scale horizontally. There's an overhead for parallelization. Depending on the workload and the degree of parallelism that can be achieved, this approach might or might not be an option. For more information, see [ParallelComponent Class](/python/api/azure-ai-ml/azure.ai.ml.entities.parallelcomponent).
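+
+The following sketch outlines such a component with the v2 SDK's `parallel_run_function`, used as a step inside a pipeline. The entry script, environment, and batch sizing are illustrative placeholders. Because each worker process handles one mini batch at a time, `instance_count` times `max_concurrency_per_instance` bounds how much of the input is processed concurrently.
+
+```python
+from azure.ai.ml import Input, Output
+from azure.ai.ml.constants import AssetTypes
+from azure.ai.ml.parallel import RunFunction, parallel_run_function
+
+# Fan a scoring task out over several small nodes instead of one large one.
+batch_score = parallel_run_function(
+    name="batch_score",
+    inputs=dict(input_data=Input(type=AssetTypes.MLTABLE)),
+    outputs=dict(scores=Output(type=AssetTypes.URI_FOLDER)),
+    input_data="${{inputs.input_data}}",
+    instance_count=4,                # nodes working in parallel
+    max_concurrency_per_instance=2,  # worker processes per node
+    mini_batch_size="10",            # items handed to a worker at a time
+    task=RunFunction(
+        code="./src",
+        entry_script="score.py",     # placeholder scoring script
+        environment="AzureML-sklearn-1.0-ubuntu20.04-py38-cpu@latest",
+        append_row_to="${{outputs.scores}}",
+    ),
+)
+```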
-## Set data retention & deletion policies +## Set data retention and deletion policies -Every time a pipeline is executed, intermediate datasets are generated at each step. Over time, these intermediate datasets take up space in your storage account. Consider setting up policies to manage your data throughout its lifecycle to archive and delete your datasets. For more information, see [optimize costs by automating Azure Blob Storage access tiers](/azure/storage/blobs/lifecycle-management-overview). +Every time a pipeline runs, intermediate datasets are generated at each step. Over time, these intermediate datasets take up space in your storage account. Consider setting up policies to manage your data throughout its lifecycle to archive and delete your datasets. For more information, see [Optimize costs by automatically managing the data lifecycle](/azure/storage/blobs/lifecycle-management-overview). ## Deploy resources to the same region -Computes located in different regions may experience network latency and increased data transfer costs. Azure network costs are incurred from outbound bandwidth from Azure data centers. To help reduce network costs, deploy all your resources in the region. Provisioning your Azure Machine Learning workspace and dependent resources in the same region as your data can help lower cost and improve performance. +Computes located in different regions can experience network latency and increased data transfer costs. Azure network costs are incurred from outbound bandwidth from Azure data centers. To help reduce network costs, deploy all your resources in the same region. Provisioning your Azure Machine Learning workspace and dependent resources in the same region as your data can help lower cost and improve performance. -For hybrid cloud scenarios like those using ExpressRoute, it can sometimes be more cost effective to move all resources to Azure to optimize network costs and latency. +For hybrid cloud scenarios like those that use Azure ExpressRoute, it can sometimes be more cost-effective to move all resources to Azure to optimize network costs and latency. -## Delete failed deployments if computes are created for them +## Delete failed deployments -Managed online endpoint uses VMs for the deployments. If you submitted request to create an online deployment and it failed, it may have passed the stage when compute is created. In that case, the failed deployment would incur charges. If you finished debugging or investigation for the failure, you may delete the failed deployments to save the cost. +Managed online endpoints use VMs for the deployments. If you submitted a request to create an online deployment and it failed, the request might have passed the stage where compute is created. In that case, the failed deployment would incur charges. When you finish debugging or investigating the failure, delete the failed deployments to save costs. -## Next steps +## Related content - [Plan to manage costs for Azure Machine Learning](concept-plan-manage-cost.md) - [Manage budgets, costs, and quota for Azure Machine Learning at organizational scale](/azure/cloud-adoption-framework/ready/azure-best-practices/optimize-ai-machine-learning-cost) diff --git a/articles/machine-learning/prompt-flow/media/faq/datastore-update-rest.png b/articles/machine-learning/prompt-flow/media/faq/datastore-update-rest.png deleted file mode 100644 index 365af5d100d..00000000000 Binary files a/articles/machine-learning/prompt-flow/media/faq/datastore-update-rest.png and /dev/null differ diff --git a/articles/machine-learning/prompt-flow/media/faq/fileshare-datastore-update-auth-type.png b/articles/machine-learning/prompt-flow/media/faq/fileshare-datastore-update-auth-type.png deleted file mode 100644 index 27c9a1db8b8..00000000000 Binary files a/articles/machine-learning/prompt-flow/media/faq/fileshare-datastore-update-auth-type.png and /dev/null differ