Merge pull request #154 from MicrosoftDocs/main
9/5/2024 PM Publish
Taojunshen authored Sep 5, 2024
2 parents 45a5567 + d81c7b9 commit 214d392
Showing 11 changed files with 109 additions and 107 deletions.
77 changes: 35 additions & 42 deletions articles/ai-services/index.yml
@@ -13,7 +13,7 @@ metadata:
ms.author: eur
manager: nitinme
ms.custom: ignite-2023
ms.date: 8/20/2024
ms.date: 9/5/2024
highlightedContent:
# itemType: architecture | concept | deploy | download | get-started | how-to-guide | learn | overview | quickstart | reference | tutorial | whats-new
items:
@@ -23,9 +23,9 @@ highlightedContent:
- title: What is Azure AI Studio?
itemType: overview
url: ../ai-studio/what-is-ai-studio.md
- title: Build your own copilot with Azure AI SDKs
itemType: tutorial
url: ../ai-studio/tutorials/copilot-sdk-build-rag.md
- title: Chat with Azure OpenAI models using your own data
itemType: quickstart
url: ./openai/use-your-data-quickstart.md
- title: Responsible use of AI
itemType: concept
url: responsible-use-of-ai-overview.md
@@ -42,20 +42,25 @@ productDirectory:
summary: Perform a wide variety of natural language tasks.
url: ./openai/index.yml
# Card
- title: Azure AI Search
imageSrc: ~/reusable-content/ce-skilling/azure/media/ai-services/search.svg
summary: Bring AI-powered cloud search to your mobile and web applications.
url: /azure/search/
# Card
- title: Content Safety
imageSrc: ~/reusable-content/ce-skilling/azure/media/ai-services/content-safety.svg
summary: An AI service that detects unwanted content
url: ./content-safety/index.yml
# Card
- title: Speech
imageSrc: ~/reusable-content/ce-skilling/azure/media/ai-services/speech.svg
summary: Speech to text, text to speech, translation, and speaker recognition
url: ./speech-service/index.yml
# Card
- title: Language
imageSrc: ~/reusable-content/ce-skilling/azure/media/ai-services/language.svg
summary: Build apps with industry-leading natural language understanding capabilities.
url: ./language-service/index.yml
# Card
- title: Translator
imageSrc: ~/reusable-content/ce-skilling/azure/media/ai-services/translator.svg
summary: Use AI-powered translation technology to translate more than 100 in-use, at-risk, and endangered languages and dialects.
url: ./translator/index.yml
- title: Document Intelligence
imageSrc: ~/reusable-content/ce-skilling/azure/media/ai-services/document-intelligence.svg
summary: Turn documents into intelligent data-driven solutions.
url: ./document-intelligence/index.yml
# Card
- title: Vision
imageSrc: ~/reusable-content/ce-skilling/azure/media/ai-services/vision.svg
@@ -72,25 +72,15 @@ productDirectory:
summary: Detect and identify people and emotions in images.
url: ./computer-vision/overview-identity.md
# Card
- title: Content Safety
imageSrc: ~/reusable-content/ce-skilling/azure/media/ai-services/content-safety.svg
summary: An AI service that detects unwanted content
url: ./content-safety/index.yml
# Card
- title: Bot Service
imageSrc: ~/reusable-content/ce-skilling/azure/media/ai-services/bot-services.svg
summary: Create bots and connect them across channels.
url: /composer/
# Card
- title: Document Intelligence
imageSrc: ~/reusable-content/ce-skilling/azure/media/ai-services/document-intelligence.svg
summary: Turn documents into intelligent data-driven solutions.
url: ./document-intelligence/index.yml
- title: Translator
imageSrc: ~/reusable-content/ce-skilling/azure/media/ai-services/translator.svg
summary: Use AI-powered translation technology to translate more than 100 in-use, at-risk, and endangered languages and dialects.
url: ./translator/index.yml
# Card
- title: Azure AI Search
imageSrc: ~/reusable-content/ce-skilling/azure/media/ai-services/search.svg
summary: Bring AI-powered cloud search to your mobile and web applications.
url: /azure/search/
- title: Language
imageSrc: ~/reusable-content/ce-skilling/azure/media/ai-services/language.svg
summary: Build apps with industry-leading natural language understanding capabilities.
url: ./language-service/index.yml
# Card
- title: Video Indexer
imageSrc: ~/reusable-content/ce-skilling/azure/media/ai-services/video-indexer.svg
@@ -110,34 +105,32 @@ additionalContent:
links:
- text: Azure AI Studio
url: https://ai.azure.com/
- text: Azure OpenAI
- text: Azure OpenAI Studio
url: https://oai.azure.com/
- text: Content Safety
url: https://contentsafety.cognitive.azure.com/
- text: Speech
url: https://speech.microsoft.com/
- text: Language
url: https://language.cognitive.azure.com/
- text: Document Intelligence
url: https://formrecognizer.appliedai.azure.com/
- text: Vision
url: https://portal.vision.cognitive.azure.com/
- text: Custom Vision
url: https://www.customvision.ai/
- text: Document Intelligence
url: https://formrecognizer.appliedai.azure.com/
- text: Content Safety
url: https://contentsafety.cognitive.azure.com/
- text: Custom Translator
url: https://portal.customtranslator.azure.ai/
- text: Azure Machine Learning
url: https://ml.azure.com/
- text: Language
url: https://language.cognitive.azure.com/
- title: Explore more AI resources
links:
- text: Azure AI Studio
url: /azure/ai-studio/
- text: Azure Machine Learning
url: /azure/machine-learning/
- text: Semantic Kernel
url: /semantic-kernel/
- text: AI Builder
url: /ai-builder/
- text: Power Virtual Agents with Azure AI Language
url: /power-virtual-agents/advanced-clu-integration
- text: Windows AI
url: /windows/ai/
- text: GitHub Copilot
@@ -243,15 +243,15 @@ curl --request POST \

## Address out-of-domain utterances

Customers can use the new recipe version `2024-06-01-preview` if the model has poor AIQ on out-of-domain utterances. An example of this scenario with the default recipe can be like the following example where the model has three intents: `Sports`, `QueryWeather`, and `Alarm`. The test utterances are out-of-domain utterances and the model classifies them as `InDomain` with a relatively high confidence score.
Customers can use the updated recipe version `2024-08-01-preview` (previously `2024-06-01-preview`) if the model has poor AIQ on out-of-domain utterances. For example, with the default recipe, consider a model with three intents: `Sports`, `QueryWeather`, and `Alarm`. The following test utterances are out-of-domain, yet the model classifies them as `InDomain` with relatively high confidence scores.

| Text | Predicted intent | Confidence score |
|----|----|----|
| "Who built the Eiffel Tower?" | `Sports` | 0.90 |
| "Do I look good to you today?" | `QueryWeather` | 1.00 |
| "I hope you have a good evening." | `Alarm` | 0.80 |

To address this scenario, use the `2024-06-01-preview` configuration version that's built specifically to address this issue while also maintaining reasonably good quality on `InDomain` utterances.
To address this scenario, use the `2024-08-01-preview` configuration version that's built specifically to address this issue while also maintaining reasonably good quality on `InDomain` utterances.

```console
curl --location 'https://<your-resource>.cognitiveservices.azure.com/language/authoring/analyze-conversations/projects/<your-project>/:train?api-version=2022-10-01-preview' \
@@ -260,7 +260,7 @@
--data '{
      "modelLabel": "<modelLabel>",
      "trainingMode": "advanced",
      "trainingConfigVersion": "2024-06-01-preview",
      "trainingConfigVersion": "2024-08-01-preview",
      "evaluationOptions": {
            "kind": "percentage",
            "testingSplitPercentage": 0,
8 changes: 4 additions & 4 deletions articles/ai-services/openai/api-version-deprecation.md
@@ -1,11 +1,11 @@
---
title: Azure OpenAI Service API version retirement
title: Azure OpenAI Service API version lifecycle
description: Learn more about API version retirement in Azure OpenAI Services.
services: cognitive-services
manager: nitinme
ms.service: azure-ai-openai
ms.topic: conceptual
ms.date: 08/14/2024
ms.date: 09/05/2024
author: mrbullwinkle
ms.author: mbullwin
recommendations: false
@@ -14,10 +14,10 @@ ms.custom:

# Azure OpenAI API preview lifecycle

This article is to help you understand the support lifecycle for the Azure OpenAI API previews. New preview APIs target a monthly release cadence. After February 3rd, 2025, the latest three preview APIs will remain supported while older APIs will no longer be supported unless support is explicitly indicated.
This article helps you understand the support lifecycle for the Azure OpenAI API previews. New preview APIs target a monthly release cadence. Whenever possible, we recommend using either the latest GA or preview API release.

> [!NOTE]
> The `2023-06-01-preview` API will remain supported at this time, as `DALL-E 2` is only available in this API version. `DALL-E 3` is supported in the latest API releases. The `2023-10-01-preview` API will also remain supported at this time.
> The `2023-06-01-preview` API and the `2023-10-01-preview` API remain supported at this time.
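
Because every request selects its API surface through the `api-version` query parameter, moving between preview and GA releases is usually just a matter of changing that value. A minimal sketch of where the parameter appears on a chat completions call (the resource, deployment, and key are placeholders, and `<api-version>` stands in for whichever supported version you target):

```console
curl "https://<your-resource>.openai.azure.com/openai/deployments/<your-deployment>/chat/completions?api-version=<api-version>" \
  --header "Content-Type: application/json" \
  --header "api-key: <your-key>" \
  --data '{"messages":[{"role":"user","content":"Hello"}]}'
```
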
## Latest preview API releases

7 changes: 4 additions & 3 deletions articles/ai-services/openai/concepts/content-filter.md
@@ -79,16 +79,17 @@ Detecting indirect attacks requires using document delimiters when constructing

---

## Configurability (preview)
## Configurability

The default content filtering configuration for the GPT model series is set to filter at the medium severity threshold for all four content harm categories (hate, violence, sexual, and self-harm) and applies to both prompts (text, multi-modal text/image) and completions (text). This means that content that is detected at severity level medium or high is filtered, while content detected at severity level low isn't filtered by the content filters. For DALL-E, the default severity threshold is set to low for both prompts (text) and completions (images), so content detected at severity levels low, medium, or high is filtered. The configurability feature is available in preview and allows customers to adjust the settings, separately for prompts and completions, to filter content for each content category at different severity levels as described in the table below:
Azure OpenAI Service includes default safety settings applied to all models, excluding Azure OpenAI Whisper. These configurations provide you with a responsible experience by default, including content filtering models, blocklists, prompt transformation, [content credentials](../concepts/content-credentials.md), and others. [Read more about it here](/azure/ai-services/openai/concepts/default-safety-policies). All customers can also configure content filters and create custom safety policies that are tailored to their use case requirements. The configurability feature allows customers to adjust the settings, separately for prompts and completions, to filter content for each content category at different severity levels as described in the table below:

| Severity filtered | Configurable for prompts | Configurable for completions | Descriptions |
|-------------------|--------------------------|------------------------------|--------------|
| Low, medium, high | Yes | Yes | Strictest filtering configuration. Content detected at severity levels low, medium, and high is filtered.|
| Medium, high | Yes | Yes | Content detected at severity level low isn't filtered, content at medium and high is filtered.|
| High | Yes| Yes | Content detected at severity levels low and medium isn't filtered. Only content at severity level high is filtered. Requires approval<sup>1</sup>.|
| High | Yes| Yes | Content detected at severity levels low and medium isn't filtered. Only content at severity level high is filtered. |
| No filters | If approved<sup>1</sup>| If approved<sup>1</sup>| No content is filtered regardless of severity level detected. Requires approval<sup>1</sup>.|
|Annotate only | If approved<sup>1</sup>| If approved<sup>1</sup>| Disables the filter functionality, so content will not be blocked, but annotations are returned via API response. Requires approval<sup>1</sup>.|

<sup>1</sup> For Azure OpenAI models, only customers who have been approved for modified content filtering have full content filtering control and can turn off content filters. Apply for modified content filters via this form: [Azure OpenAI Limited Access Review: Modified Content Filters](https://ncv.microsoft.com/uEfCgnITdR). For Azure Government customers, please apply for modified content filters via this form: [Azure Government - Request Modified Content Filtering for Azure OpenAI Service](https://aka.ms/AOAIGovModifyContentFilter).
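
The effect of a given configuration is visible in API responses. A filtered prompt is rejected with an error whose code is `content_filter`, a filtered completion ends with `finish_reason` set to `content_filter`, and severity annotations are returned per category. The following is an abridged, illustrative excerpt of the annotation shape (field names follow the public response schema; exact payloads vary by API version):

```console
"prompt_filter_results": [
  {
    "prompt_index": 0,
    "content_filter_results": {
      "hate":      { "filtered": false, "severity": "safe" },
      "self_harm": { "filtered": false, "severity": "safe" },
      "sexual":    { "filtered": false, "severity": "safe" },
      "violence":  { "filtered": false, "severity": "low" }
    }
  }
]
```

Under the default medium threshold, the low-severity `violence` content in this example is annotated but not filtered; with the strictest configuration (low, medium, high) it would be filtered.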

18 changes: 10 additions & 8 deletions articles/ai-services/openai/how-to/content-filters.md
@@ -21,28 +21,30 @@ The content filtering system integrated into Azure OpenAI Service runs alongside

Content filters can be configured at resource level. Once a new configuration is created, it can be associated with one or more deployments. For more information about model deployment, see the [resource deployment guide](create-resource.md).
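
If you script deployments instead of using the Studio UI, the association between a deployment and a content filtering configuration can be made when the deployment is created. The sketch below is indicative only: it assumes a recent Azure CLI in which `az cognitiveservices account deployment create` exposes a `--rai-policy-name` parameter, and every resource, model, and policy name is a placeholder.

```console
az cognitiveservices account deployment create \
  --resource-group <your-resource-group> \
  --name <your-azure-openai-resource> \
  --deployment-name <your-deployment> \
  --model-name <model-name> \
  --model-version <model-version> \
  --model-format OpenAI \
  --sku-name Standard \
  --sku-capacity 1 \
  --rai-policy-name <your-content-filter-configuration>
```
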

The configurability feature is available in preview and allows customers to adjust the settings, separately for prompts and completions, to filter content for each content category at different severity levels as described in the table below. Content detected at the 'safe' severity level is labeled in annotations but is not subject to filtering and isn't configurable.
The configurability feature allows customers to adjust the settings, separately for prompts and completions, to filter content for each content category at different severity levels as described in the table below. Content detected at the 'safe' severity level is labeled in annotations but is not subject to filtering and isn't configurable.

| Severity filtered | Configurable for prompts | Configurable for completions | Descriptions |
|-------------------|--------------------------|------------------------------|--------------|
| Low, medium, high | Yes | Yes | Strictest filtering configuration. Content detected at severity levels low, medium, and high is filtered.|
| Medium, high | Yes | Yes | Default setting. Content detected at severity level low isn't filtered, content at medium and high is filtered.|
| Low, medium, high | Yes | Yes | Strictest filtering configuration. Content detected at severity levels low, medium, and high is filtered. |
| Medium, high | Yes | Yes | Content detected at severity level low isn't filtered, content at medium and high is filtered. |
| High | Yes| Yes | Content detected at severity levels low and medium isn't filtered. Only content at severity level high is filtered. |
| No filters | If approved<sup>\*</sup>| If approved<sup>\*</sup>| No content is filtered regardless of severity level detected. Requires approval<sup>\*</sup>.|
|Annotate only | If approved<sup>\*</sup>| If approved<sup>\*</sup>| Disables the filter functionality, so content will not be blocked, but annotations are returned via API response. Requires approval<sup>\*</sup>|

<sup>\*</sup> Only approved customers have full content filtering control and can turn the content filters partially or fully off. Only managed customers can apply for full content filtering control via this form: [Azure OpenAI Limited Access Review: Modified Content Filters](https://ncv.microsoft.com/uEfCgnITdR). At this time, it is not possible to become a managed customer.

Customers are responsible for ensuring that applications integrating Azure OpenAI comply with the [Code of Conduct](/legal/cognitive-services/openai/code-of-conduct?context=%2Fazure%2Fai-services%2Fopenai%2Fcontext%2Fcontext).


|Filter category |Default setting |Applied to prompt or completion? |Description |
|Filter category |Status |Default setting |Applied to prompt or completion? |Description |
|---------|---------|---------|---------|
|Jailbreak risk detection | Off | Prompt | Can be turned on to filter or annotate user prompts that might present a Jailbreak Risk. For more information about consuming annotations, visit [Azure OpenAI Service content filtering](/azure/ai-services/openai/concepts/content-filter?tabs=python#annotations-preview) |
| Protected material - code | off | Completion | Can be turned on to get the example citation and license information in annotations for code snippets that match any public code sources. For more information about consuming annotations, see the [content filtering concepts guide](/azure/ai-services/openai/concepts/content-filter#annotations-preview) |
| Protected material - text | off | Completion | Can be turned on to identify and block known text content from being displayed in the model output (for example, song lyrics, recipes, and selected web content). |
|Prompt Shields for direct attacks (jailbreak) |GA| On | User prompt | Filters / annotates user prompts that might present a Jailbreak Risk. For more information about annotations, visit [Azure OpenAI Service content filtering](/azure/ai-services/openai/concepts/content-filter?tabs=python#annotations-preview). |
|Prompt Shields for indirect attacks | GA| On| User prompt | Filters / annotates Indirect Attacks, also referred to as Indirect Prompt Attacks or Cross-Domain Prompt Injection Attacks, a potential vulnerability where third parties place malicious instructions inside of documents that the generative AI system can access and process. Requires [document formatting](/azure/ai-services/openai/concepts/content-filter?tabs=warning%2Cuser-prompt%2Cpython-new#embedding-documents-in-your-prompt). |
| Protected material - code |GA| On | Completion | Filters protected code or gets the example citation and license information in annotations for code snippets that match any public code sources, powered by GitHub Copilot. For more information about consuming annotations, see the [content filtering concepts guide](/azure/ai-services/openai/concepts/content-filter#annotations-preview) |
| Protected material - text | GA| On | Completion | Identifies and blocks known text content from being displayed in the model output (for example, song lyrics, recipes, and selected web content). |


## Configuring content filters via Azure OpenAI Studio (preview)
## Configuring content filters via Azure OpenAI Studio

The following steps show how to set up a customized content filtering configuration for your resource.
