Merge pull request #137 from PatrickFarley/consaf-updates
preview tags
AnnaMHuff authored Sep 5, 2024
2 parents 7e9ab82 + 92fc0c6 · commit 116e770
Showing 9 changed files with 19 additions and 19 deletions.
@@ -1,5 +1,5 @@
---
title: "Custom categories in Azure AI Content Safety"
title: "Custom categories in Azure AI Content Safety (preview)"
titleSuffix: Azure AI services
description: Learn about custom content categories and the different ways you can use Azure AI Content Safety to handle them on your platform.
#services: cognitive-services
@@ -12,7 +12,7 @@ ms.date: 07/05/2024
ms.author: pafarley
---

- # Custom categories
+ # Custom categories (preview)

Azure AI Content Safety lets you create and manage your own content moderation categories for enhanced moderation and filtering that matches your specific policies or use cases.
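
For context on what such a category consists of, here is a minimal sketch in Python of the pieces a custom category is defined from: a name, a plain-language definition, and a few example texts. The field names below are illustrative assumptions, not the exact schema of the preview API.

```python
# Illustrative only: the rough shape of a custom category definition.
# Field names are assumptions, not the exact preview API schema.
custom_category = {
    "categoryName": "survival-advice",  # hypothetical category name
    "definition": "Text that gives instructions for risky survival activities.",
    "examples": [  # a handful of sample texts the service should learn to flag
        "How to build a raft out of plastic bottles to cross a river.",
        "Ways to start a fire without matches inside a national park.",
    ],
}

print(f"Category '{custom_category['categoryName']}' defined with "
      f"{len(custom_category['examples'])} example texts.")
```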

@@ -1,5 +1,5 @@
---
title: "Use the custom categories (rapid) API"
title: "Use the custom categories (rapid) API (preview)"
titleSuffix: Azure AI services
description: Learn how to use the custom categories (rapid) API to mitigate harmful content incidents quickly.
#services: cognitive-services
@@ -13,7 +13,7 @@ ms.author: pafarley
---


- # Use the custom categories (rapid) API
+ # Use the custom categories (rapid) API (preview)

The custom categories (rapid) API lets you quickly respond to emerging harmful content incidents. You can define an incident with a few examples in a specific topic, and the service will start detecting similar content.
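
As a rough sketch of that flow (assuming Python with `requests`): create an incident, attach a few sample texts, then pass the incident name when scanning new content. The routes, HTTP verbs, api-version, and field names below are assumptions drawn from the preview documentation and should be verified against the current REST reference.

```python
import requests

# All routes, verbs, api-version, and field names here are assumptions based on the
# custom categories (rapid) preview docs -- verify against the current reference.
ENDPOINT = "https://<your-resource>.cognitiveservices.azure.com"
HEADERS = {"Ocp-Apim-Subscription-Key": "<your-key>", "Content-Type": "application/json"}
API = "api-version=2024-02-15-preview"  # assumed preview version
INCIDENT = "my-emerging-incident"       # hypothetical incident name

# 1. Create (or update) an incident describing the emerging harmful topic.
requests.patch(
    f"{ENDPOINT}/contentsafety/text/incidents/{INCIDENT}?{API}",
    headers=HEADERS,
    json={"incidentName": INCIDENT,
          "incidentDefinition": "Content promoting the topic to block."},
).raise_for_status()

# 2. Attach a few example texts so the service can match similar content.
requests.post(
    f"{ENDPOINT}/contentsafety/text/incidents/{INCIDENT}:addIncidentSamples?{API}",
    headers=HEADERS,
    json={"incidentSamples": [{"text": "example of the harmful content to detect"}]},
).raise_for_status()

# 3. Scan new text against the incident.
resp = requests.post(
    f"{ENDPOINT}/contentsafety/text:detectIncidents?{API}",
    headers=HEADERS,
    json={"text": "user-generated text to check", "incidentNames": [INCIDENT]},
)
print(resp.json())
```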

@@ -1,5 +1,5 @@
---
title: "Use the custom category API"
title: "Use the custom category API (preview)"
titleSuffix: Azure AI services
description: Learn how to use the custom category API to create your own harmful content categories and train the Content Safety model for your use case.
#services: cognitive-services
@@ -12,7 +12,7 @@ ms.date: 04/11/2024
ms.author: pafarley
---

- # Use the custom categories (standard) API
+ # Use the custom categories (standard) API (preview)


The custom categories (standard) API lets you create your own content categories for your use case and train Azure AI Content Safety to detect them in new content.
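
A minimal sketch of the standard lifecycle, assuming Python with `requests`: define the category from annotated samples, start a build (training) operation, then analyze new text against it. The routes, api-version, and payload fields shown are assumptions based on the preview documentation; confirm them against the current REST reference.

```python
import requests

# Sketch of the standard custom-category lifecycle: define, build (train), analyze.
# Routes, api-version, and payload fields are assumptions from the preview docs.
ENDPOINT = "https://<your-resource>.cognitiveservices.azure.com"
HEADERS = {"Ocp-Apim-Subscription-Key": "<your-key>", "Content-Type": "application/json"}
API = "api-version=2024-02-15-preview"  # assumed preview version
CATEGORY = "my-custom-category"         # hypothetical category name

# 1. Define the category; sampleBlobUrl points to an annotated JSONL of example
#    texts that you have uploaded to Azure Blob Storage.
requests.put(
    f"{ENDPOINT}/contentsafety/text/categories/{CATEGORY}?{API}",
    headers=HEADERS,
    json={
        "categoryName": CATEGORY,
        "definition": "Plain-language description of what this category should catch.",
        "sampleBlobUrl": "https://<storage-account>.blob.core.windows.net/<container>/samples.jsonl",
    },
).raise_for_status()

# 2. Start training (an asynchronous build operation).
requests.post(
    f"{ENDPOINT}/contentsafety/text/categories/{CATEGORY}:build?{API}",
    headers=HEADERS,
).raise_for_status()

# 3. Once the build completes, analyze new text against the trained category.
resp = requests.post(
    f"{ENDPOINT}/contentsafety/text:analyzeCustomCategory?{API}",
    headers=HEADERS,
    json={"text": "new content to classify", "categoryName": CATEGORY, "version": 1},
)
print(resp.json())
```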
8 changes: 4 additions & 4 deletions articles/ai-services/content-safety/index.yml
@@ -44,7 +44,7 @@ landingContent:
links:
- text: Harm categories
url: concepts/harm-categories.md
- - text: Custom categories
+ - text: Custom categories (preview)
url: concepts/custom-categories.md
- linkListType: quickstart
links:
@@ -54,7 +54,7 @@ landingContent:
url: quickstart-image.md?pivots=programming-language-rest
- linkListType: how-to-guide
links:
- - text: Use custom categories
+ - text: Use custom categories (preview)
url: how-to/custom-categories.md


@@ -64,7 +64,7 @@ landingContent:
links:
- text: Harm categories
url: concepts/harm-categories.md
- - text: Custom categories
+ - text: Custom categories (preview)
url: concepts/custom-categories.md
- text: Groundedness detection
url: concepts/groundedness.md
@@ -78,7 +78,7 @@ landingContent:
url: quickstart-groundedness.md
- linkListType: how-to-guide
links:
- - text: Use custom categories
+ - text: Use custom categories (preview)
url: how-to/custom-categories.md
- text: Use a blocklist
url: how-to/use-blocklist.md
2 changes: 1 addition & 1 deletion articles/ai-services/content-safety/overview.md
@@ -123,7 +123,7 @@ See the following list for the input requirements for each feature.
- **Protected material detection API**:
- Default maximum length: 1K characters.
- Default minimum length: 110 characters (for scanning LLM completions, not user prompts).
- - **Custom categories (standard) API**:
+ - **Custom categories (standard) API (preview)**:
- Maximum inference input length: 1K characters.


@@ -1,5 +1,5 @@
---
title: "Quickstart: Custom categories"
title: "Quickstart: Custom categories (preview)"
titleSuffix: Azure AI services
description: Use the custom category API to create your own harmful content categories and train the Content Safety model for your use case.
#services: cognitive-services
@@ -11,7 +11,7 @@ ms.date: 07/03/2024
ms.author: pafarley
---

- # Quickstart: Custom categories (standard mode)
+ # Quickstart: Custom categories (standard mode) (preview)

Follow this guide to use Azure AI Content Safety Custom category REST API to create your own content categories for your use case and train Azure AI Content Safety to detect them in new text content.

@@ -11,7 +11,7 @@ ms.date: 03/15/2024
ms.author: pafarley
---

- # Quickstart: Prompt Shields (preview)
+ # Quickstart: Prompt Shields

"Prompt Shields" in Azure AI Content Safety are specifically designed to safeguard generative AI systems from generating harmful or inappropriate content. These shields detect and mitigate risks associated with both User Prompt Attacks (malicious or harmful user-generated inputs) and Document Attacks (inputs containing harmful content embedded within documents). The use of "Prompt Shields" is crucial in environments where GenAI is employed, ensuring that AI outputs remain safe, compliant, and trustworthy.

6 changes: 3 additions & 3 deletions articles/ai-services/content-safety/toc.yml
@@ -23,7 +23,7 @@ items:
href: concepts/groundedness.md
- name: Protected material detection
href: concepts/protected-material.md
- - name: Custom categories
+ - name: Custom categories (preview)
href: concepts/custom-categories.md
- name: Harm categories
href: concepts/harm-categories.md
@@ -35,7 +35,7 @@ items:
href: quickstart-groundedness.md
- name: Protected material detection
href: quickstart-protected-material.md
- - name: Custom categories
+ - name: Custom categories (preview)
href: quickstart-custom-categories.md
- name: Content Safety Studio
href: studio-quickstart.md
@@ -57,7 +57,7 @@ items:
href: /legal/cognitive-services/content-safety/data-privacy?context=/azure/ai-services/content-safety/context/context
- name: How-to guides
items:
- - name: Use custom categories
+ - name: Use custom categories (preview)
href: how-to/custom-categories.md
- name: Use a blocklist
href: how-to/use-blocklist.md
4 changes: 2 additions & 2 deletions articles/ai-services/content-safety/whats-new.md
@@ -18,13 +18,13 @@ Learn what's new in the service. These items might be release notes, videos, blo

## July 2024

- ### Custom categories (standard) API
+ ### Custom categories (standard) API public preview

The custom categories (standard) API lets you create and train your own custom content categories and scan text for matches. See [Custom categories](./concepts/custom-categories.md) to learn more.

## May 2024

- ### Custom categories (rapid) API
+ ### Custom categories (rapid) API public preview

The custom categories (rapid) API lets you quickly define emerging harmful content patterns and scan text and images for matches. See [Custom categories](./concepts/custom-categories.md) to learn more.
