zhenggq committed
Commit 426362f
Parent: d801fd2

Update README.md

Files changed (1)
  1. README.md +2 -2
README.md CHANGED
@@ -102,9 +102,9 @@ analysis is needed to assess potential harm or bias in the proposed application.
 
 **Safe inference with Azure AI Content Safety**
 
-The usage of Azure AI Content Safety on top of model prediction is strongly encouraged
+The usage of [Azure AI Content Safety](https://azure.microsoft.com/en-us/products/ai-services/ai-content-safety/) on top of model prediction is strongly encouraged
 and can help prevent content harms. Azure AI Content Safety is a content moderation platform
-that uses AI to keep your content safe. By integrating Orca with Azure AI Content Safety,
+that uses AI to keep your content safe. By integrating Orca 2 with Azure AI Content Safety,
 we can moderate the model output by scanning it for sexual content, violence, hate, and
 self-harm with multiple severity levels and multi-lingual detection.
 
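
To make the integration described in the changed lines concrete, below is a minimal, hypothetical sketch of gating Orca 2 output with the `azure-ai-contentsafety` Python SDK. The endpoint, key, severity threshold, and the `is_safe` helper are illustrative placeholders; they are not part of this commit or the model card, and field names may differ across SDK versions.

```python
# Hypothetical sketch (not from this commit): pass Orca 2 output through
# Azure AI Content Safety before showing it to a user.
# Assumes the `azure-ai-contentsafety` SDK; the endpoint, key, and
# severity threshold below are placeholders for your own deployment.
from azure.ai.contentsafety import ContentSafetyClient
from azure.ai.contentsafety.models import AnalyzeTextOptions
from azure.core.credentials import AzureKeyCredential

CONTENT_SAFETY_ENDPOINT = "https://<your-resource>.cognitiveservices.azure.com"
CONTENT_SAFETY_KEY = "<your-key>"
SEVERITY_THRESHOLD = 2  # example cutoff; tune for your application

client = ContentSafetyClient(
    CONTENT_SAFETY_ENDPOINT, AzureKeyCredential(CONTENT_SAFETY_KEY)
)

def is_safe(model_output: str) -> bool:
    """Return True only if every analyzed category (hate, self-harm,
    sexual, violence) stays at or below the severity threshold."""
    result = client.analyze_text(AnalyzeTextOptions(text=model_output))
    return all(
        (item.severity or 0) <= SEVERITY_THRESHOLD
        for item in result.categories_analysis
    )

# Example: gate a generated response before displaying it.
generated = "..."  # text produced by Orca 2
print(generated if is_safe(generated) else "[response withheld by content filter]")
```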