## Ethical Considerations and Limitations
The use of model-based guardrails for Large Language Models (LLMs) involves risks and ethical considerations that users must be aware of. This model operates on chunks of text and produces a score indicating the presence of hate speech, abuse, or profanity. However, its efficacy can be limited by several factors: it may fail to capture nuanced meanings, and it may produce false positives or false negatives on text that is dissimilar to its training data. Previous research has demonstrated the risk of various biases in toxicity and hate speech detection, and those risks are also relevant to this work. We urge the community to use this model ethically and responsibly.
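Since the model scores chunks of text rather than whole documents, a deployment typically splits the input, scores each chunk, and flags the text if any score crosses a threshold. The sketch below illustrates that loop; `score_chunk` is a placeholder (a real deployment would call the classifier there), and the chunk size and threshold values are illustrative assumptions, not values from this model card.

```python
def chunk_text(text, max_words=128):
    """Split text into fixed-size word chunks, since the model scores chunks, not whole documents.
    The 128-word chunk size is an illustrative choice, not a documented model parameter."""
    words = text.split()
    return [" ".join(words[i:i + max_words]) for i in range(0, len(words), max_words)]

def score_chunk(chunk):
    """Placeholder scorer: a real deployment would run the classifier here.
    Returns a float in [0, 1]; higher means more likely hate/abuse/profanity content."""
    return 0.0

def is_flagged(text, threshold=0.5):
    """Flag the text if any chunk's score crosses the threshold (0.5 is an illustrative default)."""
    return any(score_chunk(c) >= threshold for c in chunk_text(text))
```

Flagging on the maximum chunk score (rather than an average) is a conservative choice: one highly toxic chunk is enough to trip the guardrail even in an otherwise benign document.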
### Resources

- ⭐️ Learn about the latest updates with Granite: https://www.ibm.com/granite
- 📄 Get started with tutorials, best practices, and prompt engineering advice: https://www.ibm.com/granite/docs/
- 💡 Learn about the latest Granite learning resources: https://ibm.biz/granite-learning-resources