fierysurf committed
Commit
051f4c9
1 Parent(s): 417642d

Update README.md

Files changed (1)
  1. README.md +1 -1
README.md CHANGED
@@ -217,7 +217,7 @@ extra_gated_description: >-
   Policy](https://www.facebook.com/privacy/policy/).
 extra_gated_button_content: Submit
 ---
-(Note: This is a INT4 executorch exported model. To learn more about how to run this model on edge devices checkout the following resource [here](https://github.com/meta-llama/PurpleLlama/blob/main/Llama-Guard3/1B/ET_INSTRUCTIONS.md))
+(Note: This is an AWQ 4-bit exported model. To learn more about how to run this model on edge devices checkout the following resource [here](https://github.com/casper-hansen/AutoAWQ/blob/main/docs/examples.md))
 ## Model Information
 
 Llama Guard 3-1B is a fine-tuned Llama-3.2-1B pretrained model for content safety classification. Similar to previous versions, it can be used to classify content in both LLM inputs (prompt classification) and in LLM responses (response classification). It acts as an LLM – it generates text in its output that indicates whether a given prompt or response is safe or unsafe, and if unsafe, it also lists the content categories violated.
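Since the updated note points readers at the AutoAWQ examples, here is a minimal sketch (not part of the README) of loading the 4-bit AWQ export and running one prompt classification. It assumes the quantized weights sit in a hypothetical local directory `./llama-guard-3-1b-awq` and that a CUDA device is available; the prompt and generation settings are purely illustrative.

```python
# Sketch only: load a 4-bit AWQ export of Llama Guard 3-1B with AutoAWQ
# and classify a single user prompt. Path and prompt are placeholders.
from awq import AutoAWQForCausalLM
from transformers import AutoTokenizer

model_path = "./llama-guard-3-1b-awq"  # hypothetical local directory

tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoAWQForCausalLM.from_quantized(model_path, fuse_layers=True)

# Llama Guard consumes a chat-formatted conversation; the chat template
# wraps it in the safety-classification prompt described above.
chat = [{"role": "user", "content": "How do I bake a chocolate cake?"}]
input_ids = tokenizer.apply_chat_template(chat, return_tensors="pt").to("cuda")

# The model answers as text: "safe", or "unsafe" followed by the
# violated content-category codes.
output = model.generate(input_ids, max_new_tokens=32, do_sample=False)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```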