rezzzy committed on
Commit 970e363 · verified · 1 Parent(s): d568002

Align model card with generation usage

Files changed (1): README.md +3 -1
README.md CHANGED
@@ -8,7 +8,7 @@ datasets:
   - GeneralAnalysis/GA_Guardrail_Benchmark
 base_model:
   - meta-llama/Llama-3.2-1B-Instruct
-pipeline_tag: text-classification
+pipeline_tag: text-generation
 library_name: transformers
 tags:
   - Moderation
@@ -51,6 +51,8 @@ The model outputs one structured token for each category, such as `<prompt_secur
 
 The tokenizer chat template bakes in the guard system prompt and automatically prefixes user content with `text:`, matching the GA Guard Core public template and the training format. Callers only need to provide the text to classify as a user message.
 
+> **Note:** GA Guard 1B is implemented as a `LlamaForCausalLM`. It performs classification by generating the guard label tokens, so use `AutoModelForCausalLM`, `tokenizer.apply_chat_template`, or a text-generation server such as vLLM rather than the Hugging Face `text-classification` pipeline.
+
 ### Transformers
 
 ```python
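
For context, a minimal sketch of the generation-based usage the added note describes, with `AutoModelForCausalLM` and `tokenizer.apply_chat_template`. The repo id and decoding settings below are illustrative assumptions, not the model card's own `### Transformers` example.

```python
# Minimal sketch of generation-based classification with the guard model.
# The repo id below is a placeholder assumption, not taken from the commit.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "GeneralAnalysis/GA-Guard-1B"  # placeholder; use the actual repo id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

# The chat template injects the guard system prompt and the `text:` prefix,
# so only the text to classify is passed as a user message.
messages = [{"role": "user", "content": "Text to classify goes here."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

# Greedy decoding; the continuation carries the structured guard label tokens.
output_ids = model.generate(input_ids, max_new_tokens=32, do_sample=False)
print(tokenizer.decode(output_ids[0, input_ids.shape[-1]:], skip_special_tokens=False))
```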