PEFT
Safetensors
generated_from_trainer
ysy970923 committed on
Commit
03ff9d1
1 Parent(s): 13ebd85

phi-2-prompt-injection-QLoRA

README.md CHANGED
@@ -7,18 +7,6 @@ base_model: microsoft/phi-2
  model-index:
  - name: phi-2-prompt-injection-QLoRA
    results: []
- datasets:
- - HuggingFaceH4/no_robots
- - Dahoas/synthetic-hh-rlhf-prompts
- - HuggingFaceH4/ultrachat_200k
- - Lakera/gandalf_ignore_instructions
- - imoxto/prompt_injection_cleaned_dataset-v2
- - hackaprompt/hackaprompt-dataset
- - rubend18/ChatGPT-Jailbreak-Prompts
- language:
- - en
- metrics:
- - accuracy
  ---

@@ -28,14 +16,14 @@ should probably proofread and complete it, then remove this comment. -->

  This model is a fine-tuned version of [microsoft/phi-2](https://huggingface.co/microsoft/phi-2) on the None dataset.
  It achieves the following results on the evaluation set:
- - eval_loss: 0.0845
- - eval_precision: 0.9924
- - eval_recall: 0.9924
- - eval_f1-score: 0.9924
- - eval_accuracy: 0.9852
- - eval_runtime: 15.9089
- - eval_samples_per_second: 8.486
- - eval_steps_per_second: 1.069
+ - eval_loss: 0.0000
+ - eval_precision: 1.0
+ - eval_recall: 1.0
+ - eval_f1-score: 1.0
+ - eval_accuracy: 1.0
+ - eval_runtime: 16.0258
+ - eval_samples_per_second: 8.424
+ - eval_steps_per_second: 1.061
  - step: 0

  ## Model description

@@ -44,11 +32,7 @@ More information needed

  ## Intended uses & limitations

- ```
- tokenizer = AutoTokenizer.from_pretrained("ysy970923/phi-2-prompt-injection-QLoRA")
- model = AutoModelForSequenceClassification.from_pretrained("ysy970923/phi-2-prompt-injection-QLoRA", load_in_4bit=True, torch_dtype=torch.bfloat16, id2label={0: "SAFE", 1: "INJECTION"})
- # LABEL_0 is safe, LABEL_1 is prompt_injection
- ```
+ More information needed

  ## Training and evaluation data

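The usage snippet removed in this commit configured the classifier head with `id2label={0: "SAFE", 1: "INJECTION"}`. As a minimal, self-contained sketch of how that mapping turns classifier logits into labels (a pure-Python stand-in, since running the 4-bit model itself requires a GPU and the transformers/bitsandbytes stack; the logits below are illustrative, not real model output):

```python
# Label mapping as set up by the removed usage snippet, which passed
# id2label={0: "SAFE", 1: "INJECTION"} to the sequence classifier.
id2label = {0: "SAFE", 1: "INJECTION"}

def classify(logits):
    """Pick the argmax class index and map it through id2label."""
    pred = max(range(len(logits)), key=lambda i: logits[i])
    return id2label[pred]

# Illustrative logits, not real model output:
print(classify([2.3, -1.1]))   # class 0 -> SAFE
print(classify([-0.4, 3.7]))   # class 1 -> INJECTION
```

Note the removed snippet's comment referred to `LABEL_0`/`LABEL_1`, but with `id2label` set, the model's config reports the labels as `SAFE` and `INJECTION` directly.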
adapter_config.json CHANGED
@@ -20,8 +20,8 @@
  "revision": null,
  "target_modules": [
  "v_proj",
- "k_proj",
- "q_proj"
+ "q_proj",
+ "k_proj"
  ],
  "task_type": "SEQ_CLS",
  "use_rslora": false
adapter_model.safetensors CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:516e58b719f01c6cbf482e86ceeae52e4edeeeb7097da527c5f21c7c87036c54
+ oid sha256:f3b18b734addf17df5b076584597070234de62592558854002b9a6ec228f79a2
  size 31503624
training_args.bin CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:38944c8d9f5c1431db7e99cddb755dce36b31106a2e870c3d6f86d547e0c82c9
+ oid sha256:2e695bc3cf97c9e7b12dacee1b4b7f442e03d12888c4f127e5ade30c3e997e3a
  size 4856