cygu committed
Commit 53a84d3
1 Parent(s): ce4f8f0

Update README.md

Files changed (1)
1. README.md +2 -25
README.md CHANGED

```diff
@@ -1,31 +1,12 @@
 ---
 tags:
 - generated_from_trainer
-model-index:
-- name: llama-2-7b-sampling-watermark-distill-kgw-k2-delta2-gamma0.25
-  results: []
+license: llama2
 ---
 
-<!-- This model card has been generated automatically according to the information the Trainer had access to. You
-should probably proofread and complete it, then remove this comment. -->
-
-# llama-2-7b-sampling-watermark-distill-kgw-k2-delta2-gamma0.25
-
-This model is a fine-tuned version of [/scr-ssd/cygu/weights/Llama-2-7b-hf/](https://huggingface.co//scr-ssd/cygu/weights/Llama-2-7b-hf/) on an unknown dataset.
-
 ## Model description
 
-More information needed
-
-## Intended uses & limitations
-
-More information needed
-
-## Training and evaluation data
-
-More information needed
-
-## Training procedure
+Sampling-based watermark distilled Llama 2 7B using the KGW \\(k=2, \gamma=0.25, \delta=2\\) watermarking strategy in the paper [On the Learnability of Watermarks for Language Models](https://arxiv.org/abs/2312.04469).
 
 ### Training hyperparameters
 
@@ -44,10 +25,6 @@ The following hyperparameters were used during training:
 - lr_scheduler_warmup_steps: 500
 - num_epochs: 1.0
 
-### Training results
-
-
-
 ### Framework versions
 
 - Transformers 4.29.2
```
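Since the card pins Transformers 4.29.2, standard causal-LM loading should apply. A minimal usage sketch, assuming the model lives at the repo id `cygu/llama-2-7b-sampling-watermark-distill-kgw-k2-delta2-gamma0.25` (inferred from the commit author and model name; not stated in the diff):

```python
# Minimal usage sketch. The repo id below is an assumption inferred from the
# commit author and model name; adjust it to the actual hub location.
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "cygu/llama-2-7b-sampling-watermark-distill-kgw-k2-delta2-gamma0.25"
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(repo_id)

# The watermark is distilled into the weights, so plain sampling should already
# yield watermarked text; no generation-time logit processor is required.
inputs = tokenizer("The meaning of life is", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64, do_sample=True)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```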
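The description added above names the KGW strategy with k = 2, γ = 0.25, δ = 2. For orientation, here is a minimal sketch of the detection side of KGW: each position's green list is derived from the previous k tokens, and detection is a one-proportion z-test against the γ baseline. The seeding and green-list construction are illustrative stand-ins rather than the paper's exact scheme, and the names (`is_green`, `detection_z_score`) are hypothetical.

```python
# Toy sketch of KGW detection under the parameters in the model card
# (k=2 hashed context tokens, gamma=0.25 green fraction). The seeding scheme
# below is an illustrative stand-in, not the paper's exact construction, so
# it will not detect this particular model's watermark as-is.
import math
import random

VOCAB_SIZE = 32000  # Llama 2 tokenizer vocabulary size
K = 2               # number of previous tokens hashed per position
GAMMA = 0.25        # fraction of the vocabulary in the green list

def is_green(context, token):
    """Derive a deterministic seed from the k-token context, then check
    whether `token` lands in that context's gamma-fraction green list."""
    seed = 0
    for t in context:
        seed = seed * VOCAB_SIZE + t
    rng = random.Random(seed)
    green = rng.sample(range(VOCAB_SIZE), int(GAMMA * VOCAB_SIZE))
    return token in set(green)

def detection_z_score(token_ids):
    """One-proportion z-test: watermarked text should contain green tokens
    well above the gamma baseline expected of unwatermarked text."""
    trials, hits = 0, 0
    for i in range(K, len(token_ids)):
        trials += 1
        hits += is_green(token_ids[i - K:i], token_ids[i])
    if trials == 0:
        return 0.0
    return (hits - GAMMA * trials) / math.sqrt(trials * GAMMA * (1 - GAMMA))
```

During decoding-based watermarking, δ = 2 is added to green-list logits before sampling; a distilled model like this one is instead trained to reproduce that green-token bias directly in its weights.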