cygu committed on
Commit
0812bac
1 Parent(s): 9f9c5b2

Update README.md

Files changed (1)
  1. README.md +2 -21
README.md CHANGED
@@ -3,31 +3,12 @@ tags:
 - generated_from_trainer
 datasets:
 - openwebtext
-model-index:
-- name: llama-2-7b-logits-watermark-distill-kgw-k2-gamma0.25-delta2
-  results: []
+license: llama2
 ---
 
-<!-- This model card has been generated automatically according to the information the Trainer had access to. You
-should probably proofread and complete it, then remove this comment. -->
-
-# llama-2-7b-logits-watermark-distill-kgw-k2-gamma0.25-delta2
-
-This model is a fine-tuned version of [/scr-ssd/cygu/weights/Llama-2-7b-hf/](https://huggingface.co//scr-ssd/cygu/weights/Llama-2-7b-hf/) on the openwebtext dataset.
-
 ## Model description
 
-More information needed
-
-## Intended uses & limitations
-
-More information needed
-
-## Training and evaluation data
-
-More information needed
-
-## Training procedure
+Logit-based watermark distilled Llama 2 7B using the KGW \\(k=2, \gamma=0.25, \delta=2\\) watermarking strategy in the paper [On the Learnability of Watermarks for Language Models](https://arxiv.org/abs/2312.04469).
 
 ### Training hyperparameters
 
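For context, the KGW strategy named in the new model description biases generation toward a pseudorandom "green list" of tokens seeded by the previous \\(k\\) tokens. The sketch below illustrates the idea only; the hash function, vocabulary size, and helper names are illustrative assumptions, not the paper's or this repository's actual code.

```python
# Illustrative sketch of KGW green-list watermarking with the model card's
# settings: k=2 (context width), gamma=0.25 (green fraction), delta=2 (logit bias).
# Hashing scheme and vocab size are assumptions for demonstration.
import hashlib
import random

VOCAB_SIZE = 100
K, GAMMA, DELTA = 2, 0.25, 2.0

def green_list(context: tuple) -> set:
    """Seed a PRNG from the last K tokens and select a GAMMA fraction of the vocab."""
    seed = int.from_bytes(
        hashlib.sha256(str(context[-K:]).encode()).digest()[:8], "big"
    )
    rng = random.Random(seed)
    return set(rng.sample(range(VOCAB_SIZE), int(GAMMA * VOCAB_SIZE)))

def watermark_logits(logits: list, context: tuple) -> list:
    """Add DELTA to every green-token logit before sampling."""
    greens = green_list(context)
    return [x + DELTA if i in greens else x for i, x in enumerate(logits)]
```

Distillation, as described in the linked paper, trains the student model on outputs produced under this decoding-time rule so the watermark signal appears without modifying logits at inference.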