cygu committed
Commit ad66e82
1 Parent(s): fe3645f

Update README.md

Files changed (1)
  1. README.md +3 -25
README.md CHANGED
@@ -3,31 +3,13 @@ tags:
  - generated_from_trainer
  datasets:
  - openwebtext
- model-index:
- - name: llama-2-7b-hf-distill-aaronson-k4-lr1e-5-decayto0
-   results: []
+ license: llama2
  ---

- <!-- This model card has been generated automatically according to the information the Trainer had access to. You
- should probably proofread and complete it, then remove this comment. -->
-
- # llama-2-7b-hf-distill-aaronson-k4-lr1e-5-decayto0
-
- This model is a fine-tuned version of [/scr-ssd/cygu/weights/Llama-2-7b-hf/](https://huggingface.co//scr-ssd/cygu/weights/Llama-2-7b-hf/) on the openwebtext dataset.
-
  ## Model description

- More information needed
-
- ## Intended uses & limitations
-
- More information needed
+ Logits-based watermark distilled Llama 2 7B using the Aar \\(k=4\\) watermarking strategy in the paper [On the Learnability of Watermarks for Language Models](https://arxiv.org/abs/2312.04469).

- ## Training and evaluation data
-
- More information needed
-
- ## Training procedure

  ### Training hyperparameters

@@ -45,13 +27,9 @@ The following hyperparameters were used during training:
  - lr_scheduler_warmup_steps: 500
  - training_steps: 5000

- ### Training results
-
-
-
  ### Framework versions

  - Transformers 4.29.2
  - Pytorch 2.0.1+cu117
  - Datasets 2.13.1
- - Tokenizers 0.13.3
+ - Tokenizers 0.13.3
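
As a usage illustration (not part of the commit), here is a minimal sketch of loading the distilled model with the Transformers API. The repo id `cygu/llama-2-7b-hf-distill-aaronson-k4-lr1e-5-decayto0` is an assumed hub path inferred from the commit author and the model name in the old card, and the comment about watermarked output reflects the card's claim that the watermark is distilled into the weights.

```python
# Minimal sketch, not from this commit: load the watermark-distilled model and sample.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumed hub path (commit author + model name from the old card); adjust if the repo differs.
model_id = "cygu/llama-2-7b-hf-distill-aaronson-k4-lr1e-5-decayto0"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,
    device_map="auto",
)

# Because the Aar k=4 watermark is distilled into the weights, ordinary sampling
# (no custom logits processor) should already produce watermarked text.
inputs = tokenizer("The history of watermarking goes back to", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=100, do_sample=True, top_p=0.95)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Detecting the watermark in the generated text requires the detector from the paper's codebase and is not shown here.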