cygu committed
Commit c0581d7
1 parent: 7f2c1c3

Update README.md

Files changed (1):
  1. README.md +3 -26

README.md CHANGED
@@ -1,31 +1,12 @@
 ---
 tags:
 - generated_from_trainer
-model-index:
-- name: aaronson_k4_llama-2-7b-hf-lr1e-5
-  results: []
+license: llama2
 ---
 
-<!-- This model card has been generated automatically according to the information the Trainer had access to. You
-should probably proofread and complete it, then remove this comment. -->
-
-# aaronson_k4_llama-2-7b-hf-lr1e-5
-
-This model is a fine-tuned version of [/scr-ssd/cygu/weights/Llama-2-7b-hf/](https://huggingface.co//scr-ssd/cygu/weights/Llama-2-7b-hf/) on an unknown dataset.
-
 ## Model description
 
-More information needed
-
-## Intended uses & limitations
-
-More information needed
-
-## Training and evaluation data
-
-More information needed
-
-## Training procedure
+Sampling-based watermark distilled Llama 2 7B using the Aar \\(k=4\\) watermarking strategy in the paper [On the Learnability of Watermarks for Language Models](https://arxiv.org/abs/2312.04469).
 
 ### Training hyperparameters
 
@@ -44,13 +25,9 @@ The following hyperparameters were used during training:
 - lr_scheduler_warmup_steps: 500
 - num_epochs: 1.0
 
-### Training results
-
-
-
 ### Framework versions
 
 - Transformers 4.29.2
 - Pytorch 2.0.1+cu117
 - Datasets 2.13.1
-- Tokenizers 0.13.3
+- Tokenizers 0.13.3
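For context on the strategy named in the new description: the Aar scheme is Aaronson's exponential-minimum sampling watermark, in which the previous k tokens are hashed into a PRNG seed, one pseudorandom uniform score r_i is drawn per vocabulary item, and the next token is chosen as argmax r_i^(1/p_i). A minimal sketch of that decoding rule follows (hypothetical helper name and toy vocabulary; this is an illustration of the sampling rule, not the distillation training code from the paper):

```python
import hashlib

import numpy as np


def aar_sample(probs, prev_tokens, k=4):
    """Pick the next token with the Aar (k=4) watermark decoding rule:
    hash the last k tokens into a seed, draw one uniform score r_i per
    vocabulary item, and return argmax r_i^(1/p_i)."""
    # Seed the PRNG deterministically from the k-token context window.
    seed_bytes = hashlib.sha256(str(tuple(prev_tokens[-k:])).encode()).digest()
    rng = np.random.default_rng(int.from_bytes(seed_bytes[:8], "little"))
    # One pseudorandom score per vocabulary item, keyed by the context.
    r = rng.random(len(probs))
    # Exponential-minimum sampling: argmax r_i^(1/p_i); guard p_i = 0.
    return int(np.argmax(r ** (1.0 / np.maximum(probs, 1e-12))))
```

Because the scores are keyed only by the preceding context, a detector that knows the hash key can recompute r_i for each emitted token and test whether the observed scores are suspiciously large, without access to the model.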