andreaskoepf committed
Commit: ec69d63
1 Parent(s): 8808e0f

Update README.md

Files changed (1):
  1. README.md +20 -5
README.md CHANGED
@@ -1,17 +1,28 @@
  ---
  license: other
+ datasets:
+ - ehartford/dolphin
+ - shahules786/orca-chat
+ - togethercomputer/RedPajama-Data-1T
+ - atom-in-the-universe/fanfics-10k-50k
  ---

  - **At least Huggingface Transformers [4.31.0](https://pypi.org/project/transformers/4.31.0/) is required to load this model!**
- - datatpye: fp16
- - [wand](https://wandb.ai/open-assistant/supervised-finetuning/runs/2jfazjt9) (still internal, needs to be moved to public-sft)
+ - base model: [meta-llama/Llama-2-7b](https://huggingface.co/meta-llama/Llama-2-7b)
+ - License: [Llama 2 Community License Agreement](https://ai.meta.com/resources/models-and-libraries/llama-downloads/)
+ - [wand](https://wandb.ai/open-assistant/public-sft/runs/2jfazjt9)
  - checkpoint: 3319 steps
+ - datatype: fp16

- ## Long context via RoPE scaling
+ ## Long context (RoPE Scaling)

- This model was fine-tuned with a context size of 8192 tokens using linear scaling of RoPE embeddings. In order to load this model
- HF Transformers >=4.31.0 is required.
+ This model was fine-tuned with a context size of 8192 tokens using linear scaling of RoPE embeddings. This feature was added to the
+ Huggingface transformers library only very recently. In order to load this model, HF Transformers >=4.31.0 is required
+ (`pip install transformers>=4.31.0`).
+
+ ## Orca-Chat/Dolphin Datasets

  ## Model Configuration
@@ -65,3 +76,7 @@ llama2_13b_orca_8k:
  peft_model: false
  ```

+ # License
+
+ - Llama 2 is licensed under the LLAMA 2 Community License, Copyright © Meta Platforms, Inc. All Rights Reserved.
+ - Your use of the Llama Materials must comply with applicable laws and regulations (including trade compliance laws and regulations) and adhere to the [Acceptable Use Policy](https://ai.meta.com/llama/use-policy) for the Llama Materials.
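As a sketch of what the "linear scaling of RoPE embeddings" added in this commit refers to (illustrative only — the head dimension and base below are common Llama defaults, not values taken from this model's config), linear scaling simply divides the position index by a scale factor, so an 8192-token context is mapped back into the 4096-token position range the base model was pre-trained on:

```python
def rope_angles(position, dim=128, base=10000.0, scaling_factor=2.0):
    """Rotary embedding angles for one position, with linear scaling.

    Linear scaling divides the position index by `scaling_factor`, so
    positions 0..8191 are squeezed into the 0..4095 range the base
    model was pre-trained on (8192 / 4096 = 2.0).
    """
    scaled_pos = position / scaling_factor
    # One angle per rotary frequency pair, as in standard RoPE.
    return [scaled_pos / (base ** (2 * i / dim)) for i in range(dim // 2)]

# With factor 2.0, position 8190 yields exactly the angles that
# position 4095 would produce without scaling.
assert rope_angles(8190, scaling_factor=2.0) == rope_angles(4095, scaling_factor=1.0)
```

In Transformers >=4.31.0 this behavior is configured via the `rope_scaling` entry of the Llama config (e.g. `rope_scaling={"type": "linear", "factor": 2.0}`), which is why the README pins that minimum version.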