andreaskoepf commited on
Commit
0e3cc6b
Parent: c4a9aad

Update README.md

Files changed (1): README.md (+3 −1)
README.md CHANGED
```diff
@@ -12,9 +12,11 @@ Note: **At least Huggingface Transformers [4.31.0](https://pypi.org/project/tran
 
 - base model: [meta-llama/Llama-2-7b](https://huggingface.co/meta-llama/Llama-2-7b)
 - License: [Llama 2 Community License Agreement](https://ai.meta.com/resources/models-and-libraries/llama-downloads/)
+- sampling report: TBD
 - wandb: [public-sft/runs/2jfazjt9](https://wandb.ai/open-assistant/public-sft/runs/2jfazjt9)
 - checkpoint: 3319 steps
 - datatpye: fp16
+- sponsored by: [Redmond.ai](https://redmond.ai/)
 
 ## Long context (RoPE Scaling)
 
@@ -133,7 +135,7 @@ llama2_13b_orca_8k:
 We want to especially thank Eric Hardford who spared no expense in replicating ORCA and making it available at [ehartford/dolphin](https://huggingface.co/datasets/ehartford/dolphin)!
 Also shoutout to the whole team working on [LLongMA-2-13b](https://huggingface.co/conceptofmind/LLongMA-2-13b) & the [scaled-rope](https://github.com/jquesnelle/scaled-rope) repository for their awesome work: bloc97, jquesnelle & conceptofmind!
 
-The whole Open-Assistant team is very grateful for the continued support of [Redmond AI](https://redmond.ai/) who sponsored the training compute for this model.
+The whole Open-Assistant team is very grateful for the continued support of [Redmond.ai](https://redmond.ai/) who sponsored the training compute for this model.
 
 # License
 
```