andreaskoepf committed
Commit 8a82537
Parent: 7f844fd

Update README.md

Files changed (1):
  1. README.md +10 -2
README.md CHANGED

@@ -16,7 +16,6 @@ Note: **At least Huggingface Transformers [4.31.0](https://pypi.org/project/tran
 - checkpoint: 3319 steps
 - datatype: fp16
 
-
 ## Long context (RoPE Scaling)
 
 This model was fine-tuned with a context size of 8192 tokens using linear scaling of RoPE embeddings. This feature was recently
@@ -123,9 +122,18 @@ llama2_13b_orca_8k:
 peft_model: false
 ```
 
+# Developers
+
+- [shahules786](https://github.com/shahules786)
+- [jordiclive](https://github.com/jordiclive)
+- [andreaskoepf](https://github.com/andreaskoepf/)
+
 # Special Thanks
 
-We want to especially thank Eric Hartford for replicating ORCA and making it publicly available at [ehartford/dolphin](https://huggingface.co/datasets/ehartford/dolphin)!
+We want to especially thank Eric Hartford, who spared no expense in replicating ORCA and making it available at [ehartford/dolphin](https://huggingface.co/datasets/ehartford/dolphin)!
+Also shoutout to the whole team working on [LLongMA-2-13b](https://huggingface.co/conceptofmind/LLongMA-2-13b) & the [scaled-rope](https://github.com/jquesnelle/scaled-rope) repository for their awesome work: bloc97, jquesnelle & conceptofmind!
+
+The whole Open-Assistant team is very grateful for the continued support of [Redmond AI](https://redmond.ai/) who sponsored the training compute for this model.
 
 # License
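
Since the README changed here points to linear RoPE scaling and a Transformers ≥ 4.31.0 minimum, a minimal loading sketch may be useful. This is an illustration under assumptions, not taken from the commit itself: the repo id and the scaling factor of 2.0 (stretching LLaMA-2's 4096-token window to the 8192 tokens mentioned in the diff) are assumed, and if the checkpoint's shipped config already records the scaling, the explicit `rope_scaling` argument is redundant.

```python
# Minimal sketch: loading a LLaMA-2 checkpoint with linear RoPE scaling.
# Requires transformers >= 4.31.0, the first release with the `rope_scaling` option.
# The model id and factor below are illustrative assumptions, not from this commit.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "OpenAssistant/llama2-13b-orca-8k-3319"  # assumed repo id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # matches the fp16 datatype noted in the diff
    # Linear scaling divides position ids by `factor`, so factor=2.0 stretches
    # the original 4096-token LLaMA-2 context window to 8192 tokens.
    rope_scaling={"type": "linear", "factor": 2.0},
)
```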