markpreemo committed on
Commit
f8efdef
1 Parent(s): 2143104

Update README.md

Files changed (1):
  1. README.md +2 -0
README.md CHANGED
```diff
@@ -17,6 +17,8 @@ Gradient incorporates your data to deploy autonomous assistants that power criti
 
 For more info see our [End-to-end development service for custom LLMs and AI systems](https://gradient.ai/development-lab)
 
+[Join our Discord](https://discord.com/invite/2QVy2qt2mf)
+
 This model extends LLama-3 70B's context length from 8k to > 262K, developed by Gradient, sponsored by compute from [Crusoe Energy](https://huggingface.co/crusoeai). It demonstrates that SOTA LLMs can learn to operate on long context with minimal training by appropriately adjusting RoPE theta. We trained on 105M tokens for this stage, and 188M tokens total for all stages, which is < 0.002% of Llama-3's original pre-training data.
 
 ![image/png](https://cdn-uploads.huggingface.co/production/uploads/6585dc9be92bc5f258156bd6/Ueev-bujAWFusU2uEcy_G.png)
```
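The README text in this diff describes the context extension as an adjustment to RoPE theta, which in Hugging Face transformers surfaces as config fields on the model. As a minimal sketch of what that looks like, the snippet below loads the model config and inspects those fields; the repo id is an assumption inferred from this page, and the exact field values are not stated in the commit.

```python
from transformers import AutoConfig

# Minimal sketch: inspect how a long-context Llama-3 variant encodes its
# context extension in the model config. The repo id below is an assumption
# inferred from this README; substitute the actual model repo if it differs.
model_id = "gradientai/Llama-3-70B-Instruct-Gradient-262k"

config = AutoConfig.from_pretrained(model_id)

# max_position_embeddings reflects the extended (> 262K) context window,
# and rope_theta is the RoPE base the README says was adjusted upward
# relative to the 8k-context original.
print("context window:", config.max_position_embeddings)
print("rope_theta:", config.rope_theta)
```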