Suparious committed
Commit 7648ccb · verified · 1 Parent(s): 0193a82

Update README.md

Files changed (1)
  1. README.md +8 -0
README.md CHANGED
@@ -15,7 +15,15 @@ quantized_by: Suparious
  - Model creator: [gradientai](https://huggingface.co/gradientai)
  - Original model: [Llama-3-8B-Instruct-Gradient-1048k](https://huggingface.co/gradientai/Llama-3-8B-Instruct-Gradient-1048k)
 
+ <a href="https://www.gradient.ai" target="_blank"><img src="https://cdn-uploads.huggingface.co/production/uploads/655bb613e8a8971e89944f3e/TSa3V8YpoVagnTYgxiLaO.png" width="200"/></a>
+
+ ## Model Summary
+
+ Gradient incorporates your data to deploy autonomous assistants that power critical operations across your business. If you're looking to build custom AI models or agents, email us at contact@gradient.ai.
+
+ For more info, see our [End-to-end development service for custom LLMs and AI systems](https://gradient.ai/development-lab).
+
+ This model, developed by Gradient with compute sponsored by [Crusoe Energy](https://huggingface.co/crusoeai), extends Llama-3 8B's context length from 8k to > 1040K. It demonstrates that SOTA LLMs can learn to operate on long context with minimal training by appropriately adjusting RoPE theta. We trained on 830M tokens for this stage, and 1.4B tokens total for all stages, which is < 0.01% of Llama-3's original pre-training data.
+
 
  ## How to use
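
The last added paragraph credits the context extension to raising RoPE theta with minimal extra training. As a minimal sketch of what that looks like from the outside, the snippet below compares `rope_theta` and `max_position_embeddings` in the two models' Hugging Face configs; it assumes `transformers` is installed and the configs are reachable, and the base-model ID `meta-llama/Meta-Llama-3-8B-Instruct` (a gated repo) is an assumption, not part of this commit.

```python
# Minimal sketch: compare RoPE settings between base Llama-3 8B and the
# long-context Gradient variant. The meta-llama repo is gated, so loading
# its config may require accepting the license and authenticating.
from transformers import AutoConfig

base = AutoConfig.from_pretrained("meta-llama/Meta-Llama-3-8B-Instruct")
long_ctx = AutoConfig.from_pretrained("gradientai/Llama-3-8B-Instruct-Gradient-1048k")

# rope_theta is the base frequency of the rotary position embedding; the
# long-context variant uses a much larger value. max_position_embeddings
# reflects the advertised context window.
print("base: theta=%s, ctx=%s" % (base.rope_theta, base.max_position_embeddings))
print("long: theta=%s, ctx=%s" % (long_ctx.rope_theta, long_ctx.max_position_embeddings))
```

A larger `rope_theta` slows the rotation of each positional frequency, so relative positions stay distinguishable over a much longer window, which is why a modest amount of continued training suffices to adapt the model.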