munish0838 committed
Commit
f125d5e
1 Parent(s): a6e7e59

Update README.md

Files changed (1)
  1. README.md +0 -8
README.md CHANGED
@@ -16,14 +16,6 @@ base_model: gradientai/Llama-3-8B-Instruct-Gradient-1048k
 
 # Model Description
 
-Join our custom agent and long context (262k-1M+) waitlist: https://forms.gle/L6TDY7dozx8TuoUv7
-
-Gradient incorporates your data to deploy autonomous assistants that power critical operations across your business. If you're looking to build custom AI models or agents, email us a message contact@gradient.ai.
-
-For more info see our [End-to-end development service for custom LLMs and AI systems](https://gradient.ai/development-lab)
-
-[Join our Discord](https://discord.com/invite/2QVy2qt2mf)
-
 This model extends LLama-3 8B's context length from 8k to > 1040K, developed by Gradient, sponsored by compute from [Crusoe Energy](https://huggingface.co/crusoeai). It demonstrates that SOTA LLMs can learn to operate on long context with minimal training by appropriately adjusting RoPE theta. We trained on 830M tokens for this stage, and 1.4B tokens total for all stages, which is < 0.01% of Llama-3's original pre-training data.
 
 **Update (5/3): We further fine-tuned our model to strengthen its assistant-like chat ability as well. The NIAH result is updated.**
 
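The unchanged description above attributes the long-context extension to raising RoPE theta. As a minimal sketch (not Gradient's training code), the adjusted value can be read straight from the published model config with Hugging Face `transformers`; `rope_theta` and `max_position_embeddings` are standard fields on Llama-style configs:

```python
from transformers import AutoConfig

# Fetch the published config for the long-context model named in the diff.
config = AutoConfig.from_pretrained("gradientai/Llama-3-8B-Instruct-Gradient-1048k")

# Llama-style configs expose the RoPE base frequency as `rope_theta`.
# Extending context from 8k to >1M tokens works by raising this base
# (together with `max_position_embeddings`), not by pre-training anew.
print("rope_theta:", config.rope_theta)
print("max_position_embeddings:", config.max_position_embeddings)
```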