Update README.md
README.md
```diff
@@ -22,9 +22,9 @@ _Fast Inference with Customization:_ As with our previous version, once trained,
 
 - **Github:** https://github.com/slicex-ai/elm-turbo
 
-- **HuggingFace** (access ELM Turbo Models in HF): 🤗 [here](https://huggingface.co/collections/slicexai/elm-turbo-
+- **HuggingFace** (access ELM Turbo Models in HF): 🤗 [here](https://huggingface.co/collections/slicexai/llama31-elm-turbo-66a81aa5f6bcb0b775ba5dd7)
 
-## ELM Turbo Model Release
+## ELM Turbo Model Release (Llama 3.1 slices)
 In this version, we employed our new, improved decomposable ELM techniques on a widely used open-source LLM, `meta-llama/Meta-Llama-3.1-8B-Instruct` (8B params) (check [Llama-license](https://huggingface.co/meta-llama/Meta-Llama-3.1-8B/blob/main/LICENSE) for usage). After training, we generated three smaller slices with parameter counts ranging from 3B to 6B.
 
 - [Section 1.](https://huggingface.co/slicexai/Llama3.1-elm-turbo-4B-instruct#1-run-elm-turbo-models-with-huggingface-transformers-library): instructions to run ELM-Turbo with the Huggingface Transformers library.
```
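The "Section 1" link above points to instructions for running an ELM Turbo slice with the Transformers library. A minimal sketch of what that typically looks like, assuming the `slicexai/Llama3.1-elm-turbo-4B-instruct` checkpoint loads as a standard causal chat model (the generation settings here are illustrative, not taken from the official instructions):

```python
# Sketch: load an ELM Turbo slice as an ordinary Transformers causal LM.
# Assumes network access to the Hugging Face Hub; settings are illustrative.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "slicexai/Llama3.1-elm-turbo-4B-instruct"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Build a chat prompt using the model's own chat template.
messages = [{"role": "user", "content": "What is an ELM Turbo slice?"}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
)

# Generate and decode only the newly produced tokens.
output_ids = model.generate(input_ids, max_new_tokens=64)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```

Since the slices are derived from `Meta-Llama-3.1-8B-Instruct`, usage remains subject to the Llama license linked above.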