---
license: apache-2.0
datasets:
- PrimeIntellect/fineweb-edu
- PrimeIntellect/fineweb
- PrimeIntellect/StackV1-popular
- mlfoundations/dclm-baseline-1.0-parquet
- open-web-math/open-web-math
- arcee-ai/EvolKit-75K
- arcee-ai/Llama-405B-Logits
- arcee-ai/The-Tomb
- mlabonne/open-perfectblend-fixed
- microsoft/orca-agentinstruct-1M-v1-cleaned
- Post-training-Data-Flywheel/AutoIF-instruct-61k-with-funcs
- Team-ACE/ToolACE
- Synthia-coder
- ServiceNow-AI/M2Lingual
- AI-MO/NuminaMath-TIR
- allenai/tulu-3-sft-personas-code
- allenai/tulu-3-sft-personas-math
- allenai/tulu-3-sft-personas-math-grade
- allenai/tulu-3-sft-personas-algebra
language:
- en
base_model: PrimeIntellect/INTELLECT-1-Instruct
pipeline_tag: text-generation
tags:
- llama-cpp
- gguf-my-repo
---

# Triangle104/INTELLECT-1-Instruct-Q5_K_M-GGUF
This model was converted to GGUF format from [`PrimeIntellect/INTELLECT-1-Instruct`](https://huggingface.co/PrimeIntellect/INTELLECT-1-Instruct) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/PrimeIntellect/INTELLECT-1-Instruct) for more details on the model.

---

The following is excerpted from the original model card's description of the post-training process; the dataset lists below are part of the data mixture used in the initial fine-tuning runs.

- arcee-ai/Llama-405B-Logits
- arcee-ai/The-Tomb

Instruction Following:
- mlabonne/open-perfectblend-fixed (generalist capabilities)
- microsoft/orca-agentinstruct-1M-v1-cleaned (Chain-of-Thought)
- Post-training-Data-Flywheel/AutoIF-instruct-61k-with-funcs

Domain-Specific:
- Team-ACE/ToolACE (function calling)
- Synthia coder (programming)
- ServiceNow-AI/M2Lingual (multilingual)
- AI-MO/NuminaMath-TIR (mathematics)

Tulu-3 Persona Datasets:
- allenai/tulu-3-sft-personas-code
- allenai/tulu-3-sft-personas-math
- allenai/tulu-3-sft-personas-math-grade
- allenai/tulu-3-sft-personas-algebra

Second, we execute 8 distinct Direct Preference Optimization (DPO) runs with various combinations of datasets to enhance specific performance metrics and align the model with human preferences. A key advantage in our post-training process was INTELLECT-1's use of the Llama-3 tokenizer, which allowed us to utilize logits from Llama-3.1-405B to heal and maintain precision during the post-training process via DistillKit.
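
For readers unfamiliar with these two techniques, the sketch below illustrates the objectives involved: the DPO preference loss and a temperature-scaled logit-distillation loss of the kind DistillKit applies. It is a simplified illustration, not the actual post-training code; function names and hyperparameters (`beta`, `temperature`) are placeholders.

```python
# Simplified sketches of the two objectives mentioned above; illustrative only,
# not the actual post-training code. Names and hyperparameters are placeholders.
import torch
import torch.nn.functional as F

def dpo_loss(policy_chosen_logp, policy_rejected_logp,
             ref_chosen_logp, ref_rejected_logp, beta=0.1):
    """DPO: push the policy to prefer the chosen response over the rejected one
    more strongly than a frozen reference model does."""
    policy_logratio = policy_chosen_logp - policy_rejected_logp
    ref_logratio = ref_chosen_logp - ref_rejected_logp
    return -F.logsigmoid(beta * (policy_logratio - ref_logratio)).mean()

def logit_distillation_loss(student_logits, teacher_logits, temperature=2.0):
    """Logit distillation: match the student's next-token distribution to the
    teacher's (here, Llama-3.1-405B), which requires a shared tokenizer."""
    log_p_student = F.log_softmax(student_logits / temperature, dim=-1)
    p_teacher = F.softmax(teacher_logits / temperature, dim=-1)
    return F.kl_div(log_p_student, p_teacher, reduction="batchmean") * temperature**2

# Toy example with per-response summed log-probabilities:
loss = dpo_loss(torch.tensor([-12.0]), torch.tensor([-15.0]),
                torch.tensor([-13.0]), torch.tensor([-14.5]))
```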

Finally, we performed 16 strategic merges between candidate models using MergeKit to create superior combined models that leverage the strengths of different training runs. During the post-training phase, we observed that when using a ChatML template without an explicit BOS (begin-of-sequence) token, the initial loss was approximately 15. However, when switching to the Llama 3.1 chat template, the loss for these trainings started much lower at approximately 1.1, indicating better alignment with the underlying Llama 3 tokenizer.
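
To see which template (and BOS token) a checkpoint expects, one can render a prompt with the tokenizer's built-in chat template. A minimal sketch, assuming the `transformers` library and that the tokenizer is loaded from the original PrimeIntellect repository (this GGUF repo only hosts the quantized weights):

```python
# Minimal sketch: inspect the chat template and BOS token discussed above.
# Assumes the `transformers` library; the tokenizer comes from the original
# PrimeIntellect repository, not from this GGUF conversion.
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("PrimeIntellect/INTELLECT-1-Instruct")
messages = [{"role": "user", "content": "What does the BOS token do?"}]
prompt = tok.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)

print(tok.bos_token)  # the explicit begin-of-sequence token
print(prompt)         # the rendered Llama-3.1-style prompt string
```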

The combination of these post-training techniques resulted in significant improvements in various benchmarks, particularly in knowledge retrieval, grade school math, instruction following and reasoning.

## Citations

If you use this model in your research, please cite it as follows:

```
@article{jaghouar2024intellect,
  title={INTELLECT-1 Technical Report.},
  author={Jaghouar, Sami and Ong, Jack Min and Basra, Manveer and Obeid, Fares and Straube, Jannik and Keiblinger, Michael and Bakouch, Elie and Atkins, Lucas and Panahi, Maziyar and Goddard, Charles and Ryabinin, Max and Hagemann, Johannes},
  journal={arXiv preprint},
  year={2024}
}
```

---

## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)

```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.

### CLI:
```bash
llama-cli --hf-repo Triangle104/INTELLECT-1-Instruct-Q5_K_M-GGUF --hf-file intellect-1-instruct-q5_k_m.gguf -p "The meaning to life and the universe is"
```

### Server:
```bash
llama-server --hf-repo Triangle104/INTELLECT-1-Instruct-Q5_K_M-GGUF --hf-file intellect-1-instruct-q5_k_m.gguf -c 2048
```

Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.

Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```

Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (e.g. `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```

Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo Triangle104/INTELLECT-1-Instruct-Q5_K_M-GGUF --hf-file intellect-1-instruct-q5_k_m.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo Triangle104/INTELLECT-1-Instruct-Q5_K_M-GGUF --hf-file intellect-1-instruct-q5_k_m.gguf -c 2048
```
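
Alternatively, the same GGUF file can be loaded from Python. A minimal sketch, assuming the third-party `llama-cpp-python` bindings (a recent version that provides `Llama.from_pretrained`) and `huggingface_hub` are installed; this is not part of the original card:

```python
# Minimal sketch (assumes `pip install llama-cpp-python huggingface_hub`); the
# repo and filename below are the same ones used in the CLI examples above.
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="Triangle104/INTELLECT-1-Instruct-Q5_K_M-GGUF",
    filename="intellect-1-instruct-q5_k_m.gguf",
    n_ctx=2048,  # same context size as the server example
)

output = llm("The meaning to life and the universe is", max_tokens=64)
print(output["choices"][0]["text"])
```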