---
base_model: ArliAI/Llama-3.1-8B-ArliAI-RPMax-v1.2
license: llama3.1
tags:
- llama-cpp
- gguf-my-repo
---
|
# Triangle104/Llama-3.1-8B-ArliAI-RPMax-v1.2-Q4_K_M-GGUF

This model was converted to GGUF format from [`ArliAI/Llama-3.1-8B-ArliAI-RPMax-v1.2`](https://huggingface.co/ArliAI/Llama-3.1-8B-ArliAI-RPMax-v1.2) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.

Refer to the [original model card](https://huggingface.co/ArliAI/Llama-3.1-8B-ArliAI-RPMax-v1.2) for more details on the model.

---
|
## Model details: Llama-3.1-8B-ArliAI-RPMax-v1.2

RPMax is a series of models trained on a diverse set of curated creative writing and RP datasets, with a focus on variety and deduplication. The model is designed to be highly creative and non-repetitive: no two entries in the dataset share repeated characters or situations, which keeps the model from latching on to a single personality and lets it understand and act appropriately for any character or situation.
|
Early tests by users suggest that these models do not feel like other RP models, having a different style and generally not feeling in-bred.

You can access the model at https://arliai.com and ask questions at https://www.reddit.com/r/ArliAI/
|
We also have a model ranking page at https://www.arliai.com/models-ranking

Ask questions in our new Discord server! https://discord.com/invite/t75KbPgwhk
|
### Model Description

Llama-3.1-8B-ArliAI-RPMax-v1.2 is a variant of the Meta-Llama-3.1-8B model.

The v1.2 update is a retrain on an incrementally improved RPMax dataset, which deduplicates the data even further and applies better filtering to cut out irrelevant description text carried over from card-sharing sites.
|
### Specs

- Context Length: 128K
- Parameters: 8B

### Training Details

- Sequence Length: 8192
- Training Duration: approximately 1 day on 2x3090Ti
- Epochs: 1 epoch, to minimize repetition sickness
- LoRA: rank 64, alpha 128, resulting in ~2% trainable weights
- Learning Rate: 0.00001
- Gradient Accumulation: 32, kept low for better learning
|
### Quantization

The model is available in quantized formats. We recommend using the full weights or GPTQ:

- FP16: https://huggingface.co/ArliAI/Llama-3.1-8B-ArliAI-RPMax-v1.2
- GGUF: https://huggingface.co/ArliAI/Llama-3.1-8B-ArliAI-RPMax-v1.2-GGUF
|
### Suggested Prompt Format

Llama 3 Instruct format. Example:

```
<|begin_of_text|><|start_header_id|>system<|end_header_id|>

You are [character]. You have a personality of [personality description]. [Describe scenario]<|eot_id|><|start_header_id|>user<|end_header_id|>

{{ user_message_1 }}<|eot_id|><|start_header_id|>assistant<|end_header_id|>

{{ model_answer_1 }}<|eot_id|><|start_header_id|>user<|end_header_id|>

{{ user_message_2 }}<|eot_id|><|start_header_id|>assistant<|end_header_id|>
```
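As one way to try this format from the shell, you can write a filled-in prompt to a file and pass it to `llama-cli` with `-f`. This is a minimal sketch; the character description and user message below are placeholders, not part of the card:

```bash
# Write a filled-in Llama 3 Instruct prompt to a file.
# The character description and user message are placeholders.
cat > prompt.txt <<'EOF'
<|begin_of_text|><|start_header_id|>system<|end_header_id|>

You are [character]. You have a personality of [personality description]. [Describe scenario]<|eot_id|><|start_header_id|>user<|end_header_id|>

Hello, who are you?<|eot_id|><|start_header_id|>assistant<|end_header_id|>

EOF

# Then feed the file to llama-cli (uncomment once llama.cpp is installed):
# llama-cli --hf-repo Triangle104/Llama-3.1-8B-ArliAI-RPMax-v1.2-Q4_K_M-GGUF \
#   --hf-file llama-3.1-8b-arliai-rpmax-v1.2-q4_k_m.gguf -f prompt.txt
```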
|
---

## Use with llama.cpp

Install llama.cpp through brew (works on Mac and Linux):
|
```bash
brew install llama.cpp
```
|
Invoke the llama.cpp server or the CLI.

### CLI:

```bash
llama-cli --hf-repo Triangle104/Llama-3.1-8B-ArliAI-RPMax-v1.2-Q4_K_M-GGUF --hf-file llama-3.1-8b-arliai-rpmax-v1.2-q4_k_m.gguf -p "The meaning to life and the universe is"
```
|
### Server:

```bash
llama-server --hf-repo Triangle104/Llama-3.1-8B-ArliAI-RPMax-v1.2-Q4_K_M-GGUF --hf-file llama-3.1-8b-arliai-rpmax-v1.2-q4_k_m.gguf -c 2048
```
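Once `llama-server` is running, it exposes an HTTP API (on port 8080 by default), and you can post a completion request to its `/completion` endpoint. A minimal sketch; the prompt text and `n_predict` value are arbitrary examples:

```bash
# Build a JSON request body for llama-server's /completion endpoint.
cat > payload.json <<'EOF'
{"prompt": "The meaning to life and the universe is", "n_predict": 64}
EOF

# Send it to a locally running server (uncomment with the server started as above):
# curl -s http://localhost:8080/completion \
#   -H "Content-Type: application/json" -d @payload.json
```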
|
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.

Step 1: Clone llama.cpp from GitHub.

```bash
git clone https://github.com/ggerganov/llama.cpp
```
|
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with any other hardware-specific flags (for example, `LLAMA_CUDA=1` for Nvidia GPUs on Linux).

```bash
cd llama.cpp && LLAMA_CURL=1 make
```
|
Step 3: Run inference through the main binary.

```bash
./llama-cli --hf-repo Triangle104/Llama-3.1-8B-ArliAI-RPMax-v1.2-Q4_K_M-GGUF --hf-file llama-3.1-8b-arliai-rpmax-v1.2-q4_k_m.gguf -p "The meaning to life and the universe is"
```

or

```bash
./llama-server --hf-repo Triangle104/Llama-3.1-8B-ArliAI-RPMax-v1.2-Q4_K_M-GGUF --hf-file llama-3.1-8b-arliai-rpmax-v1.2-q4_k_m.gguf -c 2048
```