---
language: en
license: apache-2.0
tags:
- llama-cpp
- gguf-my-repo
inference: false
---
# Model Description
This is the second generation of the original Shinen made by Mr. Seeker. The full dataset consists of 6 different sources, all surrounding the "Adult" theme. The name "Erebus" comes from Greek mythology and means "darkness", in line with Shin'en, or "deep abyss". For inquiries, please contact the KoboldAI community. **Warning: THIS model is NOT suitable for use by minors. The model will output X-rated content.**
# YorkieOH10/GPT-NeoX-20B-Erebus-Q8_0-GGUF
This model was converted to GGUF format from [`KoboldAI/GPT-NeoX-20B-Erebus`](https://huggingface.co/KoboldAI/GPT-NeoX-20B-Erebus) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/KoboldAI/GPT-NeoX-20B-Erebus) for more details on the model.
## Use with llama.cpp
Install llama.cpp via Homebrew:
```bash
brew install ggerganov/ggerganov/llama.cpp
```
Then invoke the llama.cpp CLI or server.
CLI:
```bash
llama-cli --hf-repo YorkieOH10/GPT-NeoX-20B-Erebus-Q8_0-GGUF --model gpt-neox-20b-erebus.Q8_0.gguf -p "The meaning to life and the universe is"
```
Server:
```bash
llama-server --hf-repo YorkieOH10/GPT-NeoX-20B-Erebus-Q8_0-GGUF --model gpt-neox-20b-erebus.Q8_0.gguf -c 2048
```
Note: you can also use this checkpoint directly via the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
```bash
git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m gpt-neox-20b-erebus.Q8_0.gguf -n 128
```
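If you prefer Python, the `llama-cpp-python` bindings can load GGUF files directly. This is a minimal sketch, assuming the package is installed (`pip install llama-cpp-python`) and the quantized file has already been downloaded to the working directory; the context size and sampling parameters are illustrative.

```python
# Minimal sketch using the llama-cpp-python bindings (assumed installed).
# The GGUF file must already be present locally.
from llama_cpp import Llama

llm = Llama(model_path="gpt-neox-20b-erebus.Q8_0.gguf", n_ctx=2048)
out = llm("The meaning to life and the universe is", max_tokens=64)
print(out["choices"][0]["text"])
```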