|
--- |
|
datasets: |
|
- Open-Orca/OpenOrca |
|
inference: false |
|
model_type: replit |
|
--- |
|
|
|
<!-- header start --> |
|
<div style="width: 100%;"> |
|
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;"> |
|
</div> |
|
<div style="display: flex; justify-content: space-between; width: 100%;"> |
|
<div style="display: flex; flex-direction: column; align-items: flex-start;"> |
|
<p><a href="https://discord.gg/theblokeai">Chat & support: my new Discord server</a></p> |
|
</div> |
|
<div style="display: flex; flex-direction: column; align-items: flex-end;"> |
|
<p><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p> |
|
</div> |
|
</div> |
|
<!-- header end --> |
|
|
|
# Matorus's Replit OpenOrca GGML |
|
|
|
These files are Replit GGML format model files for [Matorus's Replit OpenOrca](https://huggingface.co/matorus/replit-openorca). |
|
|
|
Please note that these GGMLs are **not compatible with llama.cpp, text-generation-webui or llama-cpp-python**. Please see below for a list of tools that work with this GGML model. |
|
|
|
These files were quantised using hardware kindly provided by [Latitude.sh](https://www.latitude.sh/accelerate). |
|
|
|
## Repositories available |
|
|
|
* [4, 5 and 8-bit GGML models for CPU+GPU inference](https://huggingface.co/TheBloke/replit-openorca-GGML)
|
* [Unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/matorus/replit-openorca) |
|
|
|
## Prompt template: Unknown |
|
|
|
``` |
|
{prompt} |
|
``` |
|
|
|
<!-- compatibility_ggml start --> |
|
## Compatibility
|
|
|
These files are **not** compatible with llama.cpp, text-generation-webui or llama-cpp-python. |
|
|
|
They can be used with: |
|
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a powerful inference engine based on llama.cpp with full GPU acceleration and a good UI.
|
* [LM Studio](https://lmstudio.ai/), a fully featured local GUI for GGML inference on Windows and macOS. |
|
* [LoLLMs-WebUI](https://github.com/ParisNeo/LoLLMs-WebUI), a web UI which supports nearly every backend out there. Use the ctransformers backend for this model.
|
* [ctransformers](https://github.com/marella/ctransformers): for use in Python code, including LangChain support (see the Python sketch after this list).
|
* [rustformers' llm](https://github.com/rustformers/llm) |
|
* The example `replit` binary provided with [ggml](https://github.com/ggerganov/ggml) |
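
For Python use, here is a minimal sketch with ctransformers. It assumes ctransformers is installed (`pip install ctransformers`); the repo ID, file name and `model_type` come from this card, while the example prompt and generation parameters are illustrative only.

```python
# Minimal sketch, assuming ctransformers is installed (pip install ctransformers).
# The repo ID and file name are taken from the "Provided files" table below;
# model_type "replit" tells ctransformers which GGML architecture to load.
from ctransformers import AutoModelForCausalLM

llm = AutoModelForCausalLM.from_pretrained(
    "TheBloke/replit-openorca-GGML",
    model_file="replit-openorca.ggmlv1.q4_1.bin",
    model_type="replit",
)

# The prompt template is unknown, so the prompt is passed through as plain text.
print(llm("def fibonacci(n):", max_new_tokens=128, temperature=0.2))
```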
|
|
|
As other options become available I will endeavour to add them here (do let me know in the Community tab if I've missed something!)
|
|
|
## Tutorial for using LoLLMs-WebUI: |
|
|
|
* [Video tutorial, by LoLLMs-WebUI's author **ParisNeo**](https://youtu.be/vBU1b5n0GMU) |
|
<!-- compatibility_ggml end --> |
|
|
|
## Provided files |
|
| Name | Quant method | Bits | Size | Max RAM required | Use case | |
|
| ---- | ---- | ---- | ---- | ---- | ----- | |
|
| replit-openorca.ggmlv1.q4_0.bin | q4_0 | 4 | 1.46 GB| 3.96 GB | 4-bit. Smallest file size and lowest accuracy of the provided quants. |
|
| replit-openorca.ggmlv1.q4_1.bin | q4_1 | 4 | 1.63 GB| 4.13 GB | 4-bit. Higher accuracy than q4_0 but not as high as q5_0. However, it has quicker inference than the q5 models. |
|
| replit-openorca.ggmlv1.q5_0.bin | q5_0 | 5 | 1.79 GB| 4.29 GB | 5-bit. Higher accuracy, higher resource usage and slower inference. | |
|
| replit-openorca.ggmlv1.q5_1.bin | q5_1 | 5 | 1.95 GB| 4.45 GB | 5-bit. Even higher accuracy, higher resource usage and slower inference. |
|
| replit-openorca.ggmlv1.q8_0.bin | q8_0 | 8 | 2.76 GB| 5.26 GB | 8-bit. Almost indistinguishable from float16. High resource use and slow. Not recommended for most users. | |
|
|
|
**Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead. |
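
If you only need one of the quantised files rather than the whole repo, the following sketch downloads a single file with `huggingface_hub`; this is an illustrative suggestion, not part of the supported tooling listed above, and the chosen file name is just an example from the table.

```python
# Minimal sketch, assuming huggingface_hub is installed (pip install huggingface_hub).
# Downloads just one quantised file from this repo; the file name comes from the
# "Provided files" table above.
from huggingface_hub import hf_hub_download

local_path = hf_hub_download(
    repo_id="TheBloke/replit-openorca-GGML",
    filename="replit-openorca.ggmlv1.q5_0.bin",
)
print(local_path)  # path to the downloaded .bin file in the local HF cache
```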
|
|
|
<!-- footer start --> |
|
## Discord |
|
|
|
For further support, and discussions on these models and AI in general, join us at: |
|
|
|
[TheBloke AI's Discord server](https://discord.gg/theblokeai) |
|
|
|
## Thanks, and how to contribute. |
|
|
|
Thanks to the [chirper.ai](https://chirper.ai) team! |
|
|
|
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training. |
|
|
|
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects. |
|
|
|
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits. |
|
|
|
* Patreon: https://patreon.com/TheBlokeAI |
|
* Ko-Fi: https://ko-fi.com/TheBlokeAI |
|
|
|
**Special thanks to**: Luke from CarbonQuill, Aemon Algiz. |
|
|
|
**Patreon special mentions**: Space Cruiser, Nikolai Manek, Sam, Chris McCloskey, Rishabh Srivastava, Kalila, Spiking Neurons AB, Khalefa Al-Ahmad, WelcomeToTheClub, Chadd, Lone Striker, Viktor Bowallius, Edmond Seymore, Ai Maven, Chris Smitley, Dave, Alexandros Triantafyllidis, Luke @flexchar, Elle, ya boyyy, Talal Aujan, Alex , Jonathan Leane, Deep Realms, Randy H, subjectnull, Preetika Verma, Joseph William Delisle, Michael Levine, chris gileta, K, Oscar Rangel, LangChain4j, Trenton Dambrowitz, Eugene Pentland, Johann-Peter Hartmann, Femi Adebogun, Illia Dulskyi, senxiiz, Daniel P. Andersen, Sean Connelly, Artur Olbinski, RoA, Mano Prime, Derek Yates, Raven Klaugh, David Flickinger, Willem Michiel, Pieter, Willian Hasse, vamX, Luke Pendergrass, webtim, Ghost , Rainer Wilmers, Nathan LeClaire, Will Dee, Cory Kujawski, John Detwiler, Fred von Graf, biorpg, Iucharbius , Imad Khwaja, Pierre Kircher, terasurfer , Asp the Wyvern, John Villwock, theTransient, zynix , Gabriel Tamborski, Fen Risland, Gabriel Puliatti, Matthew Berman, Pyrater, SuperWojo, Stephen Murray, Karl Bernard, Ajan Kanaga, Greatston Gnanesh, Junyu Yang. |
|
|
|
Thank you to all my generous patrons and donaters! |
|
|
|
<!-- footer end --> |
|
|
|
# Original model card: Matorus's Replit OpenOrca |
|
|
|
[replit/replit-code-v1-3b](https://huggingface.co/replit/replit-code-v1-3b) finetuned on [Open-Orca/OpenOrca](https://huggingface.co/datasets/Open-Orca/OpenOrca). |
|
|