inference: false
license: other
model_creator: totally-not-an-llm
model_link: https://huggingface.co/totally-not-an-llm/AlpacaCielo-13b
model_name: AlpacaCielo 13B
model_type: llama
quantized_by: TheBloke
AlpacaCielo 13B - GPTQ
- Model creator: totally-not-an-llm
- Original model: AlpacaCielo 13B
Description
This repo contains GPTQ model files for totally-not-an-llm's AlpacaCielo 13B.
Multiple GPTQ parameter permutations are provided; see Provided Files below for details of the options provided, their parameters, and the software used to create them.
Repositories available
- GPTQ models for GPU inference, with multiple quantisation parameter options.
- 2, 3, 4, 5, 6 and 8-bit GGML models for CPU+GPU inference
- totally-not-an-llm's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions
Prompt template: Guanaco
### Human: {prompt}
### Assistant:
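For reference, a minimal Python sketch (not part of the original card; the helper name is illustrative) of filling this template before passing it to a tokenizer:

def build_prompt(user_message: str) -> str:
    # Wrap the user's message in the Guanaco-style template shown above.
    return f"### Human: {user_message}\n### Assistant:"

print(build_prompt("Tell me about AI"))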
Provided files
Multiple quantisation parameters are provided, to allow you to choose the best one for your hardware and requirements.
Each separate quant is in a different branch. See below for instructions on fetching from different branches.
Branch | Bits | Group Size | Act Order (desc_act) | File Size | ExLlama Compatible? | Made With | Description |
---|---|---|---|---|---|---|---|
main | 4 | 128 | False | 7.26 GB | True | AutoGPTQ | Most compatible option. Good inference speed in AutoGPTQ and GPTQ-for-LLaMa. Lower inference quality than other options. |
gptq-4bit-32g-actorder_True | 4 | 32 | True | 8.00 GB | True | AutoGPTQ | 4-bit, with Act Order and group size 32g. Gives highest possible inference quality, with maximum VRAM usage. Poor AutoGPTQ CUDA speed. |
gptq-4bit-64g-actorder_True | 4 | 64 | True | 7.51 GB | True | AutoGPTQ | 4-bit, with Act Order and group size 64g. Uses less VRAM than 32g, but with slightly lower accuracy. Poor AutoGPTQ CUDA speed. |
gptq-4bit-128g-actorder_True | 4 | 128 | True | 7.26 GB | True | AutoGPTQ | 4-bit, with Act Order and group size 128g. Uses even less VRAM than 64g, but with slightly lower accuracy. Poor AutoGPTQ CUDA speed. |
gptq-8bit--1g-actorder_True | 8 | None | True | Processing, coming soon | False | AutoGPTQ | 8-bit, with Act Order. No group size, to lower VRAM requirements and to improve AutoGPTQ speed. |
gptq-8bit-128g-actorder_False | 8 | 128 | False | Processing, coming soon | False | AutoGPTQ | 8-bit, with group size 128g for higher inference quality and without Act Order to improve AutoGPTQ speed. |
gptq-8bit-128g-actorder_True | 8 | 128 | True | Processing, coming soon | False | AutoGPTQ | 8-bit, with group size 128g for higher inference quality and with Act Order for even higher accuracy. Poor AutoGPTQ CUDA speed. |
gptq-8bit-64g-actorder_True | 8 | 64 | True | Processing, coming soon | False | AutoGPTQ | 8-bit, with group size 64g and Act Order for maximum inference quality. Poor AutoGPTQ CUDA speed. |
How to download from branches
- In text-generation-webui, you can add :branch to the end of the download name, e.g. TheBloke/AlpacaCielo-13B-GPTQ:gptq-4bit-32g-actorder_True
- With Git, you can clone a branch with:
git clone --branch gptq-4bit-32g-actorder_True https://huggingface.co/TheBloke/AlpacaCielo-13B-GPTQ
- In Python Transformers code, the branch is the revision parameter; see below.
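You can also pre-download a specific branch from Python using the huggingface_hub library. A minimal sketch, assuming huggingface_hub is installed (the local_dir path is only an example):

from huggingface_hub import snapshot_download

# Download only the chosen branch (revision) of the repo into a local folder.
snapshot_download(
    repo_id="TheBloke/AlpacaCielo-13B-GPTQ",
    revision="gptq-4bit-32g-actorder_True",
    local_dir="AlpacaCielo-13B-GPTQ-4bit-32g",  # example destination path
)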
How to easily download and use this model in text-generation-webui.
Please make sure you're using the latest version of text-generation-webui.
It is strongly recommended to use the text-generation-webui one-click-installers unless you know how to make a manual install.
- Click the Model tab.
- Under Download custom model or LoRA, enter TheBloke/AlpacaCielo-13B-GPTQ.
- To download from a specific branch, enter for example TheBloke/AlpacaCielo-13B-GPTQ:gptq-4bit-32g-actorder_True; see Provided Files above for the list of branches for each option.
- Click Download.
- The model will start downloading. Once it's finished it will say "Done".
- In the top left, click the refresh icon next to Model.
- In the Model dropdown, choose the model you just downloaded: AlpacaCielo-13B-GPTQ.
- The model will automatically load, and is now ready for use!
- If you want any custom settings, set them and then click Save settings for this model followed by Reload the Model in the top right.
- Note that you do not need to set GPTQ parameters any more. These are set automatically from the file quantize_config.json.
- Once you're ready, click the Text Generation tab and enter a prompt to get started!
How to use this GPTQ model from Python code
First make sure you have AutoGPTQ installed:
GITHUB_ACTIONS=true pip install auto-gptq
Then try the following example code:
from transformers import AutoTokenizer, pipeline, logging
from auto_gptq import AutoGPTQForCausalLM, BaseQuantizeConfig
model_name_or_path = "TheBloke/AlpacaCielo-13B-GPTQ"
model_basename = "gptq_model-4bit-128g"
use_triton = False
tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, use_fast=True)
model = AutoGPTQForCausalLM.from_quantized(model_name_or_path,
        model_basename=model_basename,
        use_safetensors=True,
        trust_remote_code=False,
        device="cuda:0",
        use_triton=use_triton,
        quantize_config=None)

"""
To download from a specific branch, use the revision parameter, as in this example:

model = AutoGPTQForCausalLM.from_quantized(model_name_or_path,
        revision="gptq-4bit-32g-actorder_True",
        model_basename=model_basename,
        use_safetensors=True,
        trust_remote_code=False,
        device="cuda:0",
        quantize_config=None)
"""
prompt = "Tell me about AI"
prompt_template=f'''### Human: {prompt}
### Assistant:
'''
print("\n\n*** Generate:")
input_ids = tokenizer(prompt_template, return_tensors='pt').input_ids.cuda()
output = model.generate(inputs=input_ids, temperature=0.7, max_new_tokens=512)
print(tokenizer.decode(output[0]))
# Inference can also be done using transformers' pipeline
# Prevent printing spurious transformers error when using pipeline with AutoGPTQ
logging.set_verbosity(logging.CRITICAL)
print("*** Pipeline:")
pipe = pipeline(
    "text-generation",
    model=model,
    tokenizer=tokenizer,
    max_new_tokens=512,
    temperature=0.7,
    top_p=0.95,
    repetition_penalty=1.15
)
print(pipe(prompt_template)[0]['generated_text'])
Compatibility
The files provided will work with AutoGPTQ (CUDA and Triton modes), GPTQ-for-LLaMa (only CUDA has been tested), and Occ4m's GPTQ-for-LLaMa fork.
ExLlama works with Llama models in 4-bit. Please see the Provided Files table above for per-file compatibility.
Discord
For further support, and discussions on these models and AI in general, join us at TheBloke AI's Discord server.
Thanks, and how to contribute.
Thanks to the chirper.ai team!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
- Patreon: https://patreon.com/TheBlokeAI
- Ko-Fi: https://ko-fi.com/TheBlokeAI
Special thanks to: Luke from CarbonQuill, Aemon Algiz.
Patreon special mentions: Slarti, Chadd, John Detwiler, Pieter, zynix, K, Mano Prime, ReadyPlayerEmma, Ai Maven, Leonard Tan, Edmond Seymore, Joseph William Delisle, Luke @flexchar, Fred von Graf, Viktor Bowallius, Rishabh Srivastava, Nikolai Manek, Matthew Berman, Johann-Peter Hartmann, ya boyyy, Greatston Gnanesh, Femi Adebogun, Talal Aujan, Jonathan Leane, terasurfer, David Flickinger, William Sang, Ajan Kanaga, Vadim, Artur Olbinski, Raven Klaugh, Michael Levine, Oscar Rangel, Randy H, Cory Kujawski, RoA, Dave, Alex, Alexandros Triantafyllidis, Fen Risland, Eugene Pentland, vamX, Elle, Nathan LeClaire, Khalefa Al-Ahmad, Rainer Wilmers, subjectnull, Junyu Yang, Daniel P. Andersen, SuperWojo, LangChain4j, Mandus, Kalila, Illia Dulskyi, Trenton Dambrowitz, Asp the Wyvern, Derek Yates, Jeffrey Morgan, Deep Realms, Imad Khwaja, Pyrater, Preetika Verma, biorpg, Gabriel Tamborski, Stephen Murray, Spiking Neurons AB, Iucharbius, Chris Smitley, Willem Michiel, Luke Pendergrass, Sebastain Graf, senxiiz, Will Dee, Space Cruiser, Karl Bernard, Clay Pascal, Lone Striker, transmissions 11, webtim, WelcomeToTheClub, Sam, theTransient, Pierre Kircher, chris gileta, John Villwock, Sean Connelly, Willian Hasse
Thank you to all my generous patrons and donaters!
Original model card: totally-not-an-llm's AlpacaCielo 13B
AlpacaCielo-13b
Disclaimer: The model might have a tokenizer issue, but it still functions. Updates to come.
AlpacaCielo-13b is a llama-2 based model designed for creative tasks, such as storytelling and roleplay, while still doing well with other chatbot purposes. It is a triple model merge of Nous-Hermes + Guanaco + Storywriter. While it is mostly "uncensored", it still inherits some alignment from Guanaco.
Prompt format is:
### Human: {prompt}
### Assistant:
Thanks to previous similar models such as Alpacino, Alpasta, and AlpacaDente for inspiring the creation of this model. Thanks also to the creators of the models involved in the merge. Original models: