
cgato/L3-TheSpice-8b-v0.8.3 AWQ

Model Info

This version is no longer overtrained and includes the tokenizer fix for base Llama 3. Trained for 3 epochs.

The latest TheSpice, dipped in Mama Liz's LimaRP Oil. I've focused on making the model more flexible and on providing a more unique experience. I'm still working on cleaning up my dataset, but I've shrunk it down a lot to focus on a "less is more" approach. This is ultimately a return to form of the way I used to train Thespis, with more of a focus on a small, hand-edited dataset.

  • Capybara
  • Claude Multiround 30k
  • Augmental
  • ToxicQA
  • Yahoo Answers
  • Airoboros 3.1
  • LimaRP

How to use

Install the necessary packages

pip install --upgrade autoawq autoawq-kernels

Example Python code

from awq import AutoAWQForCausalLM
from transformers import AutoTokenizer, TextStreamer

model_path = "solidrust/L3-TheSpice-8b-v0.8.3-AWQ"
system_message = "You are L3-TheSpice-8b-v0.8.3, incarnated as a powerful AI. You were created by cgato."

# Load model
model = AutoAWQForCausalLM.from_quantized(model_path,
                                          fuse_layers=True)
tokenizer = AutoTokenizer.from_pretrained(model_path,
                                          trust_remote_code=True)
streamer = TextStreamer(tokenizer,
                        skip_prompt=True,
                        skip_special_tokens=True)

# Convert prompt to tokens
prompt_template = """\
<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant"""

prompt = "You're standing on the surface of the Earth. "\
        "You walk one mile south, one mile west and one mile north. "\
        "You end up exactly where you started. Where are you?"

tokens = tokenizer(prompt_template.format(system_message=system_message, prompt=prompt),
                   return_tensors='pt').input_ids.cuda()

# Generate output
generation_output = model.generate(tokens,
                                   streamer=streamer,
                                   max_new_tokens=512)
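
The TextStreamer prints tokens to stdout as they are generated. If you also want the completed text as a string afterwards, you can decode the returned token IDs; a minimal sketch using the variables from the example above (output_text is just an illustrative name):

# Decode the full sequence (prompt + completion); skip_special_tokens drops the template markers
output_text = tokenizer.decode(generation_output[0], skip_special_tokens=True)
print(output_text)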

About AWQ

AWQ is an efficient, accurate and blazing-fast low-bit weight quantization method, currently supporting 4-bit quantization. Compared to GPTQ, it offers faster Transformers-based inference with equivalent or better quality than the most commonly used GPTQ settings.
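
For reference, this is roughly how such an AWQ checkpoint is produced with AutoAWQ; a minimal sketch where the paths and quant_config values are common 4-bit defaults, not necessarily the exact settings used for this repo:

from awq import AutoAWQForCausalLM
from transformers import AutoTokenizer

# Illustrative paths; this repo was quantized from cgato/L3-TheSpice-8b-v0.8.3
base_path = "cgato/L3-TheSpice-8b-v0.8.3"
quant_path = "L3-TheSpice-8b-v0.8.3-AWQ"

# Typical 4-bit AWQ settings (assumed here; the repo's actual config may differ)
quant_config = {"zero_point": True, "q_group_size": 128, "w_bit": 4, "version": "GEMM"}

# Load the FP16 model, quantize it with a calibration pass, and save the AWQ weights
model = AutoAWQForCausalLM.from_pretrained(base_path)
tokenizer = AutoTokenizer.from_pretrained(base_path, trust_remote_code=True)
model.quantize(tokenizer, quant_config=quant_config)
model.save_quantized(quant_path)
tokenizer.save_pretrained(quant_path)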

AWQ models are currently supported on Linux and Windows, with NVIDIA GPUs only. macOS users: please use GGUF models instead.

It is supported by:

  • Text Generation Webui (using the AutoAWQ loader)
  • vLLM (see the sketch below)
  • Hugging Face Text Generation Inference (TGI)
  • Transformers
  • AutoAWQ (for use from Python code, as shown above)
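
For example, serving this AWQ repo with vLLM looks roughly like this; a minimal sketch, assuming vLLM is installed and a supported NVIDIA GPU is available (the sampling settings are illustrative):

from vllm import LLM, SamplingParams

# Load the AWQ checkpoint; quantization="awq" tells vLLM how to handle the 4-bit weights
llm = LLM(model="solidrust/L3-TheSpice-8b-v0.8.3-AWQ", quantization="awq")

sampling = SamplingParams(temperature=0.8, max_tokens=256)
outputs = llm.generate(["Tell me about yourself."], sampling)
print(outputs[0].outputs[0].text)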
