Gemma Orchid 7b


Built with Axolotl

This model is the second checkpoint of a larger, ongoing project. It is capable of function calling and has a strong base in conversational skills.

This model has been finetuned on roughly 80k samples so far.
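The expected function-calling schema is not documented on this card; the sketch below assumes the convention of glaiveai/glaive_function_calling_v2 (one of the training datasets), where available functions are described as JSON inside the system message and the model replies with a structured call. The tool definition and message wording here are hypothetical examples.

import json

# Hypothetical tool definition; adjust the schema to whatever your
# checkpoint was actually trained on.
get_weather = {
    "name": "get_current_weather",
    "description": "Get the current weather in a given location",
    "parameters": {
        "type": "object",
        "properties": {
            "location": {"type": "string", "description": "City name"},
        },
        "required": ["location"],
    },
}

system_msg = (
    "You are a helpful assistant with access to the following functions. "
    "Use them if required -\n" + json.dumps(get_weather, indent=2)
)
user_msg = "What's the weather like in Paris right now?"
# Wrap these messages in the ChatML template (see Training below) and
# generate; the model should answer with a structured function call.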

Training

  • Time to complete: ~20 hours
  • Datasets: Thermostatic/flowers, Intel/orca_dpo_pairs, jondurbin/truthy-dpo-v0.1, glaiveai/glaive_function_calling_v2
  • Evaluation loss: 0.69
  • Method: LoRA
  • Prompt Format: ChatML (see the sketch after this list)
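
ChatML wraps each conversation turn in <|im_start|> and <|im_end|> markers. Below is a minimal sketch of building a prompt by hand; the messages are illustrative, and if the repo's tokenizer ships a ChatML chat template, tokenizer.apply_chat_template produces the same string.

# ChatML places each turn between <|im_start|>role and <|im_end|> markers.
def to_chatml(messages):
    prompt = ""
    for message in messages:
        prompt += f"<|im_start|>{message['role']}\n{message['content']}<|im_end|>\n"
    return prompt + "<|im_start|>assistant\n"  # cue the model to answer

messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Write me a poem about Machine Learning."},
]
print(to_chatml(messages))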

Thermostatic/flowers is a blend of open-source model generations formatted in ShareGPT. It also includes the entirety of Capybara.

This model has been exposed to a wide variety of data. macadeliccc/gemma-function-calling-7b is a suitable base to finetune further with the dataset of your choosing.
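
A minimal sketch of attaching a fresh LoRA adapter for further training; the hyperparameters and target modules below are illustrative assumptions, not the values used to train this checkpoint.

# pip install peft transformers
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

model = AutoModelForCausalLM.from_pretrained("macadeliccc/gemma-orchid-7b-dpo")

# Illustrative LoRA settings; tune for your dataset and hardware.
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)

model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # only the adapter weights are trainable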

Running the model on a CPU

from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("macadeliccc/gemma-orchid-7b-dpo")
model = AutoModelForCausalLM.from_pretrained("macadeliccc/gemma-orchid-7b-dpo")

input_text = "Write me a poem about Machine Learning."
input_ids = tokenizer(input_text, return_tensors="pt")

outputs = model.generate(**input_ids)
print(tokenizer.decode(outputs[0]))

Running the model on a single / multi GPU

# pip install accelerate
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("macadeliccc/gemma-orchid-7b-dpo")
model = AutoModelForCausalLM.from_pretrained("macadeliccc/gemma-orchid-7b-dpo", device_map="auto")

input_text = "Write me a poem about Machine Learning."
input_ids = tokenizer(input_text, return_tensors="pt").to("cuda")

outputs = model.generate(**input_ids)
print(tokenizer.decode(outputs[0]))

Running the model on a GPU using different precisions

  • Using torch.float16
# pip install accelerate
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("macadeliccc/gemma-orchid-7b-dpo")
model = AutoModelForCausalLM.from_pretrained("macadeliccc/gemma-orchid-7b-dpo", device_map="auto", torch_dtype=torch.float16)

input_text = "Write me a poem about Machine Learning."
input_ids = tokenizer(input_text, return_tensors="pt").to("cuda")

outputs = model.generate(**input_ids)
print(tokenizer.decode(outputs[0]))
  • Using torch.bfloat16
# pip install accelerate
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("macadeliccc/gemma-orchid-7b-dpo")
model = AutoModelForCausalLM.from_pretrained("macadeliccc/gemma-orchid-7b-dpo", device_map="auto", torch_dtype=torch.bfloat16)

input_text = "Write me a poem about Machine Learning."
input_ids = tokenizer(input_text, return_tensors="pt").to("cuda")

outputs = model.generate(**input_ids)
print(tokenizer.decode(outputs[0]))

Quantized Versions through bitsandbytes

  • Using 8-bit precision (int8)
# pip install bitsandbytes accelerate
from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig

quantization_config = BitsAndBytesConfig(load_in_8bit=True)

tokenizer = AutoTokenizer.from_pretrained("macadeliccc/gemma-orchid-7b-dpo")
model = AutoModelForCausalLM.from_pretrained("macadeliccc/gemma-orchid-7b-dpo", quantization_config=quantization_config)

input_text = "Write me a poem about Machine Learning."
input_ids = tokenizer(input_text, return_tensors="pt").to("cuda")

outputs = model.generate(**input_ids)
print(tokenizer.decode(outputs[0]))
  • Using 4-bit precision
# pip install bitsandbytes accelerate
from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig

quantization_config = BitsAndBytesConfig(load_in_4bit=True)

tokenizer = AutoTokenizer.from_pretrained("macadeliccc/gemma-orchid-7b-dpo")
model = AutoModelForCausalLM.from_pretrained("macadeliccc/gemma-orchid-7b-dpo", quantization_config=quantization_config)

input_text = "Write me a poem about Machine Learning."
input_ids = tokenizer(input_text, return_tensors="pt").to("cuda")

outputs = model.generate(**input_ids)
print(tokenizer.decode(outputs[0]))

Other optimizations

  • Flash Attention 2

First make sure to install flash-attn in your environment: pip install flash-attn

import torch
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained(
    "macadeliccc/gemma-orchid-7b-dpo",
    torch_dtype=torch.float16,
    attn_implementation="flash_attention_2",
).to(0)

Inputs and outputs

  • Input: Text string, such as a question, a prompt, or a document to be summarized.
  • Output: Generated English-language text in response to the input, such as an answer to a question, or a summary of a document.

Evaluations

In progress

ExLlamaV2

Available here

Open LLM Leaderboard Evaluation Results

Detailed results can be found here

Metric                              Value
Avg.                                64.37
AI2 Reasoning Challenge (25-Shot)   62.88
HellaSwag (10-Shot)                 80.95
MMLU (5-Shot)                       61.41
TruthfulQA (0-shot)                 53.27
Winogrande (5-shot)                 77.51
GSM8k (5-shot)                      50.19