Text Generation
Transformers
English

blackmount8/open-llama-7B-open-instruct-ct2-float16

Float16 version of VMware/open-llama-7b-open-instruct, converted to the CTranslate2 format with float16 quantization.

VMware/open-llama-7B-open-instruct

Instruction-tuned version of the fully trained OpenLLaMA 7B model. The model is open for <b>COMMERCIAL USE</b>.

<b>NOTE</b>: The model was trained using the Alpaca prompt template.

<b>NOTE</b>: The fast tokenizer produces incorrect encodings; set the `use_fast=False` parameter when instantiating the tokenizer.
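Since the model was trained on Alpaca-formatted prompts, raw instructions should be wrapped accordingly before generation. A minimal sketch, assuming the standard Alpaca template; the exact wording used during fine-tuning may differ:

```python
# Standard Alpaca prompt template (assumed; verify against the fine-tuning setup).
ALPACA_TEMPLATE = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\n{instruction}\n\n### Response:"
)

def build_prompt(instruction: str) -> str:
    """Wrap a raw instruction in the Alpaca template."""
    return ALPACA_TEMPLATE.format(instruction=instruction)

print(build_prompt("What is the meaning of stonehenge?"))
```

Prompts built this way can be passed directly to the tokenizer in the usage example below.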

License

Nomenclature

  • Model : Open-llama
  • Model Size: 7B parameters
  • Dataset: Open-instruct-v1 (oasst, dolly, hhrlhf)

Use in CTranslate2

import ctranslate2
from transformers import AutoTokenizer

model_name = "blackmount8/open-llama-7b-open-instruct-ct2-float16"

# The fast tokenizer produces incorrect encodings for this model, so use_fast=False is required.
tokenizer = AutoTokenizer.from_pretrained(model_name, use_fast=False, padding_side="left", truncation_side="left")
# ctranslate2.Generator expects a path to the converted model directory;
# download the repository first (e.g. with huggingface_hub.snapshot_download) if it is not cached locally.
model = ctranslate2.Generator(model_name, device="auto", compute_type="float16")

input_text = ["What is the meaning of stonehenge?", "Hello mate!"]

# Tokenize the batch, then convert token ids back to token strings,
# since CTranslate2 consumes token strings rather than ids.
input_ids = tokenizer(input_text, return_tensors="pt", padding=True, truncation=True).input_ids
input_tokens = [tokenizer.convert_ids_to_tokens(ele) for ele in input_ids]

outputs = model.generate_batch(input_tokens, max_length=128)

# Each result holds one or more hypotheses; take the token ids of the best one.
output_tokens = [ele.sequences_ids[0] for ele in outputs]

output = tokenizer.batch_decode(output_tokens)

print(output)
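Because generation runs over the full prompt, the decoded output contains the prompt text followed by the completion. A hypothetical helper (not part of the card) that splits on the Alpaca `### Response:` marker, assuming prompts were built with that template:

```python
def extract_response(decoded: str, marker: str = "### Response:") -> str:
    """Return only the text after the last response marker, stripped of surrounding whitespace."""
    _, sep, tail = decoded.rpartition(marker)
    # If the marker is absent, return the decoded string unchanged.
    return tail.strip() if sep else decoded

print(extract_response("### Instruction:\nHello mate!\n\n### Response:\nHello! How can I help?"))
```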