---
inference: false
license: cc
datasets:
- VMware/open-instruct-v1-oasst-dolly-hhrlhf
language:
- en
library_name: transformers
pipeline_tag: text-generation
---

# blackmount8/open-llama-13B-open-instruct-ct2-int8_float16

Int8_float16 version of [VMware/open-llama-13b-open-instruct](https://huggingface.co/VMware/open-llama-13b-open-instruct), quantized using CTranslate2.

## VMware/open-llama-13B-open-instruct

Instruction-tuned version of the fully trained OpenLLaMA 13B model. The model is open for `COMMERCIAL USE`.

`NOTE`: The model was trained using the Alpaca prompt template.

`NOTE`: The fast tokenizer produces incorrect encodings for this model; set `use_fast=False` when instantiating the tokenizer.
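Because the model was instruction-tuned on Alpaca-formatted data, prompts generally work best when wrapped in that template before tokenization. The sketch below uses the standard Alpaca wording; the exact template string is an assumption here, so verify it against the [VMware/open-llama-13b-open-instruct](https://huggingface.co/VMware/open-llama-13b-open-instruct) card.

```python
# Standard Alpaca-style template (assumed wording; check the base model card).
PROMPT_TEMPLATE = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\n{instruction}\n\n### Response:"
)

prompt = PROMPT_TEMPLATE.format(instruction="What is the meaning of stonehenge?")
```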
## License

- `Commercially Viable`
- The instruction dataset, [VMware/open-instruct-v1-oasst-dolly-hhrlhf](https://huggingface.co/datasets/VMware/open-instruct-v1-oasst-dolly-hhrlhf), is under cc-by-sa-3.0
- The language model, [openlm-research/open_llama_13b](https://huggingface.co/openlm-research/open_llama_13b), is under apache-2.0

## Nomenclature

- Model: Open-llama
- Model size: 13B parameters
- Dataset: Open-instruct-v1 (oasst, dolly, hhrlhf)

## Use in CTranslate2

```python
import ctranslate2
from transformers import AutoTokenizer

model_name = "blackmount8/open-llama-13b-open-instruct-ct2-int8_float16"

# The fast tokenizer mis-encodes this model, so use_fast=False is required.
tokenizer = AutoTokenizer.from_pretrained(
    model_name, use_fast=False, padding_side="left", truncation_side="left"
)
model = ctranslate2.Generator(model_name, device="auto", compute_type="int8_float16")

input_text = ["What is the meaning of stonehenge?", "Hello mate!"]
input_ids = tokenizer(
    input_text, return_tensors="pt", padding=True, truncation=True
).input_ids

# CTranslate2 consumes token strings rather than token ids.
input_tokens = [tokenizer.convert_ids_to_tokens(ids) for ids in input_ids]

outputs = model.generate_batch(input_tokens, max_length=128)
output_tokens = [ele.sequences_ids[0] for ele in outputs]
output = tokenizer.batch_decode(output_tokens)
print(output)
```
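Note that `ctranslate2.Generator` loads a converted model from a local directory, so the snippet above assumes the repository has already been downloaded to a folder with that name. A minimal sketch for fetching it first with `huggingface_hub.snapshot_download` (assuming the `huggingface_hub` package is installed):

```python
import ctranslate2
from huggingface_hub import snapshot_download

# Download the converted weights from the Hub; returns the local snapshot path.
model_path = snapshot_download(
    repo_id="blackmount8/open-llama-13b-open-instruct-ct2-int8_float16"
)

# Point the generator at the local directory rather than the repo id.
model = ctranslate2.Generator(model_path, device="auto", compute_type="int8_float16")
```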