---
base_model: mwitiderrick/open_llama_3b_code_instruct_0.1
datasets:
- mwitiderrick/AlpacaCode
inference: true
model_type: llama
prompt_template: |
  [INST]
  {prompt}
  [/INST]
created_by: mwitiderrick
tags:
- transformers
license: apache-2.0
language:
- en
library_name: transformers
pipeline_tag: text-generation
model-index:
- name: mwitiderrick/open_llama_3b_instruct_v_0.2
  results:
  - task:
      type: text-generation
    dataset:
      name: hellaswag
      type: hellaswag
    metrics:
    - name: hellaswag(0-Shot)
      type: hellaswag (0-Shot)
      value: 0.
  - task:
      type: text-generation
    dataset:
      name: winogrande
      type: winogrande
    metrics:
    - name: winogrande(0-Shot)
      type: winogrande (0-Shot)
      value: 0.
  - task:
      type: text-generation
    dataset:
      name: arc_challenge
      type: arc_challenge
    metrics:
    - name: arc_challenge(0-Shot)
      type: arc_challenge (0-Shot)
      value: 0.
    source:
      name: open_llama_3b_instruct_v_0.2 model card
      url: https://huggingface.co/mwitiderrick/open_llama_3b_instruct_v_0.2
---
# OpenLLaMA Code Instruct: An Open Reproduction of LLaMA

This is an [OpenLLaMA Code Instruct model](https://huggingface.co/mwitiderrick/open_llama_3b_code_instruct_0.1) that has been fine-tuned for 1 epoch on the [Glaive code assistant](https://huggingface.co/datasets/mwitiderrick/glaive-code-assistant) dataset.

## Prompt Template
```
[INST]
{{ user_msg }}
[/INST]
```

## Usage

```python
from transformers import AutoTokenizer, AutoModelForCausalLM, pipeline

tokenizer = AutoTokenizer.from_pretrained("mwitiderrick/open_llama_3b_glaive_assistant_v0.1")
model = AutoModelForCausalLM.from_pretrained("mwitiderrick/open_llama_3b_glaive_assistant_v0.1")

query = "Write a quick sort algorithm in Python"
text_gen = pipeline(task="text-generation", model=model, tokenizer=tokenizer, max_length=200)
output = text_gen(f"[INST]{query}[/INST]")
print(output[0]['generated_text'])
"""
"""
```

## Metrics
```

```
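The metrics block above is left unpopulated. The hellaswag, winogrande, and arc_challenge 0-shot scores referenced in the model index are typically produced with EleutherAI's lm-evaluation-harness; the sketch below shows one way to reproduce them. It assumes the harness's v0.4+ Python API (`lm_eval.simple_evaluate`), so verify the call against the installed version.

```python
# A minimal sketch for reproducing the 0-shot scores, assuming EleutherAI's
# lm-evaluation-harness (pip install lm-eval) and its v0.4+ Python API.
import lm_eval

results = lm_eval.simple_evaluate(
    model="hf",  # Hugging Face transformers backend
    model_args="pretrained=mwitiderrick/open_llama_3b_glaive_assistant_v0.1",
    tasks=["hellaswag", "winogrande", "arc_challenge"],
    num_fewshot=0,
)
# results["results"] maps each task name to its metric values
print(results["results"])
```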