---
license: apache-2.0
language:
- en
pipeline_tag: text-generation
---

# BrokenSoul/llama-2-7b-miniguanaco

This is a test model fine-tuned for learning purposes.

#### How to use

```python
from transformers import (
    AutoTokenizer,
    pipeline,
)

model_name = "BrokenSoul/llama-2-7b-miniguanaco"

# Load the tokenizer and match the padding setup used during fine-tuning
tokenizer = AutoTokenizer.from_pretrained(model_name, trust_remote_code=True)
tokenizer.pad_token = tokenizer.eos_token
tokenizer.padding_side = "right"

# Wrap the prompt in the Llama 2 instruction format and generate
prompt = "What is a large language model?"
pipe = pipeline(
    task="text-generation",
    model=model_name,
    tokenizer=tokenizer,
    max_length=200,
)
result = pipe(f"[INST] {prompt} [/INST]")
print(result[0]["generated_text"])
```

#### Training data

[mlabonne/guanaco-llama2-1k](https://huggingface.co/datasets/mlabonne/guanaco-llama2-1k) dataset.

#### Training procedure

It was trained following [Maxime Labonne](https://colab.research.google.com/github/mlabonne/llm-course/blob/main/Fine_tune_Llama_2_in_Google_Colab.ipynb#scrollTo=ib_We3NLtj2E)'s guide. All credit goes to him.
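
For reference, the linked guide follows a QLoRA-style recipe: load the base Llama 2 model in 4-bit, attach LoRA adapters, and run supervised fine-tuning on the guanaco-llama2-1k dataset with TRL's `SFTTrainer`. The sketch below is an illustrative outline only, not this model's exact training script; the base checkpoint id, the hyperparameter values, and the `SFTTrainer` keyword arguments (which changed across TRL releases) are assumptions based on that notebook.

```python
# Illustrative QLoRA fine-tuning sketch (assumes transformers, peft, trl,
# bitsandbytes, and datasets are installed, and an older TRL API, ~0.7.x,
# as used in the linked guide).
import torch
from datasets import load_dataset
from peft import LoraConfig
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    BitsAndBytesConfig,
    TrainingArguments,
)
from trl import SFTTrainer

base_model = "NousResearch/Llama-2-7b-chat-hf"  # assumed base checkpoint
dataset = load_dataset("mlabonne/guanaco-llama2-1k", split="train")

# Load the base model in 4-bit (NF4) so it fits on a single consumer GPU
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.float16,
)
model = AutoModelForCausalLM.from_pretrained(
    base_model,
    quantization_config=bnb_config,
    device_map={"": 0},
)
model.config.use_cache = False

tokenizer = AutoTokenizer.from_pretrained(base_model, trust_remote_code=True)
tokenizer.pad_token = tokenizer.eos_token
tokenizer.padding_side = "right"

# LoRA adapters trained on top of the frozen, quantized base model
peft_config = LoraConfig(
    r=64,
    lora_alpha=16,
    lora_dropout=0.1,
    bias="none",
    task_type="CAUSAL_LM",
)

# Example hyperparameters only; the actual values come from the notebook
training_args = TrainingArguments(
    output_dir="./results",
    num_train_epochs=1,
    per_device_train_batch_size=4,
    gradient_accumulation_steps=1,
    learning_rate=2e-4,
    fp16=True,
    logging_steps=25,
)

trainer = SFTTrainer(
    model=model,
    train_dataset=dataset,
    peft_config=peft_config,
    dataset_text_field="text",  # the dataset stores prompts in Llama 2 chat format
    max_seq_length=None,
    tokenizer=tokenizer,
    args=training_args,
    packing=False,
)
trainer.train()

# Save the LoRA adapter; merge it into the base model afterwards if needed
trainer.model.save_pretrained("llama-2-7b-miniguanaco")
```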