---
language:
- en
- fr
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- mistral
- trl
- sft
base_model: unsloth/mistral-7b-v0.3
datasets:
- jpacifico/French-Alpaca-dataset-Instruct-110K
---

# Uploaded model

- **Developed by:** AdrienB134
- **License:** apache-2.0
- **Finetuned from model:** unsloth/mistral-7b-v0.3

This Mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.

# How to use

```python
import torch  # only needed if you set dtype explicitly, e.g. torch.float16
from unsloth import FastLanguageModel
from transformers import TextStreamer

max_seq_length = 32_768  # Choose any! Unsloth supports RoPE scaling internally.
dtype = None             # None for auto-detection; torch.float16 for Tesla T4/V100, torch.bfloat16 for Ampere+
load_in_4bit = False     # Set to True to use 4-bit quantization and reduce memory usage

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name = "AdrienB134/French-Alpaca-Mistral-7B-v0.3",
    max_seq_length = max_seq_length,
    dtype = dtype,
    load_in_4bit = load_in_4bit,
    # token = "hf_...", # only needed for gated models like meta-llama/Llama-2-7b-hf
)

# French Alpaca prompt template ("Below is an instruction that describes a task,
# paired with a context that provides further information. Write an appropriate
# response to the instruction.")
alpaca_prompt = """Ci-dessous tu trouveras une instruction qui décrit une tâche, accompagnée d'un contexte qui donne plus d'informations. Ecrit une réponse appropriée à l'instruction.

### Instruction:
{}

### Contexte:
{}

### Response:
{}"""

FastLanguageModel.for_inference(model)  # Enable native 2x faster inference

inputs = tokenizer(
    [
        alpaca_prompt.format(
            "Continue la série de Fibonacci.",  # instruction
            "1, 1, 2, 3, 5, 8",                 # contexte
            "",                                 # output - leave this blank for generation!
        )
    ],
    return_tensors = "pt",
).to("cuda")

text_streamer = TextStreamer(tokenizer)
_ = model.generate(**inputs, streamer = text_streamer, max_new_tokens = 128)
```
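
If you prefer not to install Unsloth, the model should also load with plain `transformers`. A minimal sketch, assuming the repository ships standard `AutoModelForCausalLM`-compatible weights; it reuses the `alpaca_prompt` template defined in the snippet above:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, TextStreamer

model_id = "AdrienB134/French-Alpaca-Mistral-7B-v0.3"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype = torch.bfloat16,  # use torch.float16 on pre-Ampere GPUs
    device_map = "auto",
)

# alpaca_prompt is the French Alpaca template from the Unsloth snippet above
prompt = alpaca_prompt.format("Continue la série de Fibonacci.", "1, 1, 2, 3, 5, 8", "")
inputs = tokenizer([prompt], return_tensors = "pt").to(model.device)
_ = model.generate(**inputs, streamer = TextStreamer(tokenizer), max_new_tokens = 128)
```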
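
For reference, an Unsloth + TRL supervised fine-tuning recipe behind a model like this typically looks like the sketch below. The hyperparameters, the LoRA setup, and the assumption that the dataset exposes Alpaca-style `instruction`/`input`/`output` columns are illustrative only, not the actual training configuration of this model:

```python
from unsloth import FastLanguageModel
from transformers import TrainingArguments
from trl import SFTTrainer
from datasets import load_dataset

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name = "unsloth/mistral-7b-v0.3",
    max_seq_length = 2048,
    load_in_4bit = True,
)

# Attach LoRA adapters (rank/alpha values are illustrative)
model = FastLanguageModel.get_peft_model(
    model,
    r = 16,
    lora_alpha = 16,
    target_modules = ["q_proj", "k_proj", "v_proj", "o_proj",
                      "gate_proj", "up_proj", "down_proj"],
)

dataset = load_dataset("jpacifico/French-Alpaca-dataset-Instruct-110K", split = "train")

def to_text(example):
    # Assumes Alpaca-style columns; maps each row into the prompt template above
    return {"text": alpaca_prompt.format(example["instruction"],
                                         example["input"],
                                         example["output"]) + tokenizer.eos_token}

dataset = dataset.map(to_text)

trainer = SFTTrainer(
    model = model,
    tokenizer = tokenizer,
    train_dataset = dataset,
    dataset_text_field = "text",
    max_seq_length = 2048,
    args = TrainingArguments(
        per_device_train_batch_size = 2,
        gradient_accumulation_steps = 4,
        learning_rate = 2e-4,
        num_train_epochs = 1,
        bf16 = True,
        output_dir = "outputs",
    ),
)
trainer.train()
```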