
How to use:

```python
from transformers import TextStreamer
from unsloth import FastLanguageModel
import torch

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name = "AdrienB134/French-Alpaca-Croissant-1.3B-Instruct",
    max_seq_length = 4096,
    dtype = None,          # None lets Unsloth auto-detect the best dtype for your GPU
    load_in_4bit = True,   # 4-bit quantization to reduce memory usage
    fix_tokenizer = False,
)

alpaca_prompt = """Below is an instruction that describes a task, paired with an input that provides further context. Write a response that appropriately completes the request.

### Instruction:
{}

### Input:
{}

### Response:
{}"""

FastLanguageModel.for_inference(model)  # enable Unsloth's native fast inference

inputs = tokenizer(
[
    alpaca_prompt.format(
        "Continue la suite de Fibonacci", # instruction
        "1, 1, 2, 3, 5, 8", # input
        "", # output - leave this blank for generation!
    )
], return_tensors = "pt").to("cuda")

text_streamer = TextStreamer(tokenizer)
_ = model.generate(**inputs, streamer = text_streamer, max_new_tokens = 128)
```
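Because the generated sequence echoes the full Alpaca prompt, the model's answer can be isolated by splitting the decoded text on the `### Response:` marker. A minimal sketch (the `extract_response` helper is not part of the original card, just plain string handling):

```python
def extract_response(generated_text: str) -> str:
    """Return only the text after the last '### Response:' marker."""
    marker = "### Response:"
    # rpartition splits on the last occurrence, in case the marker
    # also appears inside the instruction or input.
    _, _, response = generated_text.rpartition(marker)
    return response.strip()

# Example with a hypothetical decoded output:
decoded = (
    "### Instruction:\nContinue la suite de Fibonacci\n\n"
    "### Input:\n1, 1, 2, 3, 5, 8\n\n"
    "### Response:\n13, 21, 34"
)
print(extract_response(decoded))  # → 13, 21, 34
```

In practice you would pass `tokenizer.batch_decode(outputs)[0]` (from a non-streaming `model.generate` call) to this helper instead of the hard-coded example string.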

Uploaded model

  • Developed by: AdrienB134
  • License: MIT
  • Finetuned from model: croissantllm/CroissantLLMBase

This Llama-architecture model was trained 2x faster with Unsloth and Hugging Face's TRL library.
