
🦙🧠 Miniplatypus-7b

This is a Llama-2-7b-chat model fine-tuned with QLoRA (4-bit precision) on the mlabonne/mini-platypus dataset, a subset of garage-bAInd/Open-Platypus.

🔧 Training

It was trained in a Google Colab notebook on a single T4 GPU. It is designed mainly for educational purposes, not for real inference workloads.
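
For reference, here is a minimal sketch of the kind of QLoRA setup this implies, using peft and trl (the older SFTTrainer API that accepts dataset_text_field directly). The base checkpoint name, LoRA hyperparameters, and dataset column are illustrative assumptions, not the exact recipe behind this model.

# pip install transformers accelerate peft trl bitsandbytes datasets
import torch
from datasets import load_dataset
from peft import LoraConfig
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig, TrainingArguments
from trl import SFTTrainer

base_model = "meta-llama/Llama-2-7b-chat-hf"  # assumed base checkpoint

# Quantize the frozen base model to 4-bit NF4 so it fits in T4 memory
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.float16,
)
model = AutoModelForCausalLM.from_pretrained(
    base_model, quantization_config=bnb_config, device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(base_model)
tokenizer.pad_token = tokenizer.eos_token

# Only small LoRA adapter matrices are trained on top of the quantized weights
peft_config = LoraConfig(
    r=64, lora_alpha=16, lora_dropout=0.1, bias="none", task_type="CAUSAL_LM"
)

# Assumes the dataset exposes a pre-formatted "text" column
dataset = load_dataset("mlabonne/mini-platypus", split="train")

trainer = SFTTrainer(
    model=model,
    train_dataset=dataset,
    peft_config=peft_config,
    dataset_text_field="text",
    max_seq_length=512,
    tokenizer=tokenizer,
    args=TrainingArguments(
        output_dir="./results",
        per_device_train_batch_size=4,
        learning_rate=2e-4,
        num_train_epochs=1,
        fp16=True,
        logging_steps=25,
    ),
)
trainer.train()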

💻 Usage

# pip install transformers accelerate

from transformers import AutoTokenizer
import transformers
import torch

model = "mlabonne/llama-2-7b-miniplatypus"
prompt = "What is a large language model?"

# Load the tokenizer that ships with the fine-tuned model
tokenizer = AutoTokenizer.from_pretrained(model)

# Build a float16 text-generation pipeline, spreading layers across available devices
pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    torch_dtype=torch.float16,
    device_map="auto",
)

# Wrap the prompt in Llama-2's [INST] chat format and sample a completion
sequences = pipeline(
    f'<s>[INST] {prompt} [/INST]',
    do_sample=True,
    top_k=10,
    num_return_sequences=1,
    eos_token_id=tokenizer.eos_token_id,
    max_length=200,
)
for seq in sequences:
    print(f"Result: {seq['generated_text']}")

Model size: 6.74B params · Tensor type: FP16 · Format: safetensors
