KULLM project

  • base model: mistralai/Mistral-7B-Instruct-v0.2

Datasets

  • KULLM dataset
  • hand-crafted instruction data

Implementation Code

from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer
)
import torch

repo = "heavytail/kullm-mistral"
model = AutoModelForCausalLM.from_pretrained(
    repo,
    torch_dtype=torch.float16,
    device_map="auto",  # requires the accelerate package
)
tokenizer = AutoTokenizer.from_pretrained(repo)
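The card does not document a prompt format, so a minimal sketch of building an input, assuming this fine-tune keeps the base Mistral-7B-Instruct-v0.2 `[INST] … [/INST]` template (not confirmed by the card; in practice, prefer `tokenizer.apply_chat_template`, which reads the template shipped with the tokenizer):

```python
def build_prompt(user_message: str) -> str:
    # Mistral-Instruct v0.2 wraps the user turn in [INST] tags;
    # assumed (not confirmed) to carry over to this fine-tune.
    return f"<s>[INST] {user_message} [/INST]"

prompt = build_prompt("Introduce yourself in one sentence.")

# With the model and tokenizer loaded as above:
# inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
# output = model.generate(**inputs, max_new_tokens=256)
# print(tokenizer.decode(output[0], skip_special_tokens=True))
```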

Initial upload: 2024/01/28 20:30

Model details

  • Model size: 7.24B params
  • Tensor type: FP16 (Safetensors)
