KULLM project

  • base model: mistralai/Mistral-7B-Instruct-v0.2

Datasets

  • KULLM dataset
  • hand-crafted instruction data

Implementation Code

from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer
)
import torch

repo = "heavytail/kullm-mistral-S"

# Load the fine-tuned weights in half precision and let the
# device_map place layers across the available devices.
model = AutoModelForCausalLM.from_pretrained(
    repo,
    torch_dtype=torch.float16,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained(repo)
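Once loaded, the model can be prompted for generation. Below is a minimal sketch; `build_prompt` is a hypothetical helper that mirrors the `[INST] ... [/INST]` chat format of the Mistral-Instruct-v0.2 base model (in practice, `tokenizer.apply_chat_template` produces the same format).

```python
def build_prompt(instruction: str) -> str:
    # Hypothetical helper: wraps a single user turn in the
    # Mistral-Instruct [INST] ... [/INST] format used by the
    # v0.2 base model.
    return f"<s>[INST] {instruction} [/INST]"

prompt = build_prompt("Explain the KULLM project in one sentence.")

# With the model/tokenizer loaded above, generation would look like:
# inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
# output_ids = model.generate(**inputs, max_new_tokens=256, do_sample=True)
# print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```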

Initial upload: 2024/01/28 21:00

Model details

  • Format: Safetensors
  • Model size: 7.24B params
  • Tensor type: FP16
