---
base_model: unsloth/mistral-7b-v0.3-bnb-4bit
language:
  - en
  - kg
license: apache-2.0
tags:
  - text-generation-inference
  - transformers
  - unsloth
  - mistral
  - trl
  - sft
datasets:
  - wikimedia/wikipedia
  - Svngoku/xP3x-Kongo
---

# Kongostral

Kongostral is a version of Mistral v0.3 that was continually pretrained on the Kikongo Wikipedia corpus and then fine-tuned on Kikongo translated text from xP3x using the Alpaca format (a sketch of the template is shown below). The goal is a state-of-the-art model that can reliably predict the next token in Kikongo sentences and produce instruction-based text generation.

- **Developed by:** Svngoku
- **License:** apache-2.0
- **Finetuned from model:** unsloth/mistral-7b-v0.3-bnb-4bit
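
The fine-tuning data follows an Alpaca-style template. The exact wording used during training is not published here, so the two-slot variant below (instruction and response, matching the inference call in the next section) is an assumption:

```python
# Assumed Alpaca-style template: the exact training prompt is not documented,
# but it must expose two slots (instruction, response) to match the
# generation call in the Unsloth example below.
alpaca_prompt = """Below is an instruction that describes a task. Write a response that appropriately completes the request.

### Instruction:
{}

### Response:
{}"""
```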

## Inference with Unsloth
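
The model and tokenizer must be loaded before running the generation snippet. A minimal sketch with Unsloth's `FastLanguageModel` (the `max_seq_length` value is an assumption):

```python
from unsloth import FastLanguageModel

# Load the 4-bit model and its tokenizer; max_seq_length is an assumed value
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="Svngoku/kongostral",
    max_seq_length=2048,
    load_in_4bit=True,
)
```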

```python
FastLanguageModel.for_inference(model)  # enable native 2x faster inference

inputs = tokenizer([
    alpaca_prompt.format(
        "Inki bima ke salaka ba gâteau ya pomme ya nsungi ?",  # instruction
        "",  # output - leave this blank for generation!
    )],
    return_tensors="pt").to("cuda")

outputs = model.generate(**inputs, max_new_tokens=64, use_cache=True)
print(tokenizer.batch_decode(outputs)[0])
```

## Inference with Transformers 🤗

```python
# In a notebook, install the required libraries first:
!pip install -q -U bitsandbytes
!pip install -q -U git+https://github.com/huggingface/transformers.git
!pip install -q -U git+https://github.com/huggingface/peft.git
!pip install -q -U git+https://github.com/huggingface/accelerate.git
```

```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig

# Load the model in 4-bit so it fits on a single GPU
quantization_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)

tokenizer = AutoTokenizer.from_pretrained("Svngoku/kongostral")
model = AutoModelForCausalLM.from_pretrained(
    "Svngoku/kongostral",
    quantization_config=quantization_config,
    device_map="auto",  # place the quantized weights on the available GPU
)

prompt = "Inki kele Nsangu ya kisika yai ?"

model_inputs = tokenizer([prompt], return_tensors="pt").to("cuda")

generated_ids = model.generate(**model_inputs, max_new_tokens=500, do_sample=True)
print(tokenizer.batch_decode(generated_ids)[0])
```
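
For interactive use, tokens can be printed as they are generated with the `TextStreamer` utility from transformers (a sketch reusing the `model`, `tokenizer`, and `model_inputs` objects from above):

```python
from transformers import TextStreamer

# Stream tokens to stdout as they are produced, skipping the echoed prompt
streamer = TextStreamer(tokenizer, skip_prompt=True)
_ = model.generate(**model_inputs, max_new_tokens=200, do_sample=True, streamer=streamer)
```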

## Observation

The model may produce results that are not as accurate as the user requests. Further alignment work is needed to obtain more reliable outputs.

## Note

This Mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.