
Kushtrim/Phi-3-mini-4k-instruct-sq

Model Overview

Kushtrim/Phi-3-mini-4k-instruct-sq is a fine-tuned version of the Phi-3-mini-4k-instruct model, tailored for Albanian-language tasks. It supports a context length of up to 4,096 tokens (4K), making it suitable for a variety of applications that require strong reasoning and high-quality output in Albanian.

Model Details

  • Model Name: Kushtrim/Phi-3-mini-4k-instruct-sq
  • Base Model: Phi-3-Mini-4K-Instruct
  • Context Length: 4,096 tokens (4K), shared by the prompt and the generated output (see the sketch after this list)
  • Language: Albanian
  • License: MIT License
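
The 4K context window is shared by the prompt and the generated tokens, so it is worth checking prompt length before generation. A minimal sketch, assuming only the tokenizer from the How to Use section below; the example prompt, the CONTEXT_LENGTH constant, and the token-budget check are illustrative:

from transformers import AutoTokenizer

hf_token = "hf_...."  # your Hugging Face access token (the repository is gated)

tokenizer = AutoTokenizer.from_pretrained("Kushtrim/Phi-3-mini-4k-instruct-sq", token=hf_token)

CONTEXT_LENGTH = 4096   # prompt tokens + generated tokens must fit in this window
max_new_tokens = 1024   # room reserved for the completion

# "Write a short summary of the city of Pristina." (illustrative Albanian prompt)
prompt = "Shkruaj një përmbledhje të shkurtër për qytetin e Prishtinës."
n_prompt_tokens = len(tokenizer(prompt)["input_ids"])

if n_prompt_tokens + max_new_tokens > CONTEXT_LENGTH:
    raise ValueError(f"Prompt uses {n_prompt_tokens} tokens; no room left for {max_new_tokens} new tokens.")
print(f"Prompt uses {n_prompt_tokens} of {CONTEXT_LENGTH} tokens")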

Limitations

  • Representation of Harms & Stereotypes: Potential for biased outputs reflecting real-world societal biases.
  • Inappropriate or Offensive Content: Risk of generating content that may be offensive or inappropriate in certain contexts.
  • Information Reliability: Possibility of producing inaccurate or outdated information.
  • Dataset Size: The Albanian dataset used for fine-tuning was not very large, which may affect the model's performance and coverage.

Responsible AI Considerations

Developers using this model should:

  • Evaluate and mitigate risks related to accuracy, safety, and fairness.
  • Ensure compliance with applicable laws and regulations.
  • Implement additional safeguards for high-risk scenarios and sensitive contexts.
  • Inform end-users that they are interacting with an AI system.
  • Use feedback mechanisms and context-grounding techniques such as retrieval-augmented generation (RAG) to improve output reliability (see the sketch after this list).
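
A minimal sketch of the grounding (RAG) idea from the last item: retrieved text is placed directly in the prompt so the model answers from the supplied context rather than from memory. The retrieval step, the snippet, and the grounded_messages name are illustrative assumptions, not part of this repository:

# Hypothetical retrieved snippet; in practice this would come from your own
# retriever (vector store, search index, etc.).
retrieved_context = (
    "Majlinda Kelmendi është xhudiste nga Peja. Ajo fitoi medaljen e artë "
    "olimpike në Rio de Janeiro në vitin 2016."
)
# "When did Majlinda Kelmendi win the Olympic gold medal?"
question = "Kur e fitoi Majlinda Kelmendi medaljen e artë olimpike?"

grounded_messages = [
    # System: "Answer only on the basis of the given context."
    {"role": "system", "content": "Përgjigju vetëm në bazë të kontekstit të dhënë."},
    {"role": "user", "content": f"Konteksti: {retrieved_context}\n\nPyetja: {question}"},
]

# grounded_messages can then be passed to the same text-generation pipeline
# shown in the How to Use section below.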

How to Use

!pip3 install -U transformers peft accelerate bitsandbytes

from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline
import torch

hf_token = "hf_...." # 

torch.random.manual_seed(0)

model = AutoModelForCausalLM.from_pretrained(
    "Kushtrim/Phi-3-mini-4k-instruct-sq",
    device_map="cuda",
    torch_dtype="auto",
    trust_remote_code=True,
    token=hf_token,
)

tokenizer = AutoTokenizer.from_pretrained("Kushtrim/Phi-3-mini-4k-instruct-sq", token=hf_token)

messages = [
    # System: "You are a very helpful, intelligent assistant."
    {"role": "system", "content": "Je një asistent inteligjent shumë i dobishëm."},
    # User: "Identify the names of the people in this article: 'Majlinda Kelmendi (born 9 May 1991) is an Albanian judoka from Peja, Kosovo.'"
    {"role": "user", "content": "Identifiko emrat e personave në këtë artikull 'Majlinda Kelmendi (lindi më 9 maj 1991), është një xhudiste shqiptare nga Peja, Kosovë.'"},
]

pipe = pipeline(
    "text-generation",
    model=model,
    tokenizer=tokenizer,
)

generation_args = {
    "max_new_tokens": 1024,
    "return_full_text": False,
    "temperature": 0.7,
    "do_sample": True,
}

output = pipe(messages, **generation_args)
print(output[0]['generated_text'])
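
If you prefer not to use the pipeline wrapper, the same chat template can be applied manually and the result passed to model.generate. A minimal sketch, assuming the model, tokenizer, and messages objects defined above:

# Apply the chat template and move the resulting input IDs to the model's device.
input_ids = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    return_tensors="pt",
).to(model.device)

with torch.no_grad():
    generated = model.generate(
        input_ids,
        max_new_tokens=1024,
        do_sample=True,
        temperature=0.7,
    )

# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(generated[0][input_ids.shape[-1]:], skip_special_tokens=True))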

Acknowledgements

This model builds on Phi-3-Mini-4K-Instruct, further fine-tuned for Albanian-language tasks. Special thanks to the developers and researchers who contributed to the original Phi-3 models.
