Model Card for UCCIX-Llama2-13B

The UCCIX-Llama2-13B Large Language Model (LLM) is an Irish-English bilingual model, capable of understanding both languages and outperforming much larger models on Irish language tasks. The model is based on Llama 2-13B, with its vocabulary expanded to include native Irish tokens, and continued pre-training on our collection of ~520M Irish tokens (available at https://huggingface.co/datasets/ReliableAI/Irish-Text-Collection).
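
The effect of the vocabulary expansion can be inspected directly through the tokenizer. The following is a minimal sketch; the Irish sentence and the printed statistics are purely illustrative, not benchmark figures:

from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("ReliableAI/UCCIX-Llama2-13B")

# An illustrative Irish sentence: "The weather is lovely today."
text = "Tá an aimsir go hálainn inniu."

tokens = tokenizer.tokenize(text)
print("Vocabulary size:", len(tokenizer))  # expanded beyond the original Llama 2 vocabulary
print(len(tokens), "tokens:", tokens)      # Irish words should map to fewer subword pieces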

UCCIX is a pioneering effort in the development of the first-ever open-source Irish-based LLM. You can find more details at: https://arxiv.org/abs/2405.13010

Access the instruction-tuned version at https://huggingface.co/ReliableAI/UCCIX-Llama2-13B-Instruct, and interact with it live at: https://aine.chat
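
If the instruction-tuned checkpoint ships a chat template in its tokenizer config (an assumption here; otherwise follow the prompt format documented on its own model card), it can be prompted along these lines:

from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

instruct_id = "ReliableAI/UCCIX-Llama2-13B-Instruct"
tokenizer = AutoTokenizer.from_pretrained(instruct_id)
model = AutoModelForCausalLM.from_pretrained(
    instruct_id,
    device_map="auto",
    torch_dtype=torch.float16,
)

# "Tell me about the Irish language." -- assumes the tokenizer defines a chat template
messages = [{"role": "user", "content": "Inis dom faoin nGaeilge."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(input_ids, max_new_tokens=200, do_sample=True, temperature=0.6)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))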

Run the model

Run the model with the transformers library:

from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

model_id = "ReliableAI/UCCIX-Llama2-13B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",
    torch_dtype=torch.float16,  # optional: load in 16-bit precision to reduce memory usage
)
model.eval()

prompt = "I love the environment."
input_ids = tokenizer(prompt, return_tensors="pt")["input_ids"].to(model.device)
generated_token_ids = model.generate(
    inputs=input_ids,
    max_new_tokens=100,
    do_sample=True,
    temperature=0.6,
    top_p=1,
)[0]
generated_text = tokenizer.decode(generated_token_ids, skip_special_tokens=True)
print(generated_text)
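
For quick experiments, the same model can also be wrapped in a text-generation pipeline. This is an equivalent sketch using the high-level Transformers API, with sampling parameters mirroring the snippet above:

from transformers import pipeline
import torch

generator = pipeline(
    "text-generation",
    model="ReliableAI/UCCIX-Llama2-13B",
    device_map="auto",
    torch_dtype=torch.float16,
)
outputs = generator(
    "I love the environment.",
    max_new_tokens=100,
    do_sample=True,
    temperature=0.6,
    top_p=1,
)
print(outputs[0]["generated_text"])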

Notice

As a pioneering effort, the UCCIX model does not currently include any moderation mechanisms. We look forward to collaborating with the community on ways to make the model respect guardrails, so that it can be deployed in environments that require moderated outputs.

Citation

@misc{tran2024uccix,
      title={UCCIX: Irish-eXcellence Large Language Model}, 
      author={Khanh-Tung Tran and Barry O'Sullivan and Hoang D. Nguyen},
      year={2024},
      eprint={2405.13010},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}