Quantization made by Richard Erkhov.

[Github](https://github.com/RichardErkhov)

[Discord](https://discord.gg/pvy7H8DZMG)

[Request more models](https://github.com/RichardErkhov/quant_request)


UCCIX-Llama2-13B - GGUF
- Model creator: https://huggingface.co/ReliableAI/
- Original model: https://huggingface.co/ReliableAI/UCCIX-Llama2-13B/


| Name | Quant method | Size |
| ---- | ---- | ---- |
| [UCCIX-Llama2-13B.Q2_K.gguf](https://huggingface.co/RichardErkhov/ReliableAI_-_UCCIX-Llama2-13B-gguf/blob/main/UCCIX-Llama2-13B.Q2_K.gguf) | Q2_K | 4.54GB |
| [UCCIX-Llama2-13B.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/ReliableAI_-_UCCIX-Llama2-13B-gguf/blob/main/UCCIX-Llama2-13B.IQ3_XS.gguf) | IQ3_XS | 5.01GB |
| [UCCIX-Llama2-13B.IQ3_S.gguf](https://huggingface.co/RichardErkhov/ReliableAI_-_UCCIX-Llama2-13B-gguf/blob/main/UCCIX-Llama2-13B.IQ3_S.gguf) | IQ3_S | 5.29GB |
| [UCCIX-Llama2-13B.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/ReliableAI_-_UCCIX-Llama2-13B-gguf/blob/main/UCCIX-Llama2-13B.Q3_K_S.gguf) | Q3_K_S | 5.29GB |
| [UCCIX-Llama2-13B.IQ3_M.gguf](https://huggingface.co/RichardErkhov/ReliableAI_-_UCCIX-Llama2-13B-gguf/blob/main/UCCIX-Llama2-13B.IQ3_M.gguf) | IQ3_M | 5.59GB |
| [UCCIX-Llama2-13B.Q3_K.gguf](https://huggingface.co/RichardErkhov/ReliableAI_-_UCCIX-Llama2-13B-gguf/blob/main/UCCIX-Llama2-13B.Q3_K.gguf) | Q3_K | 5.92GB |
| [UCCIX-Llama2-13B.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/ReliableAI_-_UCCIX-Llama2-13B-gguf/blob/main/UCCIX-Llama2-13B.Q3_K_M.gguf) | Q3_K_M | 5.92GB |
| [UCCIX-Llama2-13B.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/ReliableAI_-_UCCIX-Llama2-13B-gguf/blob/main/UCCIX-Llama2-13B.Q3_K_L.gguf) | Q3_K_L | 6.47GB |
| [UCCIX-Llama2-13B.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/ReliableAI_-_UCCIX-Llama2-13B-gguf/blob/main/UCCIX-Llama2-13B.IQ4_XS.gguf) | IQ4_XS | 6.56GB |
| [UCCIX-Llama2-13B.Q4_0.gguf](https://huggingface.co/RichardErkhov/ReliableAI_-_UCCIX-Llama2-13B-gguf/blob/main/UCCIX-Llama2-13B.Q4_0.gguf) | Q4_0 | 6.88GB |
| [UCCIX-Llama2-13B.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/ReliableAI_-_UCCIX-Llama2-13B-gguf/blob/main/UCCIX-Llama2-13B.IQ4_NL.gguf) | IQ4_NL | 6.92GB |
| [UCCIX-Llama2-13B.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/ReliableAI_-_UCCIX-Llama2-13B-gguf/blob/main/UCCIX-Llama2-13B.Q4_K_S.gguf) | Q4_K_S | 6.94GB |
| [UCCIX-Llama2-13B.Q4_K.gguf](https://huggingface.co/RichardErkhov/ReliableAI_-_UCCIX-Llama2-13B-gguf/blob/main/UCCIX-Llama2-13B.Q4_K.gguf) | Q4_K | 7.35GB |
| [UCCIX-Llama2-13B.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/ReliableAI_-_UCCIX-Llama2-13B-gguf/blob/main/UCCIX-Llama2-13B.Q4_K_M.gguf) | Q4_K_M | 7.35GB |
| [UCCIX-Llama2-13B.Q4_1.gguf](https://huggingface.co/RichardErkhov/ReliableAI_-_UCCIX-Llama2-13B-gguf/blob/main/UCCIX-Llama2-13B.Q4_1.gguf) | Q4_1 | 7.63GB |
| [UCCIX-Llama2-13B.Q5_0.gguf](https://huggingface.co/RichardErkhov/ReliableAI_-_UCCIX-Llama2-13B-gguf/blob/main/UCCIX-Llama2-13B.Q5_0.gguf) | Q5_0 | 8.38GB |
| [UCCIX-Llama2-13B.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/ReliableAI_-_UCCIX-Llama2-13B-gguf/blob/main/UCCIX-Llama2-13B.Q5_K_S.gguf) | Q5_K_S | 8.38GB |
| [UCCIX-Llama2-13B.Q5_K.gguf](https://huggingface.co/RichardErkhov/ReliableAI_-_UCCIX-Llama2-13B-gguf/blob/main/UCCIX-Llama2-13B.Q5_K.gguf) | Q5_K | 8.62GB |
| [UCCIX-Llama2-13B.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/ReliableAI_-_UCCIX-Llama2-13B-gguf/blob/main/UCCIX-Llama2-13B.Q5_K_M.gguf) | Q5_K_M | 8.62GB |
| [UCCIX-Llama2-13B.Q5_1.gguf](https://huggingface.co/RichardErkhov/ReliableAI_-_UCCIX-Llama2-13B-gguf/blob/main/UCCIX-Llama2-13B.Q5_1.gguf) | Q5_1 | 9.13GB |
| [UCCIX-Llama2-13B.Q6_K.gguf](https://huggingface.co/RichardErkhov/ReliableAI_-_UCCIX-Llama2-13B-gguf/blob/main/UCCIX-Llama2-13B.Q6_K.gguf) | Q6_K | 9.97GB |
| [UCCIX-Llama2-13B.Q8_0.gguf](https://huggingface.co/RichardErkhov/ReliableAI_-_UCCIX-Llama2-13B-gguf/blob/main/UCCIX-Llama2-13B.Q8_0.gguf) | Q8_0 | 12.92GB |

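The sizes above reflect the usual GGUF trade-off between footprint and fidelity. As a rough sanity check (an illustration, not part of the original card), you can estimate the effective bits per weight of a quant from its file size and the model's ~13B parameter count, treating GB as 10^9 bytes:

```python
# Rough bits-per-weight estimate for a GGUF file (illustrative only).
# Assumes ~13.0e9 parameters for Llama 2-13B; sizes come from the table above.

PARAMS = 13.0e9  # approximate parameter count

def bits_per_weight(size_gb: float, params: float = PARAMS) -> float:
    """Convert a file size in GB (10^9 bytes) to approximate bits per parameter."""
    return size_gb * 1e9 * 8 / params

for name, size_gb in [("Q2_K", 4.54), ("Q4_K_M", 7.35), ("Q8_0", 12.92)]:
    print(f"{name}: ~{bits_per_weight(size_gb):.2f} bits/weight")
```

The estimate is slightly inflated because the file also stores metadata and quantization scales, but it is a quick way to compare quants.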



Original model description:
---
license: apache-2.0
datasets:
- ReliableAI/Irish-Text-Collection
language:
- en
- ga
---

# Model Card for UCCIX-Llama2-13B


The UCCIX-Llama2-13B Large Language Model (LLM) is an Irish-English bilingual model, capable of understanding both languages and outperforming much larger models on Irish language tasks.
The model is based on Llama 2-13B, with vocabulary expansion to include native Irish tokens, and additional continued pre-training on our collection of ~520M Irish tokens (available at https://huggingface.co/datasets/ReliableAI/Irish-Text-Collection).

UCCIX is a pioneering effort toward the first-ever open-source Irish-based LLM. You can find more details at: https://arxiv.org/abs/2405.13010

Access the instruction-tuned version at https://huggingface.co/ReliableAI/UCCIX-Llama2-13B-Instruct, and interact with it live at: https://aine.chat

## Run the model

Run the model with the transformers library:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

model_id = "ReliableAI/UCCIX-Llama2-13B-Instruct"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",
    torch_dtype=torch.float16,  # optional: load in 16-bit precision to reduce memory usage
)
model.eval()

prompt = "I love the environment."
input_ids = tokenizer(prompt, return_tensors="pt")["input_ids"]
generated_token_ids = model.generate(
    inputs=input_ids,
    max_new_tokens=100,
    do_sample=True,
    temperature=0.6,
    top_p=1,
)[0]
generated_text = tokenizer.decode(generated_token_ids, skip_special_tokens=True)
```
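The `do_sample` settings above control how each next token is drawn: `temperature=0.6` sharpens the token distribution before sampling, making likely tokens even more likely. A minimal sketch of that scaling (an illustration in plain Python, not part of the model card):

```python
import math

def softmax_with_temperature(logits, temperature=1.0):
    """Scale logits by 1/temperature, then apply a numerically stable softmax."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract the max so exp() cannot overflow
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.0, 0.1]  # toy logits for three candidate tokens
cool = softmax_with_temperature(logits, temperature=0.6)
plain = softmax_with_temperature(logits, temperature=1.0)
print(cool[0] > plain[0])  # lower temperature concentrates probability on the top token
```

With `top_p=1`, no nucleus truncation is applied, so sampling draws from this full temperature-scaled distribution.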

## Notice

As a pioneering effort, the UCCIX model does not have any moderation mechanisms at the moment. We anticipate collaborating with the community to refine the model's adherence to restrictions so that it can be implemented in settings that demand moderated outcomes.

## Citation
```
@misc{tran2024uccix,
      title={UCCIX: Irish-eXcellence Large Language Model},
      author={Khanh-Tung Tran and Barry O'Sullivan and Hoang D. Nguyen},
      year={2024},
      eprint={2405.13010},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}
```