Quantization made by Richard Erkhov.

[GitHub](https://github.com/RichardErkhov)

[Discord](https://discord.gg/pvy7H8DZMG)

[Request more models](https://github.com/RichardErkhov/quant_request)


# ALMA-13B-R - GGUF
- Model creator: https://huggingface.co/haoranxu/
- Original model: https://huggingface.co/haoranxu/ALMA-13B-R/


| Name | Quant method | Size |
| ---- | ---- | ---- |
| [ALMA-13B-R.Q2_K.gguf](https://huggingface.co/RichardErkhov/haoranxu_-_ALMA-13B-R-gguf/blob/main/ALMA-13B-R.Q2_K.gguf) | Q2_K | 4.52GB |
| [ALMA-13B-R.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/haoranxu_-_ALMA-13B-R-gguf/blob/main/ALMA-13B-R.IQ3_XS.gguf) | IQ3_XS | 4.99GB |
| [ALMA-13B-R.IQ3_S.gguf](https://huggingface.co/RichardErkhov/haoranxu_-_ALMA-13B-R-gguf/blob/main/ALMA-13B-R.IQ3_S.gguf) | IQ3_S | 5.27GB |
| [ALMA-13B-R.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/haoranxu_-_ALMA-13B-R-gguf/blob/main/ALMA-13B-R.Q3_K_S.gguf) | Q3_K_S | 5.27GB |
| [ALMA-13B-R.IQ3_M.gguf](https://huggingface.co/RichardErkhov/haoranxu_-_ALMA-13B-R-gguf/blob/main/ALMA-13B-R.IQ3_M.gguf) | IQ3_M | 5.57GB |
| [ALMA-13B-R.Q3_K.gguf](https://huggingface.co/RichardErkhov/haoranxu_-_ALMA-13B-R-gguf/blob/main/ALMA-13B-R.Q3_K.gguf) | Q3_K | 5.9GB |
| [ALMA-13B-R.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/haoranxu_-_ALMA-13B-R-gguf/blob/main/ALMA-13B-R.Q3_K_M.gguf) | Q3_K_M | 5.9GB |
| [ALMA-13B-R.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/haoranxu_-_ALMA-13B-R-gguf/blob/main/ALMA-13B-R.Q3_K_L.gguf) | Q3_K_L | 6.45GB |
| [ALMA-13B-R.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/haoranxu_-_ALMA-13B-R-gguf/blob/main/ALMA-13B-R.IQ4_XS.gguf) | IQ4_XS | 6.54GB |
| [ALMA-13B-R.Q4_0.gguf](https://huggingface.co/RichardErkhov/haoranxu_-_ALMA-13B-R-gguf/blob/main/ALMA-13B-R.Q4_0.gguf) | Q4_0 | 6.86GB |
| [ALMA-13B-R.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/haoranxu_-_ALMA-13B-R-gguf/blob/main/ALMA-13B-R.IQ4_NL.gguf) | IQ4_NL | 6.9GB |
| [ALMA-13B-R.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/haoranxu_-_ALMA-13B-R-gguf/blob/main/ALMA-13B-R.Q4_K_S.gguf) | Q4_K_S | 6.91GB |
| [ALMA-13B-R.Q4_K.gguf](https://huggingface.co/RichardErkhov/haoranxu_-_ALMA-13B-R-gguf/blob/main/ALMA-13B-R.Q4_K.gguf) | Q4_K | 7.33GB |
| [ALMA-13B-R.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/haoranxu_-_ALMA-13B-R-gguf/blob/main/ALMA-13B-R.Q4_K_M.gguf) | Q4_K_M | 7.33GB |
| [ALMA-13B-R.Q4_1.gguf](https://huggingface.co/RichardErkhov/haoranxu_-_ALMA-13B-R-gguf/blob/main/ALMA-13B-R.Q4_1.gguf) | Q4_1 | 7.61GB |
| [ALMA-13B-R.Q5_0.gguf](https://huggingface.co/RichardErkhov/haoranxu_-_ALMA-13B-R-gguf/blob/main/ALMA-13B-R.Q5_0.gguf) | Q5_0 | 8.36GB |
| [ALMA-13B-R.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/haoranxu_-_ALMA-13B-R-gguf/blob/main/ALMA-13B-R.Q5_K_S.gguf) | Q5_K_S | 8.36GB |
| [ALMA-13B-R.Q5_K.gguf](https://huggingface.co/RichardErkhov/haoranxu_-_ALMA-13B-R-gguf/blob/main/ALMA-13B-R.Q5_K.gguf) | Q5_K | 8.6GB |
| [ALMA-13B-R.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/haoranxu_-_ALMA-13B-R-gguf/blob/main/ALMA-13B-R.Q5_K_M.gguf) | Q5_K_M | 8.6GB |
| [ALMA-13B-R.Q5_1.gguf](https://huggingface.co/RichardErkhov/haoranxu_-_ALMA-13B-R-gguf/blob/main/ALMA-13B-R.Q5_1.gguf) | Q5_1 | 9.1GB |
| [ALMA-13B-R.Q6_K.gguf](https://huggingface.co/RichardErkhov/haoranxu_-_ALMA-13B-R-gguf/blob/main/ALMA-13B-R.Q6_K.gguf) | Q6_K | 9.95GB |
| [ALMA-13B-R.Q8_0.gguf](https://huggingface.co/RichardErkhov/haoranxu_-_ALMA-13B-R-gguf/blob/main/ALMA-13B-R.Q8_0.gguf) | Q8_0 | 12.88GB |
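
If you only need one of the quantized files, a minimal sketch of fetching it with the `huggingface_hub` Python library (the Q4_K_M pick is arbitrary; any filename from the table works):
```
# Minimal sketch: download one GGUF file from this repo via huggingface_hub.
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="RichardErkhov/haoranxu_-_ALMA-13B-R-gguf",
    filename="ALMA-13B-R.Q4_K_M.gguf",  # any filename from the table above
)
print(path)  # local path, usable with llama.cpp-compatible runtimes
```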


Original model description:
---
license: mit
---
**[ALMA-R](https://arxiv.org/abs/2401.08417)** builds upon the [ALMA models](https://arxiv.org/abs/2309.11674), with further LoRA fine-tuning using our proposed **Contrastive Preference Optimization (CPO)**, as opposed to the supervised fine-tuning used in ALMA. CPO fine-tuning requires our [triplet preference data](https://huggingface.co/datasets/haoranxu/ALMA-R-Preference) for preference learning. ALMA-R can now match or even exceed GPT-4 and the WMT competition winners!
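
As a rough sketch of the idea (our paraphrase of the CPO objective; notation ours and simplified), CPO combines a reference-free preference term over preferred/dispreferred translation pairs with a likelihood term on the preferred translation:

$$
\mathcal{L}_{\text{CPO}} = -\,\mathbb{E}_{(x, y_w, y_l)}\left[\log \sigma\big(\beta \log \pi_\theta(y_w \mid x) - \beta \log \pi_\theta(y_l \mid x)\big)\right] - \mathbb{E}_{(x, y_w)}\left[\log \pi_\theta(y_w \mid x)\right]
$$

where $y_w$ and $y_l$ are the preferred and dispreferred translations of source $x$ from the triplet data; see the paper for the exact formulation.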
```
@misc{xu2024contrastive,
      title={Contrastive Preference Optimization: Pushing the Boundaries of LLM Performance in Machine Translation},
      author={Haoran Xu and Amr Sharaf and Yunmo Chen and Weiting Tan and Lingfeng Shen and Benjamin Van Durme and Kenton Murray and Young Jin Kim},
      year={2024},
      eprint={2401.08417},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}
```
```
@misc{xu2023paradigm,
      title={A Paradigm Shift in Machine Translation: Boosting Translation Performance of Large Language Models},
      author={Haoran Xu and Young Jin Kim and Amr Sharaf and Hany Hassan Awadalla},
      year={2023},
      eprint={2309.11674},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}
```
# Download ALMA(-R) Models and Dataset 🚀

We release six translation models presented in the paper:
- ALMA-7B
- ALMA-7B-LoRA
- **ALMA-7B-R (NEW!)**: Further LoRA fine-tuning upon ALMA-7B-LoRA with contrastive preference optimization.
- ALMA-13B
- ALMA-13B-LoRA
- **ALMA-13B-R (NEW!)**: Further LoRA fine-tuning upon ALMA-13B-LoRA with contrastive preference optimization (BEST MODEL!).

Model checkpoints are released on Hugging Face:
| Models | Base Model Link | LoRA Link |
|:-------------:|:---------------:|:---------:|
| ALMA-7B | [haoranxu/ALMA-7B](https://huggingface.co/haoranxu/ALMA-7B) | - |
| ALMA-7B-LoRA | [haoranxu/ALMA-7B-Pretrain](https://huggingface.co/haoranxu/ALMA-7B-Pretrain) | [haoranxu/ALMA-7B-Pretrain-LoRA](https://huggingface.co/haoranxu/ALMA-7B-Pretrain-LoRA) |
| **ALMA-7B-R (NEW!)** | [haoranxu/ALMA-7B-R (LoRA merged)](https://huggingface.co/haoranxu/ALMA-7B-R) | - |
| ALMA-13B | [haoranxu/ALMA-13B](https://huggingface.co/haoranxu/ALMA-13B) | - |
| ALMA-13B-LoRA | [haoranxu/ALMA-13B-Pretrain](https://huggingface.co/haoranxu/ALMA-13B-Pretrain) | [haoranxu/ALMA-13B-Pretrain-LoRA](https://huggingface.co/haoranxu/ALMA-13B-Pretrain-LoRA) |
| **ALMA-13B-R (NEW!)** | [haoranxu/ALMA-13B-R (LoRA merged)](https://huggingface.co/haoranxu/ALMA-13B-R) | - |

**Note that `ALMA-7B-Pretrain` and `ALMA-13B-Pretrain` are NOT translation models. They have only been through stage 1 monolingual fine-tuning (20B tokens for the 7B model and 12B tokens for the 13B model) and must be used together with their LoRA models; see the sketch below.**
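
For those LoRA variants, a minimal sketch of pairing a Pretrain checkpoint with its adapter via the `peft` library (the pairing itself is stated above; the exact loading code here is our assumption):
```
# Hypothetical sketch: attach the stage-2 LoRA adapter to the stage-1 checkpoint.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

# Stage-1 monolingual checkpoint -- NOT a translation model on its own
base = AutoModelForCausalLM.from_pretrained(
    "haoranxu/ALMA-13B-Pretrain", torch_dtype=torch.float16, device_map="auto"
)
# Adding the LoRA adapter yields the actual ALMA-13B-LoRA translation model
model = PeftModel.from_pretrained(base, "haoranxu/ALMA-13B-Pretrain-LoRA")
tokenizer = AutoTokenizer.from_pretrained("haoranxu/ALMA-13B-Pretrain", padding_side="left")
```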

Datasets used by ALMA and ALMA-R are also released on Hugging Face now (NEW!):
| Datasets | Train / Validation | Test |
|:-------------:|:---------------:|:---------:|
| Human-Written Parallel Data (ALMA) | [train and validation](https://huggingface.co/datasets/haoranxu/ALMA-Human-Parallel) | [WMT'22](https://huggingface.co/datasets/haoranxu/WMT22-Test) |
| Triplet Preference Data | [train](https://huggingface.co/datasets/haoranxu/ALMA-R-Preference) | [WMT'22](https://huggingface.co/datasets/haoranxu/WMT22-Test) and [WMT'23](https://huggingface.co/datasets/haoranxu/WMT23-Test) |
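
A minimal sketch of pulling the preference data with the `datasets` library (configuration and split names are assumptions; check the dataset cards for the actual per-language-pair configs):
```
# Minimal sketch: load the triplet preference data with Hugging Face datasets.
# A language-pair config may be required; "zh-en" here is a guess -- consult
# the dataset card for the real config names.
from datasets import load_dataset

prefs = load_dataset("haoranxu/ALMA-R-Preference", "zh-en")
print(prefs)
```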


A quick start for using our best system (ALMA-13B-R) for translation, e.g. translating "我爱机器翻译。" ("I love machine translation.") into English:
```
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the model (ALMA-13B-R already has its LoRA weights merged in)
model = AutoModelForCausalLM.from_pretrained("haoranxu/ALMA-13B-R", torch_dtype=torch.float16, device_map="auto")
tokenizer = AutoTokenizer.from_pretrained("haoranxu/ALMA-13B-R", padding_side='left')

# Add the source sentence into the prompt template
prompt = "Translate this from Chinese to English:\nChinese: 我爱机器翻译。\nEnglish:"
input_ids = tokenizer(prompt, return_tensors="pt", padding=True, max_length=40, truncation=True).input_ids.cuda()

# Translation (assumes a CUDA GPU is available)
with torch.no_grad():
    generated_ids = model.generate(input_ids=input_ids, num_beams=5, max_new_tokens=20, do_sample=True, temperature=0.6, top_p=0.9)
outputs = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)
print(outputs)
```
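
Since this repository hosts GGUF quantizations, the same prompt can also be run through a llama.cpp-compatible runtime. A minimal sketch with the `llama-cpp-python` package (the package choice and sampling settings are our assumptions, not part of the original instructions):
```
# Hypothetical sketch: run a GGUF quant of ALMA-13B-R with llama-cpp-python.
from llama_cpp import Llama

llm = Llama(model_path="ALMA-13B-R.Q4_K_M.gguf", n_ctx=512)  # path from the download step above
prompt = "Translate this from Chinese to English:\nChinese: 我爱机器翻译。\nEnglish:"
out = llm(prompt, max_tokens=20, temperature=0.6, top_p=0.9)
print(out["choices"][0]["text"])
```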

Please find more details in our [GitHub repository](https://github.com/fe1ixxu/ALMA).