RichardErkhov committed f36d401 (parent: dca2e73): uploaded readme

Files changed: README.md (+105 lines)
Quantization made by Richard Erkhov.

[Github](https://github.com/RichardErkhov)

[Discord](https://discord.gg/pvy7H8DZMG)

[Request more models](https://github.com/RichardErkhov/quant_request)


Curie-7B-v1 - GGUF
- Model creator: https://huggingface.co/szymonrucinski/
- Original model: https://huggingface.co/szymonrucinski/Curie-7B-v1/

| Name | Quant method | Size |
| ---- | ---- | ---- |
| [Curie-7B-v1.Q2_K.gguf](https://huggingface.co/RichardErkhov/szymonrucinski_-_Curie-7B-v1-gguf/blob/main/Curie-7B-v1.Q2_K.gguf) | Q2_K | 2.53GB |
| [Curie-7B-v1.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/szymonrucinski_-_Curie-7B-v1-gguf/blob/main/Curie-7B-v1.IQ3_XS.gguf) | IQ3_XS | 2.81GB |
| [Curie-7B-v1.IQ3_S.gguf](https://huggingface.co/RichardErkhov/szymonrucinski_-_Curie-7B-v1-gguf/blob/main/Curie-7B-v1.IQ3_S.gguf) | IQ3_S | 2.96GB |
| [Curie-7B-v1.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/szymonrucinski_-_Curie-7B-v1-gguf/blob/main/Curie-7B-v1.Q3_K_S.gguf) | Q3_K_S | 2.95GB |
| [Curie-7B-v1.IQ3_M.gguf](https://huggingface.co/RichardErkhov/szymonrucinski_-_Curie-7B-v1-gguf/blob/main/Curie-7B-v1.IQ3_M.gguf) | IQ3_M | 3.06GB |
| [Curie-7B-v1.Q3_K.gguf](https://huggingface.co/RichardErkhov/szymonrucinski_-_Curie-7B-v1-gguf/blob/main/Curie-7B-v1.Q3_K.gguf) | Q3_K | 3.28GB |
| [Curie-7B-v1.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/szymonrucinski_-_Curie-7B-v1-gguf/blob/main/Curie-7B-v1.Q3_K_M.gguf) | Q3_K_M | 3.28GB |
| [Curie-7B-v1.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/szymonrucinski_-_Curie-7B-v1-gguf/blob/main/Curie-7B-v1.Q3_K_L.gguf) | Q3_K_L | 3.56GB |
| [Curie-7B-v1.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/szymonrucinski_-_Curie-7B-v1-gguf/blob/main/Curie-7B-v1.IQ4_XS.gguf) | IQ4_XS | 3.67GB |
| [Curie-7B-v1.Q4_0.gguf](https://huggingface.co/RichardErkhov/szymonrucinski_-_Curie-7B-v1-gguf/blob/main/Curie-7B-v1.Q4_0.gguf) | Q4_0 | 3.83GB |
| [Curie-7B-v1.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/szymonrucinski_-_Curie-7B-v1-gguf/blob/main/Curie-7B-v1.IQ4_NL.gguf) | IQ4_NL | 3.87GB |
| [Curie-7B-v1.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/szymonrucinski_-_Curie-7B-v1-gguf/blob/main/Curie-7B-v1.Q4_K_S.gguf) | Q4_K_S | 3.86GB |
| [Curie-7B-v1.Q4_K.gguf](https://huggingface.co/RichardErkhov/szymonrucinski_-_Curie-7B-v1-gguf/blob/main/Curie-7B-v1.Q4_K.gguf) | Q4_K | 4.07GB |
| [Curie-7B-v1.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/szymonrucinski_-_Curie-7B-v1-gguf/blob/main/Curie-7B-v1.Q4_K_M.gguf) | Q4_K_M | 4.07GB |
| [Curie-7B-v1.Q4_1.gguf](https://huggingface.co/RichardErkhov/szymonrucinski_-_Curie-7B-v1-gguf/blob/main/Curie-7B-v1.Q4_1.gguf) | Q4_1 | 4.24GB |
| [Curie-7B-v1.Q5_0.gguf](https://huggingface.co/RichardErkhov/szymonrucinski_-_Curie-7B-v1-gguf/blob/main/Curie-7B-v1.Q5_0.gguf) | Q5_0 | 4.65GB |
| [Curie-7B-v1.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/szymonrucinski_-_Curie-7B-v1-gguf/blob/main/Curie-7B-v1.Q5_K_S.gguf) | Q5_K_S | 4.65GB |
| [Curie-7B-v1.Q5_K.gguf](https://huggingface.co/RichardErkhov/szymonrucinski_-_Curie-7B-v1-gguf/blob/main/Curie-7B-v1.Q5_K.gguf) | Q5_K | 4.78GB |
| [Curie-7B-v1.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/szymonrucinski_-_Curie-7B-v1-gguf/blob/main/Curie-7B-v1.Q5_K_M.gguf) | Q5_K_M | 4.78GB |
| [Curie-7B-v1.Q5_1.gguf](https://huggingface.co/RichardErkhov/szymonrucinski_-_Curie-7B-v1-gguf/blob/main/Curie-7B-v1.Q5_1.gguf) | Q5_1 | 5.07GB |
| [Curie-7B-v1.Q6_K.gguf](https://huggingface.co/RichardErkhov/szymonrucinski_-_Curie-7B-v1-gguf/blob/main/Curie-7B-v1.Q6_K.gguf) | Q6_K | 5.53GB |
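
The table links point at file pages; a direct download uses the repository's `resolve/main` path instead of `blob/main`. A minimal sketch of building that URL for a chosen quant, with a hedged note on loading the file afterwards (the `llama-cpp-python` bindings are one option among any llama.cpp-compatible runtime; the prompt and quant choice are illustrative):

```python
# Base repository URL taken from the table above.
REPO = "https://huggingface.co/RichardErkhov/szymonrucinski_-_Curie-7B-v1-gguf"

def gguf_url(quant):
    """Direct-download URL for one of the quant files listed above."""
    return f"{REPO}/resolve/main/Curie-7B-v1.{quant}.gguf"

print(gguf_url("Q4_K_M"))
# Once downloaded, the file can be loaded, e.g. with llama-cpp-python:
#   from llama_cpp import Llama
#   llm = Llama(model_path="Curie-7B-v1.Q4_K_M.gguf", n_ctx=2048)
#   out = llm("Napisz jedno zdanie po polsku:", max_tokens=64)
#   print(out["choices"][0]["text"])
```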



Original model description:
---
license: apache-2.0
language:
- pl
library_name: transformers
tags:
- polish
- nlp
---
<style>
@import url('https://fonts.googleapis.com/css2?family=Pacifico&display=swap');
.markdown-custom-font {
  font-family: "Pacifico", cursive;
  font-weight: 400;
  font-style: normal;
}
</style>

<div class="markdown-custom-font" align="center">
<img src="logo.png" alt="Logo" width="300">
Curie-7B-v1
</div>

## Introduction
This research demonstrates the potential of fine-tuning English Large Language Models (LLMs) for Polish text generation. By employing Language Adaptive Pre-training (LAPT) on a high-quality dataset of 3.11 GB (276 million Polish tokens) and subsequent fine-tuning on the [KLEJ challenges](https://klejbenchmark.com), the `Curie-7B-v1` model achieves remarkable performance. It not only generates Polish text with the lowest perplexity of 3.02 among decoder-based models but also closely rivals the best Polish encoder-decoder models, with a minimal performance gap on 8 out of 9 tasks. This was accomplished using about 2-3% of the dataset size typically required, showcasing the method's efficiency. The model is now open-source, contributing to the community's collaborative progress.
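
The perplexity figure quoted above is the exponential of the mean per-token negative log-likelihood. A minimal illustration (the token log-probabilities here are made up for the example, not taken from the model):

```python
import math

def perplexity(token_logprobs):
    """Perplexity = exp of the mean negative log-likelihood per token."""
    nll = -sum(token_logprobs) / len(token_logprobs)
    return math.exp(nll)

# Tokens each assigned probability 1/3 give a perplexity of exactly 3.0,
# in the same ballpark as the 3.02 reported for Curie-7B-v1.
print(round(perplexity([math.log(1 / 3)] * 4), 2))  # → 3.0
```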
### Language Adaptive Pre-training Dataset
The LAPT phase utilized the [SpeakLeash dataset](http://speakleash.org/en/), a comprehensive collection of Polish texts, focusing on a highest-quality extract of approximately 2 GB drawn from the original 1 TB.
## Hardware and Software Stack
Experiments were conducted on a server featuring an [NVIDIA RTX 6000 Ada GPU](https://www.nvidia.com/content/dam/en-zz/Solutions/design-visualization/rtx-6000/proviz-print-rtx6000-datasheet-web-2504660.pdf) with 48 GB of VRAM and an AMD EPYC 7742 processor, running Ubuntu with PyTorch 2.0 and CUDA 12.2.
## The Adaptive Pre-training
The model was trained with the AdamW optimizer, using hyperparameters chosen to optimize performance. Training was stopped after one epoch, which took a total of 106 hours, as overfitting set in beyond that point.
### Hyperparameters
- **lora_rank:** 32
- **lora_dropout:** 0.05
- **lora_alpha:** 16
- **warmup_steps:** 0.1
- **learning_rate:** 2.5 x 10^-5
- **neftune_noise_alpha:** 2
- **batch_size:** 128
- **max_seq_len:** 128

## Fine-tuning for KLEJ Downstream Tasks
`Curie-7B-v1` came exceptionally close to the best baseline models on 8 of 9 KLEJ tasks while using significantly less data, showcasing its efficiency and capability in handling a variety of NLP tasks in Polish.
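
Adapting the base model to one of these tasks can be sketched with the Transformers sequence-classification API. This is an assumption-laden sketch, not the paper's actual setup: the label set matches PolEmo2.0's four sentiment classes, and the classification head it attaches is freshly initialized and still needs fine-tuning on task data.

```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# Four classes is an assumption based on PolEmo2.0's label set.
POLEMO_LABELS = ["negative", "neutral", "positive", "ambiguous"]

model_id = "szymonrucinski/Curie-7B-v1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(
    model_id, num_labels=len(POLEMO_LABELS)
)
```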

### Performance Highlights
- **NKJP-NER:** 93.4
- **CDSC-E:** 92.2
- **CDSC-R:** 94.9
- **CBD:** 49.0 (demonstrating room for improvement)
- **PolEmo2.0-IN:** 92.7
- **PolEmo2.0-OUT:** 80.0
- **DYK:** 76.2
- **PSC:** 98.6
- **AR:** 86.8

## Conclusions
The `Curie-7B-v1` model, through LAPT, matches foundational models on eight downstream tasks while using significantly less data. Its versatility in generating Polish text, together with its ability to be transformed into classifiers, regressors, and AI assistants, highlights the method's effectiveness. This open-source Polish LLM provides a foundation for developing efficient business solutions.

## Research Paper
Work and details regarding this model are described in the research paper [Efficient Language Adaptive Pre-training: Extending State-of-the-Art Large Language Models for Polish](https://arxiv.org/abs/2402.09759) by Szymon Ruciński.