
Quantization made by Richard Erkhov.

Github

Discord

Request more models

Curie-7B-v1 - GGUF

| Name | Quant method | Size |
|------|--------------|------|
| Curie-7B-v1.Q2_K.gguf | Q2_K | 2.53GB |
| Curie-7B-v1.IQ3_XS.gguf | IQ3_XS | 2.81GB |
| Curie-7B-v1.IQ3_S.gguf | IQ3_S | 2.96GB |
| Curie-7B-v1.Q3_K_S.gguf | Q3_K_S | 2.95GB |
| Curie-7B-v1.IQ3_M.gguf | IQ3_M | 3.06GB |
| Curie-7B-v1.Q3_K.gguf | Q3_K | 3.28GB |
| Curie-7B-v1.Q3_K_M.gguf | Q3_K_M | 3.28GB |
| Curie-7B-v1.Q3_K_L.gguf | Q3_K_L | 3.56GB |
| Curie-7B-v1.IQ4_XS.gguf | IQ4_XS | 3.67GB |
| Curie-7B-v1.Q4_0.gguf | Q4_0 | 3.83GB |
| Curie-7B-v1.IQ4_NL.gguf | IQ4_NL | 3.87GB |
| Curie-7B-v1.Q4_K_S.gguf | Q4_K_S | 3.86GB |
| Curie-7B-v1.Q4_K.gguf | Q4_K | 4.07GB |
| Curie-7B-v1.Q4_K_M.gguf | Q4_K_M | 4.07GB |
| Curie-7B-v1.Q4_1.gguf | Q4_1 | 4.24GB |
| Curie-7B-v1.Q5_0.gguf | Q5_0 | 4.65GB |
| Curie-7B-v1.Q5_K_S.gguf | Q5_K_S | 4.65GB |
| Curie-7B-v1.Q5_K.gguf | Q5_K | 4.78GB |
| Curie-7B-v1.Q5_K_M.gguf | Q5_K_M | 4.78GB |
| Curie-7B-v1.Q5_1.gguf | Q5_1 | 5.07GB |
| Curie-7B-v1.Q6_K.gguf | Q6_K | 5.53GB |
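
Any of these files can be run with a GGUF-compatible runtime. Below is a minimal sketch using llama-cpp-python (an assumption about the runtime; llama.cpp, Ollama, and similar tools work equally well). The file path and prompt are placeholders.

```python
# Minimal sketch: run a quantized Curie-7B-v1 GGUF file locally.
# Assumes llama-cpp-python is installed and the Q4_K_M file has been downloaded.
from llama_cpp import Llama

llm = Llama(model_path="Curie-7B-v1.Q4_K_M.gguf", n_ctx=2048)  # placeholder path
out = llm("Napisz jedno zdanie o Marii Skłodowskiej-Curie.", max_tokens=64)
print(out["choices"][0]["text"])
```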

Original model description:

license: apache-2.0
language:
- pl
library_name: transformers
tags:
- polish
- nlp

[Curie-7B-v1 logo]

Introduction

This research demonstrates the potential of fine-tuning English Large Language Models (LLMs) for Polish text generation. By employing Language Adaptive Pre-training (LAPT) on a high-quality dataset of 3.11 GB (276 million Polish tokens) and subsequently fine-tuning on the KLEJ challenges, the Curie-7B-v1 model achieves remarkable performance. It not only generates Polish text with the lowest perplexity (3.02) among decoder-based models, but also closely rivals the best Polish encoder-decoder models, with a minimal performance gap on 8 of 9 tasks. This was accomplished using roughly 2-3% of the dataset size typically required, showcasing the method's efficiency. The model is now open-source, contributing to the community's collaborative progress.

Language Adaptive Pre-training Dataset

The LAPT phase utilized the SpeakLeash dataset, a comprehensive collection of Polish texts, focusing on the highest-quality extract of approximately 2 GB from the original 1 TB.

Hardware and Software Stack

Experiments were conducted on a server with an NVIDIA RTX A6000 Ada GPU (48 GB of VRAM) and an AMD EPYC 7742 processor, running Ubuntu with PyTorch 2.0 and CUDA 12.2.

The Adaptive Pre-training

The model was trained with the AdamW optimizer, with hyperparameters chosen to optimize performance. Training ran for a single epoch, taking 106 hours in total; training beyond one epoch led to the onset of overfitting.

Hyperparameters

  • lora_rank: 32
  • lora_dropout: 0.05
  • lora_alpha: 16
  • warmup_steps: 0.1
  • learning_rate: 2.5 x 10^-5
  • neftune_noise_alpha: 2
  • batch_size: 128
  • max_seq_len: 128
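
For orientation, the sketch below shows how these hyperparameters map onto a standard LoRA causal-LM run with transformers and peft. It is a minimal sketch, not the authors' exact recipe: the base checkpoint, corpus, and output path are placeholders, and "warmup_steps: 0.1" is interpreted here as a warmup ratio.

```python
# A minimal, hedged sketch of a LAPT-style LoRA run with the hyperparameters
# listed above. Base model, corpus, and paths are illustrative placeholders.
# Requires a recent transformers (for neftune_noise_alpha) plus peft and datasets.
from datasets import Dataset
from peft import LoraConfig, get_peft_model
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

BASE = "mistralai/Mistral-7B-v0.1"  # assumed English base model
tokenizer = AutoTokenizer.from_pretrained(BASE)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(BASE)

# LoRA adapter matching the card: rank 32, alpha 16, dropout 0.05.
model = get_peft_model(model, LoraConfig(
    r=32, lora_alpha=16, lora_dropout=0.05, task_type="CAUSAL_LM"))

# Tiny placeholder corpus; the real run used ~276M Polish tokens (SpeakLeash).
corpus = Dataset.from_dict({"text": ["Przykładowy polski tekst do adaptacji."]})
tokenized = corpus.map(
    lambda batch: tokenizer(batch["text"], truncation=True, max_length=128),
    batched=True, remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="curie-lapt",           # placeholder output path
        learning_rate=2.5e-5,
        warmup_ratio=0.1,                  # "warmup_steps: 0.1" read as a ratio
        per_device_train_batch_size=128,   # in practice, gradient accumulation
        num_train_epochs=1,
        neftune_noise_alpha=2,
    ),
    train_dataset=tokenized,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```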

Fine-tuning for KLEJ Downstream Tasks

Curie-7B-v1 came exceptionally close to the best baseline models on 8 of 9 KLEJ tasks while using significantly less data, showcasing its efficiency and its capability across a variety of Polish NLP tasks.
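
As an illustration of how the model can be adapted to a classification-style downstream task, the sketch below attaches a sequence-classification head to the upstream full-precision checkpoint. The repository id and label count are assumptions for illustration, not the exact KLEJ fine-tuning setup.

```python
# Hedged sketch: attach a classification head to Curie-7B-v1 for a KLEJ-style task.
# The checkpoint id and num_labels are illustrative assumptions.
from transformers import AutoModelForSequenceClassification, AutoTokenizer

ckpt = "szymonrucinski/Curie-7B-v1"   # assumed upstream (non-GGUF) repository
tokenizer = AutoTokenizer.from_pretrained(ckpt)
model = AutoModelForSequenceClassification.from_pretrained(
    ckpt, num_labels=4)               # e.g. PolEmo2.0 uses four sentiment classes
model.config.pad_token_id = tokenizer.eos_token_id  # causal LMs often lack a pad token
```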

Performance Highlights

  • NKJP-NER: 93.4
  • CDSC-E: 92.2
  • CDSC-R: 94.9
  • CBD: 49.0 (demonstrating room for improvement)
  • PolEmo2.0-IN: 92.7
  • PolEmo2.0-OUT: 80.0
  • DYK: 76.2
  • PSC: 98.6
  • AR: 86.8

Conclusions

The Curie-7B-v1 model, through LAPT, matches foundational models on eight downstream tasks while using significantly less data. Its versatility in generating Polish text, together with its ability to be adapted into classifiers, regressors, and AI assistants, highlights the method's effectiveness. This open-source Polish LLM provides a foundation for developing efficient business solutions.

Research Paper

The work and details behind this model are described in the research paper Efficient Language Adaptive Pre-training: Extending State-of-the-Art Large Language Models for Polish by Szymon Ruciński.
