
GPT-2Geo: Georgian Language Model 🇬🇪

โš ๏ธ This GPT-2Geo model is not fully trained due to hardware constraints. It has been trained on a subset of 1000 samples for 20 epochs. The model's capabilities and performance are indicative within these limitations. Future iterations may benefit from extended training on more extensive datasets. Please be mindful of these training constraints when utilizing the model.

Overview

GPT-2Geo is a language model tailored for the Georgian language, built on OpenAI's GPT-2 architecture. It is designed for a range of natural language processing tasks, including text generation and understanding. The training script is available on GitHub.

Features

  • Georgian Language Model: Specifically trained to understand and generate text in the Georgian language.
  • GPT-2 Architecture: Built upon OpenAI's GPT-2, providing a versatile and efficient language model.
  • Easy Integration: Seamless integration with the Hugging Face Transformers library.

Training Information

Environment:

  • GPU: Nvidia T4 (15GB)
  • Model Memory Requirement: Minimum 13.5GB

Training Configuration:

  • Number of Epochs: 20
  • Time Consumed: 49 minutes

(Figure: training loss)

Training Progress:

The GPT-2Geo model was trained on an Nvidia T4 GPU with 15GB of dedicated memory, which satisfies the model's minimum memory requirement of 13.5GB.
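
As a quick sanity check, the available GPU memory can be queried through torch before loading the model. This is a minimal sketch; the 13.5GB threshold is the requirement stated above:

import torch

if torch.cuda.is_available():
    total_gb = torch.cuda.get_device_properties(0).total_memory / 1024 ** 3
    print(f'GPU: {torch.cuda.get_device_name(0)} ({total_gb:.1f} GB)')
    assert total_gb >= 13.5, 'Not enough GPU memory to train GPT-2Geo'
else:
    print('No CUDA device found; training on CPU is impractical')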

The training configuration comprised 20 epochs, allowing the model to learn iteratively from the dataset; the full run completed in approximately 49 minutes.

For detailed insights into the model's performance, refer to the training logs, which capture key metrics such as validation loss over epochs.
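
For orientation, a comparable fine-tuning setup can be expressed with the Hugging Face Trainer. The following is a hedged sketch, not the actual script from the linked repository: the batch size is an assumption, and train_dataset / eval_dataset stand in for a tokenized Georgian corpus and its validation split:

from transformers import (GPT2LMHeadModel, ElectraTokenizerFast, Trainer,
                          TrainingArguments, DataCollatorForLanguageModeling)

model = GPT2LMHeadModel.from_pretrained('Kuduxaaa/gpt2-geo')
tokenizer = ElectraTokenizerFast.from_pretrained('Kuduxaaa/gpt2-geo')

args = TrainingArguments(
    output_dir='gpt2-geo-checkpoints',  # placeholder path
    num_train_epochs=20,                # matches the configuration above
    per_device_train_batch_size=8,      # assumption; not stated in this card
    evaluation_strategy='epoch',        # record validation loss each epoch
    logging_strategy='epoch',
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=train_dataset,  # placeholder: tokenized Georgian corpus
    eval_dataset=eval_dataset,    # placeholder: held-out validation split
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()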

Ensure that your GPU environment is correctly configured so that the available hardware is fully utilized during training. Note that the text data must be preprocessed before training begins; a preprocessing script will be added in the future, and a rough sketch of one possible approach follows.
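
Since the preprocessing step is not yet published, here is a minimal sketch of one possible approach; the cleaning rules (whitespace normalization and filtering to lines containing Georgian script) and the corpus filename are assumptions, not the author's actual pipeline:

import re

GEORGIAN = re.compile(r'[\u10A0-\u10FF]')  # Georgian Unicode block

def preprocess(lines):
    # Collapse runs of whitespace, then keep only lines that
    # actually contain Georgian characters
    cleaned = (re.sub(r'\s+', ' ', line).strip() for line in lines)
    return [line for line in cleaned if GEORGIAN.search(line)]

with open('corpus.txt', encoding='utf-8') as f:  # placeholder corpus file
    samples = preprocess(f)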

Example Usage

import torch
from transformers import GPT2LMHeadModel, ElectraTokenizerFast

model_name = 'Kuduxaaa/gpt2-geo'
model = GPT2LMHeadModel.from_pretrained(model_name)
tokenizer = ElectraTokenizerFast.from_pretrained(model_name)

# Move the model to the GPU when one is available
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
model.to(device)

prompt = 'ქართულ მითოლოგიაში '  # "In Georgian mythology"

input_ids = tokenizer.encode(prompt, return_tensors='pt').to(device)
output = model.generate(
    input_ids,
    max_length=100,
    num_beams=5,
    no_repeat_ngram_size=2,
    # Note: top_k, top_p, and temperature only take effect when
    # do_sample=True; with pure beam search they are ignored.
    top_k=50,
    top_p=0.95,
    temperature=0.7,
)

result = tokenizer.decode(output[0], skip_special_tokens=True)
print(result)
# ქართულ მითოლოგიაში, მითების პერსონაჟები და. მითები დაკავშირებული მითური წარმოშობას, რომელიც წარმოიშვა მითი გარემოც, რომ ამ პერიოდში და სხვა სხვა. აგრეთვე მითიდან წარმოადგენს მითებთან ერთად, როგორც საშუალებები, საფუძვლად წარმოების წარსულში. ლიტერატურა წარმომავლობებს მით
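
Generation can also be run through the Transformers pipeline API. This sketch passes the model and tokenizer instances explicitly, since the repository pairs a GPT-2 model with an Electra tokenizer and automatic tokenizer resolution is not guaranteed:

from transformers import pipeline, GPT2LMHeadModel, ElectraTokenizerFast

model_name = 'Kuduxaaa/gpt2-geo'
generator = pipeline(
    'text-generation',
    model=GPT2LMHeadModel.from_pretrained(model_name),
    tokenizer=ElectraTokenizerFast.from_pretrained(model_name),
)

print(generator('ქართულ მითოლოგიაში ', max_length=50, num_beams=5)[0]['generated_text'])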

Acknowledgments

This project is made possible by the contributions of Nika Kudukashvili and the open-source community. Special thanks to OpenAI for the GPT-2 architecture and to jnz/electra-ka for the Georgian tokenizer.
