
# 🇮🇳 Telugu Gemma 7B Model 💬

Welcome to the Telugu Gemma 7B model! This model brings the power of the Gemma 7B architecture to the Telugu language, enabling engaging conversational AI in Telugu. 🌟

## 🚀 Model Highlights

- **Developed by:** Anudeep Adi 👨‍💻
- **Model architecture:** Gemma 7B 🧠
- **Language:** Telugu 🇮🇳
- **License:** Apache 2.0 ⚖️
- **Base model:** unsloth/gemma-7b-bnb-4bit 🌿
- **Dataset:** telugu_teknium_GPTeacher_general_instruct_filtered_romanized 📚
- **Finetuning steps:** 60 🏃‍♂️
- **Finetuning dataset size:** 43,614 examples 📈
- **Tags:** text-generation-inference, transformers, unsloth, gemma, trl 🏷️

๐Ÿ—ฃ๏ธ Usage

Want to engage in Telugu conversations with an AI? This model makes it easy! Simply provide an instruction and optional input prompt in the Alpaca format:

```
Below is an instruction that describes a task, paired with an input that provides further context. Write a response that appropriately completes the request.

### Instruction:
{instruction}

### Input:
{input}

### Response:
{output}
```

Replace {instruction} and {input} with your desired Telugu text and leave the response blank — the model generates a fluent continuation in the {output} position! ✨
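The prompt format above can be sketched as a small helper. This is a minimal, hedged example: the repo id `anudeepadi/telugu_gemma` in the commented generation call is an assumption based on this card's location, and `build_prompt` is a hypothetical helper, not part of any library.

```python
# Alpaca-style template from the usage section above.
ALPACA_TEMPLATE = (
    "Below is an instruction that describes a task, paired with an input "
    "that provides further context. Write a response that appropriately "
    "completes the request.\n\n"
    "### Instruction:\n{instruction}\n\n"
    "### Input:\n{input}\n\n"
    "### Response:\n"
)

def build_prompt(instruction: str, input_text: str = "") -> str:
    """Format a Telugu instruction/input pair; the response is left
    blank so the model fills in the {output} continuation."""
    return ALPACA_TEMPLATE.format(instruction=instruction, input=input_text)

prompt = build_prompt("తెలుగులో ఒక చిన్న కథ రాయండి.")  # "Write a short story in Telugu."

# Generation sketch (requires downloading the model weights;
# the repo id below is an assumption):
# from transformers import pipeline
# pipe = pipeline("text-generation", model="anudeepadi/telugu_gemma")
# print(pipe(prompt, max_new_tokens=200)[0]["generated_text"])
```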

๐Ÿ‹๏ธโ€โ™€๏ธ Training Procedure

This model didn't skip leg day! It was finetuned on the telugu_teknium_GPTeacher_general_instruct_filtered_romanized dataset containing a whopping 43,614 examples of Telugu instructions and outputs. 💪

But how did we make training lightning fast? By using Unsloth and the TRL library from Hugging Face, enabling 2x faster training! ⚡ We also added LoRA adapters for efficient finetuning of the 7B-parameter model and used mixed-precision training with bfloat16.
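To see why LoRA adapters make finetuning a 7B model tractable, here is a minimal NumPy sketch of the underlying idea: the frozen base weight W gets a trainable low-rank update (alpha / r) · B·A. The layer size, rank, and scaling factor below are illustrative assumptions, not this model's actual finetuning configuration.

```python
import numpy as np

d_out, d_in, r = 4096, 4096, 16     # hypothetical layer size and LoRA rank
alpha = 32                          # hypothetical scaling factor

W = np.zeros((d_out, d_in))         # frozen base weight (stand-in values)
A = np.random.randn(r, d_in) * 0.01 # trainable down-projection
B = np.zeros((d_out, r))            # trainable up-projection, zero-initialized
                                    # so training starts from the base model

def lora_forward(x, W, A, B, alpha, r):
    """Base layer output plus the scaled low-rank update."""
    return x @ W.T + (alpha / r) * (x @ A.T @ B.T)

# Only A and B are trained — a tiny fraction of the full matrix:
full_params = W.size
lora_params = A.size + B.size
print(f"trainable: {lora_params:,} vs full: {full_params:,}")
```

At these illustrative shapes the adapters hold 131,072 parameters versus 16,777,216 for the full matrix — under 1% — which is what makes finetuning a 7B model on modest hardware feasible.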

And the best part? Unsloth provides optimized inference code to run the model 2x faster. Talk about having your cake and eating it too! 🍰

โš ๏ธ Limitations

While this model is impressive, it's not perfect. It was trained on a relatively small Telugu dataset, so its knowledge and conversational abilities in Telugu are limited compared to larger language models. It may occasionally make factual errors or inconsistent statements.

So use this model as an experimental prototype and have fun chatting, but don't rely on it for mission-critical Telugu conversations just yet! 😉