Text Generation · Transformers · Safetensors · Telugu · English · Inference Endpoints
Telugu-LLM-Labs committed
Commit dfd36d0 · 1 Parent(s): d28504c

Update README.md

Files changed (1)
  1. README.md +7 -7
README.md CHANGED
@@ -6,14 +6,18 @@ license_link: LICENSE
 
 # Telugu-gemma-7b-finetuned-sft
 
-This model is based on [google/gemma-7b](https://huggingface.co/google/gemma-7b) and has been finetuned on instruction datasets:
+This model is based on [google/gemma-7b](https://huggingface.co/google/gemma-7b) and has been LoRA-finetuned on instruction datasets:
 1. [yahma_alpaca_cleaned_telugu_filtered_and_romanized](https://huggingface.co/datasets/Telugu-LLM-Labs/yahma_alpaca_cleaned_telugu_filtered_and_romanized)
 2. [teknium_GPTeacher_general_instruct_telugu_filtered_and_romanized](https://huggingface.co/datasets/Telugu-LLM-Labs/teknium_GPTeacher_general_instruct_telugu_filtered_and_romanized)
 
-The model is finetuned using the [unsloth](https://github.com/unslothai/unsloth) library, and we provide inference code using it for faster inference.
+The model is finetuned using the [unsloth](https://github.com/unslothai/unsloth) library, and we provide inference code using it for faster inference. Alternatively, you can use the Hugging Face Transformers library for inference.
 
 The model is finetuned only on native Telugu SFT data from the above datasets; we will update the model with transliterated data in the coming days.
 
+# Installation
+
+`!pip install "unsloth[colab-ampere] @ git+https://github.com/unslothai/unsloth.git"`
+
 # Input Text Format
 
 ```
@@ -65,8 +69,4 @@ response = tokenizer.batch_decode(outputs)
 
 # Developers:
 
-The model is a collaborative effort by [Ravi Theja](https://twitter.com/ravithejads) and [Ramsri Goutham](https://twitter.com/ramsri_goutham). Feel free to DM either of us if you have any questions.
-
-# Note:
-
-The model has demonstrated robust capabilities in our testing. If it does not meet your expectations, it may benefit from fine-tuning with suitable SFT datasets. Please do not hesitate to contact us for assistance; we are eager to support you.
+The model is a collaborative effort by [Ravi Theja](https://twitter.com/ravithejads) and [Ramsri Goutham](https://twitter.com/ramsri_goutham). Feel free to DM either of us if you have any questions.
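For reference, here is a minimal sketch of the Hugging Face Transformers inference path the updated README mentions as an alternative to unsloth. The repository ID and the placeholder prompt below are assumptions; for real use, build the prompt from the template shown under "# Input Text Format" in the full README.

```python
# Hedged sketch: inference with plain Hugging Face Transformers (no unsloth).
# The repo ID is an assumption based on the model name and the Telugu-LLM-Labs
# org; the prompt is a placeholder.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Telugu-LLM-Labs/Telugu-gemma-7b-finetuned-sft"  # assumed repo ID

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # bf16 to fit the 7B model on a single modern GPU
    device_map="auto",
)

# Replace with a prompt built from the "# Input Text Format" template.
prompt = "..."

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256)
response = tokenizer.batch_decode(outputs, skip_special_tokens=True)[0]
print(response)
```

The README's own example presumably loads the model through unsloth's `FastLanguageModel` for faster generation; the plain Transformers path above trades that speed-up for a smaller dependency footprint.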