---
language:
- en
- hi
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
base_model: unsloth/llama-3-8b-Instruct-bnb-4bit
datasets:
- cmu_hinglish_dog
---
# Loss Curve
![image/png](https://cdn-uploads.huggingface.co/production/uploads/65187b234965add2b08b2990/f-qJHUQGxN9yaXym_5u4V.png)
# Evaluation Loss
![image/png](https://cdn-uploads.huggingface.co/production/uploads/65187b234965add2b08b2990/6VsNF_rgDjXlubd4x8dMk.png)
# Colab Files:
- `Model_Use.ipynb`: shows how to use the model
- `Hinglish_train_lamma_3_8b_instruct_2_epoch.ipynb`: shows how the model was trained
# Inference:
```bash
!pip install "unsloth[colab-new] @ git+https://github.com/unslothai/unsloth.git"
!pip install --no-deps xformers trl peft accelerate bitsandbytes
```
```python
from unsloth import FastLanguageModel
import torch
max_seq_length = 2048
dtype = None # None for auto detection. Float16 for Tesla T4, V100, Bfloat16 for Ampere+
load_in_4bit = True # Use 4bit quantization to reduce memory usage. Can be False.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name = "suyash2739/English_to_Hinglish_lamma_3_8b_instruct",
    max_seq_length = max_seq_length,
    dtype = dtype,
    load_in_4bit = load_in_4bit,
)
FastLanguageModel.for_inference(model)  # enable Unsloth's faster inference path
```
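If you prefer not to depend on Unsloth at inference time, the checkpoint can likely also be loaded with plain Transformers and bitsandbytes. This is a minimal sketch, assuming the repository hosts merged model weights rather than standalone LoRA adapters:
```python
# Hedged alternative: plain Transformers + bitsandbytes 4-bit loading.
# Assumes the repo contains merged weights (adjust if it only has LoRA adapters).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(
    load_in_4bit = True,
    bnb_4bit_compute_dtype = torch.float16,  # use torch.bfloat16 on Ampere+ GPUs
)
tokenizer = AutoTokenizer.from_pretrained("suyash2739/English_to_Hinglish_lamma_3_8b_instruct")
model = AutoModelForCausalLM.from_pretrained(
    "suyash2739/English_to_Hinglish_lamma_3_8b_instruct",
    quantization_config = bnb_config,
    device_map = "auto",
)
```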
```python
prompt = """Translate the input from English to Hinglish to give the response.
### Input:
{}
### Response:
{}"""
```
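For reference, this is the exact text the model sees once the template is filled; the empty response slot marks where generation begins:
```python
# Fill the template with an input sentence and an empty response slot.
print(prompt.format("How are you?", ""))
# Translate the input from English to Hinglish to give the response.
# ### Input:
# How are you?
# ### Response:
```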
```python
from transformers import TextStreamer

inputs = tokenizer(
    [
        prompt.format(
            """This is a fine-tuned Hinglish translation model using Llama 3.""",  # input
            "",  # output - leave this blank for generation!
        )
    ],
    return_tensors = "pt",
).to("cuda")

text_streamer = TextStreamer(tokenizer)
```
```python
_ = model.generate(**inputs, streamer = text_streamer, max_new_tokens = 2048)
# Expected output: ye ek fine-tuned Hinglish translation model hai jisme Llama 3 use kiya gaya hai
```
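If you want the translation back as a string instead of streaming it to stdout, drop the streamer and decode only the newly generated tokens (a minimal sketch; the slice assumes the prompt tokens are echoed at the front of the output, the default for causal LMs):
```python
# Generate without a streamer and decode just the model's continuation.
outputs = model.generate(**inputs, max_new_tokens = 2048)
new_tokens = outputs[0][inputs["input_ids"].shape[1]:]  # drop the echoed prompt
response = tokenizer.decode(new_tokens, skip_special_tokens = True)
print(response)
```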
# Uploaded model
- **Developed by:** suyash2739
- **License:** apache-2.0
- **Finetuned from model:** unsloth/llama-3-8b-Instruct-bnb-4bit

This Llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.

[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)