shantipriya committed on
Commit
4c2bb6b
1 Parent(s): 70de0d5

Update README.md

Files changed (1)
  1. README.md +48 -4
README.md CHANGED
@@ -9,10 +9,11 @@ model-index:
 
 # Llama3_8B_Odia_Unsloth
 
-Llama3_8B_Odia_Unsloth is a fine-tuned Odia large language model with 8 billion parameters, and it is based on Llama3. The model is fine-tuned on the 171k Odia instruction set including domain and cultural information.
-The fine-tuning uses Unsloth for faster training.
 
-For more details about the model, data, training procedure, and evaluations, go through the blog [post]().
 
 ## Model Description
 * Model type: An 8B fine-tuned model
@@ -20,6 +21,49 @@ For more details about the model, data, training procedure, and evaluations, go
 * License: Llama3
 
 
 
 ### Citation Information
 
@@ -38,7 +82,7 @@ If you find this model useful, please consider giving 👏 and citing:
 
 ### Contributions
 
 - Sambit Sekhar
-- Shantipriya Parida
 - Debasish Dhal
 - Shakshi Panwar
 
 
 # Llama3_8B_Odia_Unsloth
 
+Llama3_8B_Odia_Unsloth is a fine-tuned Odia large language model with 8 billion parameters, based on Llama3. The model is fine-tuned on a [171k Odia instruction set](https://huggingface.co/datasets/OdiaGenAI/all_combined_odia_171k) that covers domain-specific and cultural knowledge.
 
+The fine-tuning process uses Unsloth to speed up training; a sketch of a typical Unsloth fine-tuning setup follows below.
+
+For more details about the model, data, training procedure, and evaluations, see the blog [post](https://www.odiagenai.org/blog/odiagenai-releases-llama3-fine-tuned-model-for-the-odia-language).
 
 ## Model Description
 * Model type: An 8B fine-tuned model
 
 * License: Llama3
 
 
+## Inference
+
+A sample inference script is shown below.
+
+```
+from unsloth import FastLanguageModel
+import torch
+
+max_seq_length = 2048
+dtype = None  # None for auto-detection; float16 for Tesla T4/V100, bfloat16 for Ampere+
+load_in_4bit = True  # Use 4-bit quantization to reduce memory usage. Can be False.
+
+model, tokenizer = FastLanguageModel.from_pretrained(
+    model_name = "OdiaGenAI-LLM/Llama3_8B_Odia_Unsloth",
+    max_seq_length = max_seq_length,
+    dtype = dtype,
+    load_in_4bit = load_in_4bit,
+)
+
+alpaca_prompt = """Below is an instruction that describes a task, paired with an input that provides further context. Write a response that appropriately completes the request.
+
+### Instruction:
+{}
+
+### Input:
+{}
+
+### Response:
+{}"""
+
+FastLanguageModel.for_inference(model)  # enable Unsloth's faster inference mode
+inputs = tokenizer(
+    [
+        alpaca_prompt.format(
+            "ଓଡିଶାର ରାଜଧାନୀ କ’ଣ?",  # instruction
+            "",  # input
+            "",  # output - leave this blank for generation!
+        )
+    ],
+    return_tensors = "pt",
+).to("cuda")
+
+outputs = model.generate(**inputs, max_new_tokens = 512, use_cache = True)
+print(tokenizer.batch_decode(outputs, skip_special_tokens = True)[0])
+```
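+
+To stream the response token by token instead of waiting for the full generation, the standard transformers `TextStreamer` can be passed to `generate`; this is an optional variation on the script above, not part of the original example.
+
+```
+from transformers import TextStreamer
+
+# Prints decoded tokens to stdout as they are generated; skip_prompt hides the prompt echo
+text_streamer = TextStreamer(tokenizer, skip_prompt = True)
+_ = model.generate(**inputs, streamer = text_streamer, max_new_tokens = 512, use_cache = True)
+```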
 
 ### Citation Information
 
 
 ### Contributions
 
+- Dr. Shantipriya Parida
 - Sambit Sekhar
 - Debasish Dhal
 - Shakshi Panwar