tahauzumcu committed a042dde (1 parent: be44e8d): Update README.md

Files changed (1): README.md (+101 −3)

---
library_name: peft
base_model: unsloth/llama-3-8b-bnb-4bit
---

# VeriUS LLM 8b v0.2

VeriUS LLM is a generative model fine-tuned from Llama-3-8B (via Unsloth) on Turkish instruction data.

## Model Details

Base Model: unsloth/llama-3-8b-bnb-4bit

Training Dataset: A combined dataset of Alpaca, Dolly, and Bactrian-X, translated to Turkish.

Training Method: Fine-tuned with Unsloth, which uses QLoRA; ORPO was used as the training objective.
### Training Arguments

- PER_DEVICE_BATCH_SIZE: 2
- GRADIENT_ACCUMULATION_STEPS: 4
- WARMUP_RATIO: 0.03
- NUM_EPOCHS: 2
- LR: 0.000008
- OPTIM: "adamw_8bit"
- WEIGHT_DECAY: 0.01
- LR_SCHEDULER_TYPE: "linear"
- BETA: 0.1

### PEFT Arguments

- RANK: 128
- TARGET_MODULES:
  - "q_proj"
  - "k_proj"
  - "v_proj"
  - "o_proj"
  - "gate_proj"
  - "up_proj"
  - "down_proj"
- LORA_ALPHA: 256
- LORA_DROPOUT: 0
- BIAS: "none"
- GRADIENT_CHECKPOINT: 'unsloth'
- USE_RSLORA: false
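
For reference, the following is a minimal sketch of how the hyperparameters above could be wired into Unsloth and an ORPO trainer. It is not the original training script: the card does not state which ORPO implementation was used (TRL's `ORPOTrainer`, commonly paired with Unsloth, is assumed here), the preference dataset is a placeholder since the translated data is not published in this card, and `output_dir` is illustrative.

```python
from datasets import Dataset
from trl import ORPOConfig, ORPOTrainer
from unsloth import FastLanguageModel

# Load the 4-bit base model named in the card.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/llama-3-8b-bnb-4bit",
    max_seq_length=2048,
    dtype=None,
    load_in_4bit=True,
)

# PEFT arguments from the list above.
model = FastLanguageModel.get_peft_model(
    model,
    r=128,                                  # RANK
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
    lora_alpha=256,
    lora_dropout=0,
    bias="none",
    use_gradient_checkpointing="unsloth",   # GRADIENT_CHECKPOINT
    use_rslora=False,
)

# Placeholder preference data: ORPO trains on (prompt, chosen, rejected)
# triples. The actual translated dataset is not published in this card.
train_dataset = Dataset.from_dict({
    "prompt": ["Türkiye'nin başkenti neresidir?"],
    "chosen": ["Ankara."],
    "rejected": ["İstanbul."],
})

# Training arguments from the list above.
args = ORPOConfig(
    per_device_train_batch_size=2,
    gradient_accumulation_steps=4,
    warmup_ratio=0.03,
    num_train_epochs=2,
    learning_rate=8e-6,                     # LR: 0.000008
    optim="adamw_8bit",
    weight_decay=0.01,
    lr_scheduler_type="linear",
    beta=0.1,
    output_dir="outputs",                   # assumption, not from the card
)

trainer = ORPOTrainer(
    model=model,
    args=args,
    train_dataset=train_dataset,
    tokenizer=tokenizer,
)
trainer.train()
```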

## Usage

This model was trained with Unsloth, which can also be used for fast inference. For Unsloth installation, please refer to: https://github.com/unslothai/unsloth

This model can also be loaded with AutoModelForCausalLM; a sketch is shown after the Unsloth example below.

How to load with Unsloth:
```python
from unsloth import FastLanguageModel

max_seq_len = 2048
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="VeriUs/VeriUS-LLM-8b-v0.2",
    max_seq_length=max_seq_len,
    dtype=None,          # None auto-detects (float16 or bfloat16)
    load_in_4bit=True,
)
FastLanguageModel.for_inference(model)  # enable native 2x faster inference

# Alpaca-style prompt template, in Turkish: "Below is an instruction that
# describes a task, paired with an input that provides further context.
# Write a response that appropriately completes the request."
prompt_template = """Aşağıda, görevini açıklayan bir talimat ve daha fazla bağlam sağlayan bir girdi verilmiştir. İsteği uygun bir şekilde tamamlayan bir yanıt yaz.

### Talimat:
{}

### Girdi:
{}

### Yanıt:
"""


def generate_output(instruction, user_input):
    inputs = tokenizer(
        [prompt_template.format(instruction, user_input)],
        return_tensors="pt",
    ).to("cuda")

    outputs = model.generate(**inputs, max_length=max_seq_len, do_sample=True)

    # Strip the prompt tokens from the output; comment this out to see them.
    outputs = outputs[:, inputs["input_ids"].shape[1]:]

    return tokenizer.decode(outputs[0], skip_special_tokens=True)


# "Which is Turkey's most populous city?"
response = generate_output("Türkiye'nin en kalabalık şehri hangisidir?", "")
print(response)
```
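
As noted above, the model can also be loaded without Unsloth. Here is an untested sketch using PEFT's `AutoPeftModelForCausalLM` (the repo carries `library_name: peft` metadata); recent `transformers` versions can likewise load the adapter directly through `AutoModelForCausalLM`. The generation settings are illustrative, and `prompt_template` is reused from the Unsloth example above.

```python
import torch
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

# Load the adapter together with its base model.
model = AutoPeftModelForCausalLM.from_pretrained(
    "VeriUs/VeriUS-LLM-8b-v0.2",
    torch_dtype=torch.bfloat16,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained("VeriUs/VeriUS-LLM-8b-v0.2")

# prompt_template as defined in the Unsloth example above.
inputs = tokenizer(
    prompt_template.format("Türkiye'nin en kalabalık şehri hangisidir?", ""),
    return_tensors="pt",
).to(model.device)

outputs = model.generate(**inputs, max_length=2048, do_sample=True)
outputs = outputs[:, inputs["input_ids"].shape[1]:]  # strip the prompt
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```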

## Bias, Risks, and Limitations

Primary Function and Application: VeriUS LLM, an autoregressive language model, is primarily designed to predict the next token in a text string. While often used for various applications, it has not undergone extensive real-world application testing; its effectiveness and reliability across diverse scenarios remain largely unverified.

Language Comprehension and Generation: The base model was trained primarily on standard English. Even though it was fine-tuned on a Turkish dataset, its performance in understanding and generating slang, informal language, or other languages may be limited, leading to potential errors or misinterpretations.

Generation of False Information: Users should be aware that VeriUS LLM may produce inaccurate or misleading information. Outputs should be treated as starting points or suggestions rather than definitive answers.