Update README.md
README.md CHANGED

@@ -1,5 +1,11 @@
 ---
 library_name: peft
+license: apache-2.0
+datasets:
+- b-mc2/sql-create-context
+language:
+- en
+pipeline_tag: text-generation
 ---
 ## Training procedure
 
@@ -16,6 +22,12 @@ The following `bitsandbytes` quantization config was used during training:
 - bnb_4bit_use_double_quant: True
 - bnb_4bit_compute_dtype: bfloat16
 
+
+## Model Description
+
+This is a supervised fine-tuned (SFT) model for SQL-based text generation, trained with the LoRA (Low-Rank Adaptation) method.
+
+
 ## Model Summary
 
 train/loss : 0.4354
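
For readers who want to reproduce this setup, the two flags visible in this hunk map onto a `BitsAndBytesConfig` roughly as follows. This is a minimal sketch: `load_in_4bit` and `bnb_4bit_quant_type` are assumptions (typical QLoRA defaults), since those lines sit outside the diff context.

```python
import torch
from transformers import BitsAndBytesConfig

# Sketch of the bitsandbytes quantization config described above.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,                      # assumption: 4-bit loading, as in QLoRA
    bnb_4bit_quant_type="nf4",              # assumption: NF4 quantization
    bnb_4bit_use_double_quant=True,         # from the card
    bnb_4bit_compute_dtype=torch.bfloat16,  # from the card
)
```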
@@ -63,4 +75,4 @@ Note: Change the max_new_tokens length based on the question-context text input
 ### Framework versions
 
 
-- PEFT 0.4.0
+- PEFT 0.4.0
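
The Model Description added above names LoRA as the fine-tuning method. As a rough illustration of what that means in PEFT terms, a causal-LM LoRA adapter is configured along these lines; the rank, alpha, and dropout values are placeholders, not the card's actual hyperparameters.

```python
from peft import LoraConfig, get_peft_model, TaskType

# Illustrative only: a typical LoRA setup for causal-LM SFT with peft 0.4.0.
lora_config = LoraConfig(
    task_type=TaskType.CAUSAL_LM,
    r=16,               # assumed rank
    lora_alpha=32,      # assumed scaling factor
    lora_dropout=0.05,  # assumed dropout
)
# Wrap a base model so only the low-rank adapter weights are trained:
# model = get_peft_model(base_model, lora_config)
```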
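
Putting the pieces together (PEFT 0.4.0 and the card's note about sizing `max_new_tokens` to the question-context input), loading and querying the adapter would look roughly like this. The base-model and adapter IDs are hypothetical placeholders, and the prompt format merely mirrors the schema-plus-question style of b-mc2/sql-create-context.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

BASE_MODEL = "base-model-id"    # placeholder: the card's diff does not name the base checkpoint
ADAPTER_ID = "adapter-repo-id"  # placeholder: this repository's ID

tokenizer = AutoTokenizer.from_pretrained(BASE_MODEL)
base = AutoModelForCausalLM.from_pretrained(BASE_MODEL, torch_dtype=torch.bfloat16)
model = PeftModel.from_pretrained(base, ADAPTER_ID)  # attach the LoRA adapter (peft 0.4.0 API)

# Illustrative prompt: CREATE TABLE context followed by a natural-language question.
prompt = "CREATE TABLE head (age INTEGER)\n-- How many heads are older than 56?\n"
inputs = tokenizer(prompt, return_tensors="pt")

# Per the card's note, scale max_new_tokens with the question-context length.
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```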
|