Update README.md

Base Model: Solar-10.7B

Fine-Tuning Technique: Low-Rank Adaptation (LoRA)

Description: This model is based on Solar-10.7B, a state-of-the-art language model, and has been fine-tuned using LoRA. LoRA enables efficient fine-tuning with far fewer trainable parameters, making the model more adaptable and faster to deploy for specific tasks.
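Since the card highlights LoRA's parameter efficiency, a minimal NumPy sketch of the underlying idea may help (illustrative only — the layer sizes, rank, and `alpha` below are hypothetical and not the values used to train this model):

```python
import numpy as np

# LoRA sketch: instead of updating the full d_out x d_in weight matrix W,
# learn two small factors B (d_out x r) and A (r x d_in) with rank r much
# smaller than the layer dimensions. The adapted layer computes
#   y = W x + (alpha / r) * B A x
# while W stays frozen.

d_in, d_out, r = 64, 64, 4   # hypothetical sizes; Solar-10.7B layers are far larger
alpha = 8                    # LoRA scaling hyperparameter (assumed value)

rng = np.random.default_rng(0)
W = rng.normal(size=(d_out, d_in))       # frozen pretrained weight
A = rng.normal(size=(r, d_in)) * 0.01    # trainable low-rank factor
B = np.zeros((d_out, r))                 # zero-init: adapter starts as a no-op

x = rng.normal(size=(d_in,))

def adapted_forward(x):
    # Frozen path plus the scaled low-rank update.
    return W @ x + (alpha / r) * (B @ (A @ x))

# Before training, B is zero, so the adapted layer matches the frozen one.
print(np.allclose(adapted_forward(x), W @ x))  # True

# Trainable-parameter comparison for this single layer:
full_params = d_out * d_in          # full fine-tune: 4096
lora_params = r * (d_in + d_out)    # LoRA factors:   512
print(full_params, lora_params)
```

This is why LoRA adapters train and deploy quickly: only the small `A` and `B` matrices are updated and shipped, while the base model's weights are untouched.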
Train GPU: A100

Training Time: 15 hours