Update README.md
README.md CHANGED
@@ -22,10 +22,10 @@ As Aloobun's model is well performing and impressive on it's own, I decided to a

### Direct Use

-Chat
-Conversational
-Text Generation
-Function Calling
+- Chat
+- Conversational
+- Text Generation
+- Function Calling

## Bias, Risks, and Limitations

@@ -62,9 +62,11 @@ Use at your own risk. It's a great small model, owing to the base model before t
### Training Procedure

[LaserRMT](https://github.com/cognitivecomputations/laserRMT) was used to refine the weights, targeting the 16 highest-scoring weights identified by signal-to-noise ratio analysis.
+
This technique avoids training unnecessarily low-performing weights that can turn to garbage. By pruning these weights, the model size is decreased slightly.

![axolotl](https://github.com/OpenAccess-AI-Collective/axolotl/blob/main/image/axolotl-badge-web.png?raw=true)
+
Axolotl was used for training and dataset tokenization.

#### Preprocessing [optional]
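
The LaserRMT step above is only named in the diff. As a rough illustration of the underlying idea, here is a minimal sketch that scores each 2-D weight matrix by a signal-to-noise ratio derived from a Marchenko-Pastur noise edge and keeps the top 16. The function names and the noise-scale heuristic are my own assumptions for illustration, not the laserRMT project's actual API.

```python
import torch

def snr_score(weight: torch.Tensor) -> float:
    """Score a weight matrix: energy of singular values above an estimated
    Marchenko-Pastur noise edge, relative to the energy below it.
    Hypothetical sketch of the idea; not laserRMT's real implementation."""
    W = weight.detach().float()
    m, n = W.shape
    s = torch.linalg.svdvals(W)              # singular values of the matrix
    # Crude noise-scale estimate from the bulk of the spectrum (assumption).
    sigma = s.median() / (min(m, n) ** 0.5)
    # Largest singular value of an m x n pure-noise matrix is roughly
    # sigma * (sqrt(m) + sqrt(n)); treat everything below it as noise.
    mp_edge = sigma * (m ** 0.5 + n ** 0.5)
    signal = s[s > mp_edge]
    noise = s[s <= mp_edge]
    return (signal.square().sum() / noise.square().sum().clamp_min(1e-12)).item()

def pick_top_weights(model: torch.nn.Module, k: int = 16) -> list[str]:
    """Rank all 2-D weight matrices by SNR and return the k best, echoing
    the '16 highest-scoring weights' mentioned in the README."""
    scores = {
        name: snr_score(p)
        for name, p in model.named_parameters()
        if p.ndim == 2
    }
    return sorted(scores, key=scores.get, reverse=True)[:k]
```

In a workflow like the one described, refinement would then be restricted to the returned parameter names; how laserRMT actually selects and reduces layers may differ in detail.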
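
Likewise, the Axolotl line only names the tool. Assuming the standard Axolotl workflow (the config path `config.yml` is a placeholder), dataset tokenization and training are typically invoked like this:

```bash
# Tokenize/preprocess the dataset ahead of training
python -m axolotl.cli.preprocess config.yml

# Launch fine-tuning with the same config
accelerate launch -m axolotl.cli.train config.yml
```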