mlabonne committed on
Commit 414e0ac
1 Parent(s): 7b17ba0

Update README.md

Files changed (1):
  1. README.md +4 -17
README.md CHANGED
@@ -7,10 +7,11 @@ base_model: ai21labs/Jamba-v0.1
 model-index:
 - name: out
   results: []
+datasets:
+- chargoddard/Open-Platypus-Chat
 ---
 
-<!-- This model card has been generated automatically according to the information the Trainer had access to. You
-should probably proofread and complete it, then remove this comment. -->
+# Jambatypus
 
 [<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl)
 <details><summary>See axolotl config</summary>
@@ -95,24 +96,10 @@ special_tokens:
 
 </details><br>
 
-# out
-
-This model is a fine-tuned version of [ai21labs/Jamba-v0.1](https://huggingface.co/ai21labs/Jamba-v0.1) on the None dataset.
+This model is a fine-tuned version of [ai21labs/Jamba-v0.1](https://huggingface.co/ai21labs/Jamba-v0.1) on the [chargoddard/Open-Platypus-Chat](https://huggingface.co/datasets/chargoddard/Open-Platypus-Chat) dataset.
 It achieves the following results on the evaluation set:
 - Loss: 0.9573
 
-## Model description
-
-More information needed
-
-## Intended uses & limitations
-
-More information needed
-
-## Training and evaluation data
-
-More information needed
-
 ## Training procedure
 
 ### Training hyperparameters
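The evaluation loss of 0.9573 kept in the card is presumably the mean per-token cross-entropy in nats (the usual convention for Hugging Face `Trainer`/axolotl runs); under that assumption it corresponds to an eval perplexity of roughly 2.60. A minimal sketch of that conversion:

```python
import math

# Eval loss reported in the model card. Assumed to be the mean
# per-token cross-entropy in nats (standard Trainer convention).
eval_loss = 0.9573

# Perplexity is the exponential of the mean cross-entropy.
perplexity = math.exp(eval_loss)
print(f"perplexity ~= {perplexity:.2f}")  # ~2.60
```

This is only a reinterpretation of the reported number, not an additional measurement; if the loss were averaged differently (e.g. per sequence), the perplexity figure would change.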