dvijay committed on
Commit 772ca35
1 Parent(s): cadb18a

Update README.md

Files changed (1)
  1. README.md +2 -16
README.md CHANGED
@@ -8,29 +8,15 @@ model-index:
   results: []
 ---
 
-<!-- This model card has been generated automatically according to the information the Trainer had access to. You
-should probably proofread and complete it, then remove this comment. -->
-
 [<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl)
 # qlora-out
 
-This model is a fine-tuned version of [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) on the None dataset.
+This model is a fine-tuned version of [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) on the mhenrichsen/alpaca_2k_test dataset.
 It achieves the following results on the evaluation set:
 - Loss: 0.8850
 
-## Model description
-
-More information needed
-
-## Intended uses & limitations
-
-More information needed
-
-## Training and evaluation data
-
-More information needed
-
 ## Training procedure
+accelerate launch -m axolotl.cli.train examples/mistral/qlora.yml
 
 ### Training hyperparameters
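For context, here is a minimal usage sketch (not part of the commit) of how the QLoRA adapter produced by the training command above could be loaded on top of the Mistral-7B base model with transformers and peft. The adapter repo id `dvijay/qlora-out` and the Alpaca-style prompt are assumptions inferred from the model card title and dataset; substitute the actual repository and prompt format.

```python
# Hedged sketch: load the QLoRA adapter over mistralai/Mistral-7B-v0.1 and generate.
# ADAPTER_ID is a hypothetical repo id inferred from the model card title.
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

BASE_ID = "mistralai/Mistral-7B-v0.1"
ADAPTER_ID = "dvijay/qlora-out"  # assumption; replace with the real adapter repo

tokenizer = AutoTokenizer.from_pretrained(BASE_ID)
base_model = AutoModelForCausalLM.from_pretrained(
    BASE_ID, torch_dtype=torch.bfloat16, device_map="auto"
)
# Attach the LoRA weights produced by the axolotl QLoRA run.
model = PeftModel.from_pretrained(base_model, ADAPTER_ID)

# Alpaca-style prompt, assumed from the mhenrichsen/alpaca_2k_test dataset.
prompt = "### Instruction:\nExplain what QLoRA fine-tuning is.\n\n### Response:\n"
inputs = tokenizer(prompt, return_tensors="pt").to(base_model.device)
output = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```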