mav23 committed
Commit 1cbf58f
1 Parent(s): cdcd2aa

Upload folder using huggingface_hub

Files changed (3)
  1. .gitattributes +1 -0
  2. README.md +69 -0
  3. mistrilitary-7b.Q4_0.gguf +3 -0
.gitattributes CHANGED
@@ -33,3 +33,4 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
  *.zip filter=lfs diff=lfs merge=lfs -text
  *.zst filter=lfs diff=lfs merge=lfs -text
  *tfevents* filter=lfs diff=lfs merge=lfs -text
+ mistrilitary-7b.Q4_0.gguf filter=lfs diff=lfs merge=lfs -text
README.md ADDED
@@ -0,0 +1,69 @@
+ ---
+ library_name: transformers
+ license: apache-2.0
+ base_model: Heralax/army-pretrain-1
+ tags:
+ - generated_from_trainer
+ model-index:
+ - name: us-army-finetune-1
+   results: []
+ ---
+
+ Was torn between calling it MiLLM and Mistrillitary. *Sigh*, naming is one of the two great problems in computer science...
+
+ This is a domain-expert finetune based on the US Army field manuals (the ones that are published and available for civvies like me). It's focused on factual question answering only, but seems to be able to answer slightly deeper questions in a pinch.
+
+ ## Model Quirks
+
+ - I had to focus on the Army field manuals because the armed forces publish a truly massive amount of text.
+ - No generalist assistant data was included, which means this is very, very focused on QA and may be inflexible.
+ - Experimental change: the data was mostly generated by a smaller model, Mistral NeMo. Quality seems unaffected and costs are much lower, though I had problems with the open-ended questions not coming out in the right format.
+ - Low temperature recommended. Screenshots use 0.
+ - ChatML prompt format (see the inference sketch after this list).
+ - No special tokens added.
+
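+ A minimal inference sketch, assuming llama-cpp-python, a ChatML-style prompt, and temperature 0; the repo id and the example question below are illustrative assumptions, not something specified by this card:
+
+ ```python
+ # Hedged usage sketch: llama-cpp-python with the Q4_0 GGUF, ChatML prompt, temperature 0.
+ # The repo_id is an assumption; point it at wherever this GGUF actually lives.
+ from huggingface_hub import hf_hub_download
+ from llama_cpp import Llama
+
+ model_path = hf_hub_download(
+     repo_id="mav23/mistrilitary-7b-GGUF",   # assumed repo id
+     filename="mistrilitary-7b.Q4_0.gguf",
+ )
+
+ llm = Llama(model_path=model_path, n_ctx=4096)
+
+ # ChatML-style turns; the card notes no special tokens were added,
+ # so these markers are formatted as plain text.
+ prompt = (
+     "<|im_start|>user\n"
+     "What is the purpose of a warning order?<|im_end|>\n"
+     "<|im_start|>assistant\n"
+ )
+
+ out = llm(prompt, max_tokens=256, temperature=0.0, stop=["<|im_end|>"])
+ print(out["choices"][0]["text"])
+ ```
+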
+ Examples:
+
+ ![image/png](https://cdn-uploads.huggingface.co/production/uploads/64825ebceb4befee377cf8ac/KakWvjSMwSHkISPGoB0RH.png)
+
+ ![image/png](https://cdn-uploads.huggingface.co/production/uploads/64825ebceb4befee377cf8ac/7rlJxcjGECqFuEFmYC3aV.png)
+
+ ![image/png](https://cdn-uploads.huggingface.co/production/uploads/64825ebceb4befee377cf8ac/mzxk9Qa9cveFx7PArnAmB.png)
+
+ ![image/png](https://cdn-uploads.huggingface.co/production/uploads/64825ebceb4befee377cf8ac/2KtpGhqReVPj4Wh3fles5.png)
+
+ ![image/png](https://cdn-uploads.huggingface.co/production/uploads/64825ebceb4befee377cf8ac/Pz70D922utg5ZZCqYiGpT.png)
+
+ ## Training procedure
+
+ ### Training hyperparameters
+
+ The following hyperparameters were used during training (a rough `TrainingArguments` sketch follows the list):
+ - learning_rate: 2e-05
+ - train_batch_size: 2
+ - eval_batch_size: 1
+ - seed: 42
+ - distributed_type: multi-GPU
+ - num_devices: 5
+ - gradient_accumulation_steps: 6
+ - total_train_batch_size: 60
+ - total_eval_batch_size: 5
+ - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
+ - lr_scheduler_type: cosine
+ - lr_scheduler_warmup_steps: 48
+ - num_epochs: 6
+
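+ As a hedged illustration only (the original training script is not included in this repo), here is roughly how those settings map onto `transformers.TrainingArguments`; the output directory is a placeholder and the dataset/Trainer wiring is omitted:
+
+ ```python
+ # Illustrative mapping of the reported hyperparameters onto TrainingArguments.
+ # NOT the original training setup; output_dir is a placeholder.
+ from transformers import TrainingArguments
+
+ training_args = TrainingArguments(
+     output_dir="us-army-finetune-1",   # placeholder, reusing the model-index name
+     learning_rate=2e-5,
+     per_device_train_batch_size=2,     # train_batch_size
+     per_device_eval_batch_size=1,      # eval_batch_size
+     gradient_accumulation_steps=6,
+     num_train_epochs=6,
+     lr_scheduler_type="cosine",
+     warmup_steps=48,
+     seed=42,
+     adam_beta1=0.9,
+     adam_beta2=0.999,
+     adam_epsilon=1e-8,
+ )
+
+ # Effective batch sizes with 5 GPUs (distributed_type: multi-GPU, num_devices: 5):
+ #   train: 2 per device * 5 devices * 6 accumulation steps = 60
+ #   eval:  1 per device * 5 devices = 5
+ ```
+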
+ ### Training results
+
+ It answers questions alright.
+
+ ### Framework versions
+
+ - Transformers 4.45.0
+ - PyTorch 2.3.1+cu121
+ - Datasets 2.21.0
+ - Tokenizers 0.20.0
mistrilitary-7b.Q4_0.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:495f2911f95d7ab2c9975e81db8df07b520625530ffe8090489c649951bca695
+ size 4108923328
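
The LFS pointer above records the expected SHA-256 and byte size of the quantized file. A small verification sketch, assuming the GGUF has already been downloaded (the local path is an assumption):

```python
# Check a downloaded GGUF against the sha256 and size recorded in the LFS pointer.
# The local path is an assumption; adjust it to wherever the file was saved.
import hashlib
import os

path = "mistrilitary-7b.Q4_0.gguf"
expected_sha256 = "495f2911f95d7ab2c9975e81db8df07b520625530ffe8090489c649951bca695"
expected_size = 4108923328

h = hashlib.sha256()
with open(path, "rb") as f:
    for chunk in iter(lambda: f.read(1 << 20), b""):  # 1 MiB chunks
        h.update(chunk)

assert os.path.getsize(path) == expected_size, "size mismatch"
assert h.hexdigest() == expected_sha256, "sha256 mismatch"
print("OK: file matches the LFS pointer")
```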