afrideva committed
Commit
3e5ef2f
1 Parent(s): 2ff7ce7

Upload README.md with huggingface_hub

Files changed (1)
  1. README.md +87 -0
README.md ADDED
@@ -0,0 +1,87 @@
---
base_model: masakhane/African-ultrachat-alpaca
datasets:
- masakhane/african-ultrachat
- untrachat_en
- sd
inference: true
license: gemma
model-index:
- name: zephyr-7b-gemma-sft-african-ultraalpaca
  results: []
model_creator: masakhane
model_name: African-ultrachat-alpaca
pipeline_tag: text-generation
quantized_by: afrideva
tags:
- alignment-handbook
- trl
- sft
- generated_from_trainer
- gguf
- ggml
- quantized
---

# African-ultrachat-alpaca-GGUF

Quantized GGUF model files for [African-ultrachat-alpaca](https://huggingface.co/masakhane/African-ultrachat-alpaca) from [masakhane](https://huggingface.co/masakhane).
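These files can be run with any GGUF-compatible runtime. Below is a minimal sketch using llama-cpp-python; the quantized filename is hypothetical (check this repo's file list for the actual names), and the chat template is read from the GGUF metadata.

```python
# Minimal sketch: load and chat with a quantized GGUF file via llama-cpp-python.
# The filename below is hypothetical; substitute one of the files in this repo.
from llama_cpp import Llama

llm = Llama(
    model_path="african-ultrachat-alpaca.q4_k_m.gguf",  # hypothetical name
    n_ctx=2048,       # context window
    n_gpu_layers=-1,  # offload all layers if built with GPU support
)

# create_chat_completion applies the chat template stored in the GGUF metadata.
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Greet me in Swahili."}],
    max_tokens=128,
)
print(out["choices"][0]["message"]["content"])
```

The llama.cpp CLI or other GGUF runtimes should load the same files without changes.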
## Original Model Card:

<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

# zephyr-7b-gemma-sft-african-ultraalpaca

This model is a fine-tuned version of [google/gemma-7b](https://huggingface.co/google/gemma-7b).

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters
The following hyperparameters were used during training (a rough `TrainingArguments` mapping is sketched after the list):
- learning_rate: 1e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- gradient_accumulation_steps: 2
- total_train_batch_size: 16 (1 per device × 8 devices × 2 accumulation steps)
- total_eval_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3
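The values above map onto `transformers.TrainingArguments` roughly as sketched below. This is a reconstruction from the reported numbers, not the authors' training script; `output_dir` and `bf16` are assumptions, and the 8-GPU layout comes from the launcher (e.g. `accelerate`), not from these arguments.

```python
# Sketch: TrainingArguments mirroring the reported hyperparameters.
# Per-device batch size 1 x 8 devices x 2 accumulation steps = total 16.
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="zephyr-7b-gemma-sft-african-ultraalpaca",  # assumed
    learning_rate=1e-5,
    per_device_train_batch_size=1,  # train_batch_size: 1
    per_device_eval_batch_size=1,   # eval_batch_size: 1
    gradient_accumulation_steps=2,
    num_train_epochs=3,
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    seed=42,
    bf16=True,  # assumption: common for Gemma fine-tunes, not stated above
    # The reported Adam betas (0.9, 0.999) and epsilon 1e-08 are the AdamW
    # defaults in transformers, so no optimizer override is needed.
)
```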

### Training results

| Training Loss | Epoch | Step  | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 1.0034        | 1.0   | 23628 | 1.0630          |
| 0.6403        | 2.0   | 47257 | 0.8788          |
| 0.2976        | 3.0   | 70884 | 0.8875          |

### Framework versions

- Transformers 4.39.0.dev0
- Pytorch 2.2.1+cu121
- Datasets 2.14.6
- Tokenizers 0.15.2
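A quick way to compare a local environment against these versions is sketched below; they document the training setup and are not strict requirements for running the GGUF files.

```python
# Sketch: report installed versions next to those listed in the card.
import datasets
import tokenizers
import torch
import transformers

expected = {
    "transformers": "4.39.0.dev0",  # development build, installed from source
    "torch": "2.2.1+cu121",
    "datasets": "2.14.6",
    "tokenizers": "0.15.2",
}
installed = {
    "transformers": transformers.__version__,
    "torch": torch.__version__,
    "datasets": datasets.__version__,
    "tokenizers": tokenizers.__version__,
}
for name, want in expected.items():
    have = installed[name]
    status = "matches" if have == want else f"differs (installed: {have})"
    print(f"{name}: expected {want} -> {status}")
```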