kalyanmaram committed
Commit 3ddf02c
1 Parent(s): 69bdf72

End of training
README.md ADDED
@@ -0,0 +1,55 @@
+ ---
+ license: bigscience-bloom-rail-1.0
+ base_model: bigscience/bloom-560m
+ tags:
+ - generated_from_trainer
+ model-index:
+ - name: UNH-QA-bloom-560m-v3
+   results: []
+ ---
+
+ <!-- This model card has been generated automatically according to the information the Trainer had access to. You
+ should probably proofread and complete it, then remove this comment. -->
+
+ # UNH-QA-bloom-560m-v3
+
+ This model is a fine-tuned version of [bigscience/bloom-560m](https://huggingface.co/bigscience/bloom-560m) on an unknown dataset.
+
+ ## Model description
+
+ More information needed
+
+ ## Intended uses & limitations
+
+ More information needed
+
+ ## Training and evaluation data
+
+ More information needed
+
+ ## Training procedure
+
+ ### Training hyperparameters
+
+ The following hyperparameters were used during training:
+ - learning_rate: 2e-05
+ - train_batch_size: 2
+ - eval_batch_size: 8
+ - seed: 42
+ - gradient_accumulation_steps: 4
+ - total_train_batch_size: 8
+ - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
+ - lr_scheduler_type: linear
+ - num_epochs: 2
+ - mixed_precision_training: Native AMP
+
+ ### Training results
+
+
+
+ ### Framework versions
+
+ - Transformers 4.35.2
+ - Pytorch 2.1.0+cu118
+ - Datasets 2.15.0
+ - Tokenizers 0.15.0
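The hyperparameters listed in the card are internally consistent, and the one derived value, `total_train_batch_size`, follows from the others. A minimal sketch (an illustrative reconstruction, not code from this commit) showing the relationship:

```python
# Illustrative reconstruction (not from the commit): the hyperparameters
# reported in the model card above, and how total_train_batch_size is derived.
hparams = {
    "learning_rate": 2e-05,
    "train_batch_size": 2,       # per-device train batch size
    "eval_batch_size": 8,
    "seed": 42,
    "gradient_accumulation_steps": 4,
    "num_epochs": 2,
}

# The effective (total) train batch size is the per-device batch size
# multiplied by the number of gradient accumulation steps.
total_train_batch_size = (
    hparams["train_batch_size"] * hparams["gradient_accumulation_steps"]
)
print(total_train_batch_size)  # 8, matching the card's total_train_batch_size
```

With gradient accumulation, the optimizer steps once per 4 forward/backward passes, so gradients are effectively averaged over 8 examples per update.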
config.json ADDED
@@ -0,0 +1,32 @@
+ {
+   "_name_or_path": "bigscience/bloom-560m",
+   "apply_residual_connection_post_layernorm": false,
+   "architectures": [
+     "BloomForCausalLM"
+   ],
+   "attention_dropout": 0.0,
+   "attention_softmax_in_fp32": true,
+   "bias_dropout_fusion": true,
+   "bos_token_id": 1,
+   "eos_token_id": 2,
+   "hidden_dropout": 0.0,
+   "hidden_size": 1024,
+   "initializer_range": 0.02,
+   "layer_norm_epsilon": 1e-05,
+   "masked_softmax_fusion": true,
+   "model_type": "bloom",
+   "n_head": 16,
+   "n_inner": null,
+   "n_layer": 24,
+   "offset_alibi": 100,
+   "pad_token_id": 3,
+   "pretraining_tp": 1,
+   "skip_bias_add": true,
+   "skip_bias_add_qkv": false,
+   "slow_but_exact": false,
+   "torch_dtype": "float32",
+   "transformers_version": "4.35.2",
+   "unk_token_id": 0,
+   "use_cache": true,
+   "vocab_size": 250880
+ }
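A couple of shapes follow directly from the config values above (a minimal sketch using only numbers copied verbatim from config.json):

```python
# Minimal sketch: derived quantities from the config.json values above.
hidden_size = 1024
n_head = 16
vocab_size = 250880

# Per-head attention dimension: hidden_size split evenly across the 16 heads.
head_dim = hidden_size // n_head
print(head_dim)  # 64

# Token-embedding parameter count; in a ~560M-parameter model this
# vocab_size x hidden_size matrix is a large share of the total.
embedding_params = vocab_size * hidden_size
print(embedding_params)  # 256901120
```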
generation_config.json ADDED
@@ -0,0 +1,7 @@
+ {
+   "_from_model_config": true,
+   "bos_token_id": 1,
+   "eos_token_id": 2,
+   "pad_token_id": 3,
+   "transformers_version": "4.35.2"
+ }
model.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:a5f8526f8b4fe06b3c7170a20e9199d7f272ec22c5b8da1a6bd95ea2b1a8bfc5
+ size 2236892304
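Since config.json records `torch_dtype: float32` (4 bytes per weight), the pointer's `size` field gives a back-of-the-envelope parameter count. This is a rough check, not an exact count — the safetensors header occupies a few extra bytes:

```python
# Rough check (assumes 4-byte float32 weights, per torch_dtype in config.json).
safetensors_bytes = 2236892304        # "size" from the LFS pointer above
approx_params = safetensors_bytes // 4
print(approx_params)                  # ~559M parameters, consistent with "bloom-560m"
```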
runs/Dec05_19-48-54_93de9a45161d/events.out.tfevents.1701805745.93de9a45161d.605.0 ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:0fdd552154befad96e0f10d215eb5d02eaa3104d25b9b6ece2fb989743928c2e
+ size 4351
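The large files in this commit are stored via Git LFS, so the repository itself holds only small pointer files like the one above: three `key value` lines giving the spec version, a SHA-256 object id, and the byte size. A minimal parser (an illustrative sketch, using this pointer's text verbatim):

```python
# Illustrative sketch: parse a Git LFS pointer file (three "key value" lines).
pointer = """version https://git-lfs.github.com/spec/v1
oid sha256:0fdd552154befad96e0f10d215eb5d02eaa3104d25b9b6ece2fb989743928c2e
size 4351"""

# Each line is "key value"; split on the first space only.
fields = dict(line.split(" ", 1) for line in pointer.splitlines())
algo, digest = fields["oid"].split(":", 1)
print(algo, fields["size"])  # sha256 4351
```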
runs/Dec05_19-53-05_93de9a45161d/events.out.tfevents.1701805986.93de9a45161d.605.1 ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:b78fb0495a10be5c3c2ff29c74336ec70e1199851eb75248f55d7ea3523ce277
+ size 4351
runs/Dec05_20-01-38_93de9a45161d/events.out.tfevents.1701806499.93de9a45161d.605.2 ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:17e15cfdadc7dee7901ad3dc4f8870eccae33df1bd074537d1b95a2a9c0902b6
+ size 4351
runs/Dec05_20-01-45_93de9a45161d/events.out.tfevents.1701806505.93de9a45161d.605.3 ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:426f02a70cdcb2775e549fc5e896007d379b86e519c5d03c44490931b4b8dda0
+ size 4351
runs/Dec05_20-02-36_93de9a45161d/events.out.tfevents.1701806557.93de9a45161d.605.4 ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:55eec2c435b712da9c353593c3bd59ad69fe15ec6d0b4b28c67fc9e94c212645
+ size 4328
runs/Dec05_20-07-12_93de9a45161d/events.out.tfevents.1701806833.93de9a45161d.605.5 ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:c530b452c4f8205dd7c107ee47dd6aea26964ea493b6205bcef52d77cadf8f0e
+ size 4328
runs/Dec05_20-26-19_93de9a45161d/events.out.tfevents.1701807980.93de9a45161d.605.6 ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:768b19fbf03ccfa31a88954a46d95fabaaa298c2763bba5825d5301a3a7f4567
+ size 4676
training_args.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:8991387c19862a8784285b36e9079e04d17cd31e75180cdaf39c3a32fa3f812b
+ size 4536