brandolorian committed on
Commit
a4df772
1 parent: 97d632c

End of training

README.md ADDED
@@ -0,0 +1,56 @@
+ ---
+ license: other
+ base_model: Qwen/Qwen1.5-0.5B
+ tags:
+ - generated_from_trainer
+ model-index:
+ - name: answer-Qwen-stioning
+   results: []
+ ---
+
+ <!-- This model card has been generated automatically according to the information the Trainer had access to. You
+ should probably proofread and complete it, then remove this comment. -->
+
+ # answer-Qwen-stioning
+
+ This model is a fine-tuned version of [Qwen/Qwen1.5-0.5B](https://huggingface.co/Qwen/Qwen1.5-0.5B) on an unspecified dataset.
+ It achieves the following results on the evaluation set:
+ - eval_loss: 2.6400
+ - eval_runtime: 68.7183
+ - eval_samples_per_second: 178.744
+ - eval_steps_per_second: 22.352
+ - epoch: 3.0
+ - step: 9213
+
+ ## Model description
+
+ More information needed
+
+ ## Intended uses & limitations
+
+ More information needed
+
+ ## Training and evaluation data
+
+ More information needed
+
+ ## Training procedure
+
+ ### Training hyperparameters
+
+ The following hyperparameters were used during training:
+ - learning_rate: 2e-05
+ - train_batch_size: 16
+ - eval_batch_size: 8
+ - seed: 42
+ - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
+ - lr_scheduler_type: linear
+ - num_epochs: 9
+ - mixed_precision_training: Native AMP
+
+ ### Framework versions
+
+ - Transformers 4.38.0.dev0
+ - Pytorch 2.1.0+cu121
+ - Datasets 2.17.0
+ - Tokenizers 0.15.2
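The reported metrics and step count are internally consistent. A small sanity-check sketch of the arithmetic, assuming no gradient accumulation (the card does not say):

```python
# Sanity-check the numbers reported in the model card above.
# Assumes samples = steps * batch_size, i.e. no gradient accumulation.

# Training: step 9213 was reached at epoch 3.0, so one epoch is 3071 steps.
steps_per_epoch = 9213 / 3.0           # 3071.0
train_samples = steps_per_epoch * 16   # train_batch_size = 16 -> ~49,136 samples/epoch

# Evaluation: runtime * throughput recovers the eval-set size, and the
# ratio of the two throughputs recovers the eval batch size.
eval_samples = 68.7183 * 178.744       # ~12,283 samples
eval_batch = 178.744 / 22.352          # ~8.0, matching eval_batch_size

print(int(steps_per_epoch), int(train_samples), round(eval_samples), round(eval_batch))
# 3071 49136 12283 8
```

Note the card lists `num_epochs: 9` while the run stopped at epoch 3.0; the arithmetic above uses the actually reached epoch count.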
generation_config.json ADDED
@@ -0,0 +1,6 @@
+ {
+   "bos_token_id": 151643,
+   "eos_token_id": 151643,
+   "max_new_tokens": 2048,
+   "transformers_version": "4.38.0.dev0"
+ }
model.safetensors CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:95bc2d75b9e1e274ad98382036f9c40951fd158f4a4d674d5f60c3fd7c5a1b49
+ oid sha256:dd5cf8b0666cc281fcbe36f06b6430412c56e5e6a8b2ea63b46394377c403a32
  size 1855983640
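What is diffed above is not the weights file itself but its Git LFS pointer: three `key value` lines giving the spec version, a sha256 object id, and the byte size. A minimal parser for that format (the helper name `parse_lfs_pointer` is ours):

```python
def parse_lfs_pointer(text: str) -> dict:
    """Parse a Git LFS pointer file into its key/value fields."""
    fields = {}
    for line in text.strip().splitlines():
        key, _, value = line.partition(" ")
        fields[key] = value
    return fields

pointer = """\
version https://git-lfs.github.com/spec/v1
oid sha256:dd5cf8b0666cc281fcbe36f06b6430412c56e5e6a8b2ea63b46394377c403a32
size 1855983640
"""
info = parse_lfs_pointer(pointer)
algo, digest = info["oid"].split(":")
print(algo, int(info["size"]))  # sha256 1855983640
```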
runs/Feb19_03-47-35_020d1a1e0686/events.out.tfevents.1708314456.020d1a1e0686.980.0 CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:0091dab303c39f9c2f514c020099154fed4a820382e9ecc97f124a745debf17d
- size 7060
+ oid sha256:1699038897dc7559ca27b4b5bf75eb11de3d33dfc1efdefd95b09b0417824f15
+ size 8273
tmp-checkpoint-9213/config.json ADDED
@@ -0,0 +1,28 @@
+ {
+   "_name_or_path": "Qwen/Qwen1.5-0.5B",
+   "architectures": [
+     "Qwen2ForCausalLM"
+   ],
+   "attention_dropout": 0.0,
+   "bos_token_id": 151643,
+   "eos_token_id": 151643,
+   "hidden_act": "silu",
+   "hidden_size": 1024,
+   "initializer_range": 0.02,
+   "intermediate_size": 2816,
+   "max_position_embeddings": 32768,
+   "max_window_layers": 21,
+   "model_type": "qwen2",
+   "num_attention_heads": 16,
+   "num_hidden_layers": 24,
+   "num_key_value_heads": 16,
+   "rms_norm_eps": 1e-06,
+   "rope_theta": 1000000.0,
+   "sliding_window": 32768,
+   "tie_word_embeddings": true,
+   "torch_dtype": "float32",
+   "transformers_version": "4.38.0.dev0",
+   "use_cache": true,
+   "use_sliding_window": false,
+   "vocab_size": 151936
+ }
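This config pins down the model size. A sketch recomputing the parameter count from the fields above, assuming the standard Qwen2 layout (biased q/k/v projections, bias-free o/gate/up/down, RMSNorm weights only, and tied embeddings per `tie_word_embeddings: true`); at `float32` the total accounts for the 1,855,983,640-byte safetensors file up to the file's small JSON header:

```python
# Recompute the Qwen1.5-0.5B parameter count from the config fields above.
# Assumption: Qwen2 layout with biased q/k/v, bias-free o/gate/up/down,
# RMSNorm (weight only), and input/output embeddings tied.
vocab, hidden, inter, layers = 151936, 1024, 2816, 24
heads, kv_heads = 16, 16
head_dim = hidden // heads
kv_dim = kv_heads * head_dim

embed = vocab * hidden                        # shared with lm_head (tied)
attn = (hidden * hidden + hidden) \
     + 2 * (hidden * kv_dim + kv_dim) \
     + hidden * hidden                        # q (bias), k/v (bias), o (no bias)
mlp = 3 * hidden * inter                      # gate, up, down projections
norms = 2 * hidden                            # two RMSNorms per layer
per_layer = attn + mlp + norms

total = embed + layers * per_layer + hidden   # + final RMSNorm
print(f"{total:,} parameters")                # 463,987,712 parameters

# At 4 bytes per float32 weight this nearly matches the safetensors size;
# the ~33 KB remainder is the safetensors header.
assert abs(1_855_983_640 - 4 * total) < 100_000
```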
tmp-checkpoint-9213/generation_config.json ADDED
@@ -0,0 +1,6 @@
+ {
+   "bos_token_id": 151643,
+   "eos_token_id": 151643,
+   "max_new_tokens": 2048,
+   "transformers_version": "4.38.0.dev0"
+ }
tmp-checkpoint-9213/model.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:dd5cf8b0666cc281fcbe36f06b6430412c56e5e6a8b2ea63b46394377c403a32
+ size 1855983640
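Once Git LFS has replaced a pointer like this with the real content, the local file can be verified against the pointer's oid and size using only the standard library. A sketch (the helper name `matches_pointer` is ours; demonstrated on a throwaway file rather than the 1.8 GB checkpoint):

```python
import hashlib
import os
import tempfile

def matches_pointer(path: str, oid_hex: str, size: int) -> bool:
    """Check a local file against a Git LFS pointer's sha256 oid and byte size."""
    if os.path.getsize(path) != size:
        return False
    h = hashlib.sha256()
    with open(path, "rb") as f:
        # Hash in 1 MiB chunks so large checkpoints don't need to fit in memory.
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest() == oid_hex

# Demonstration on a small temporary file instead of the real checkpoint.
with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(b"hello")
    path = f.name
oid = hashlib.sha256(b"hello").hexdigest()
print(matches_pointer(path, oid, 5))  # True
os.unlink(path)
```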
tmp-checkpoint-9213/optimizer.pt ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:4ed2e00af91fc86d02179e2909420d6530265d4053af9d0fa48986cf80185c23
+ size 622406692
tmp-checkpoint-9213/training_args.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:156ef38c462ff12e8551beb130fa94f60ec351a654b354c44c750fc1b4965b67
+ size 4920