legraphista committed on
Commit 94e9f67
Parent: 3a4b4dd

Upload imatrix.log with huggingface_hub

Files changed (1)
imatrix.log +155 -0
imatrix.log ADDED
@@ -0,0 +1,155 @@
+ main: build = 3008 (1d8fca72)
+ main: built with cc (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0 for x86_64-linux-gnu
+ main: seed = 1716817514
+ llama_model_loader: loaded meta data with 22 key-value pairs and 219 tensors from internlm2-math-plus-1_8b-IMat-GGUF/internlm2-math-plus-1_8b.gguf (version GGUF V3 (latest))
+ llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
+ llama_model_loader: - kv 0: general.architecture str = internlm2
+ llama_model_loader: - kv 1: general.name str = InternLM2
+ llama_model_loader: - kv 2: internlm2.context_length u32 = 8192
+ llama_model_loader: - kv 3: internlm2.block_count u32 = 24
+ llama_model_loader: - kv 4: internlm2.embedding_length u32 = 2048
+ llama_model_loader: - kv 5: internlm2.feed_forward_length u32 = 8192
+ llama_model_loader: - kv 6: internlm2.rope.freq_base f32 = 1000000.000000
+ llama_model_loader: - kv 7: internlm2.attention.head_count u32 = 16
+ llama_model_loader: - kv 8: internlm2.attention.layer_norm_rms_epsilon f32 = 0.000010
+ llama_model_loader: - kv 9: internlm2.attention.head_count_kv u32 = 8
+ llama_model_loader: - kv 10: general.file_type u32 = 0
+ llama_model_loader: - kv 11: tokenizer.ggml.model str = llama
+ llama_model_loader: - kv 12: tokenizer.ggml.pre str = default
+ llama_model_loader: - kv 13: tokenizer.ggml.tokens arr[str,92544] = ["<unk>", "<s>", "</s>", "<0x00>", "<...
+ llama_model_loader: - kv 14: tokenizer.ggml.scores arr[f32,92544] = [0.000000, 0.000000, 0.000000, 0.0000...
+ llama_model_loader: - kv 15: tokenizer.ggml.token_type arr[i32,92544] = [2, 3, 3, 6, 6, 6, 6, 6, 6, 6, 6, 6, ...
+ llama_model_loader: - kv 16: tokenizer.ggml.add_space_prefix bool = false
+ llama_model_loader: - kv 17: tokenizer.ggml.bos_token_id u32 = 1
+ llama_model_loader: - kv 18: tokenizer.ggml.eos_token_id u32 = 2
+ llama_model_loader: - kv 19: tokenizer.ggml.padding_token_id u32 = 2
+ llama_model_loader: - kv 20: tokenizer.chat_template str = {{ bos_token }}{% for message in mess...
+ llama_model_loader: - kv 21: general.quantization_version u32 = 2
+ llama_model_loader: - type f32: 219 tensors
+ llm_load_vocab: mismatch in special tokens definition ( 405/92544 vs 259/92544 ).
+ llm_load_print_meta: format = GGUF V3 (latest)
+ llm_load_print_meta: arch = internlm2
+ llm_load_print_meta: vocab type = SPM
+ llm_load_print_meta: n_vocab = 92544
+ llm_load_print_meta: n_merges = 0
+ llm_load_print_meta: n_ctx_train = 8192
+ llm_load_print_meta: n_embd = 2048
+ llm_load_print_meta: n_head = 16
+ llm_load_print_meta: n_head_kv = 8
+ llm_load_print_meta: n_layer = 24
+ llm_load_print_meta: n_rot = 128
+ llm_load_print_meta: n_embd_head_k = 128
+ llm_load_print_meta: n_embd_head_v = 128
+ llm_load_print_meta: n_gqa = 2
+ llm_load_print_meta: n_embd_k_gqa = 1024
+ llm_load_print_meta: n_embd_v_gqa = 1024
+ llm_load_print_meta: f_norm_eps = 0.0e+00
+ llm_load_print_meta: f_norm_rms_eps = 1.0e-05
+ llm_load_print_meta: f_clamp_kqv = 0.0e+00
+ llm_load_print_meta: f_max_alibi_bias = 0.0e+00
+ llm_load_print_meta: f_logit_scale = 0.0e+00
+ llm_load_print_meta: n_ff = 8192
+ llm_load_print_meta: n_expert = 0
+ llm_load_print_meta: n_expert_used = 0
+ llm_load_print_meta: causal attn = 1
+ llm_load_print_meta: pooling type = 0
+ llm_load_print_meta: rope type = 0
+ llm_load_print_meta: rope scaling = linear
+ llm_load_print_meta: freq_base_train = 1000000.0
+ llm_load_print_meta: freq_scale_train = 1
+ llm_load_print_meta: n_yarn_orig_ctx = 8192
+ llm_load_print_meta: rope_finetuned = unknown
+ llm_load_print_meta: ssm_d_conv = 0
+ llm_load_print_meta: ssm_d_inner = 0
+ llm_load_print_meta: ssm_d_state = 0
+ llm_load_print_meta: ssm_dt_rank = 0
+ llm_load_print_meta: model type = ?B
+ llm_load_print_meta: model ftype = all F32
+ llm_load_print_meta: model params = 1.89 B
+ llm_load_print_meta: model size = 7.04 GiB (32.00 BPW)
+ llm_load_print_meta: general.name = InternLM2
+ llm_load_print_meta: BOS token = 1 '<s>'
+ llm_load_print_meta: EOS token = 2 '</s>'
+ llm_load_print_meta: UNK token = 0 '<unk>'
+ llm_load_print_meta: PAD token = 2 '</s>'
+ llm_load_print_meta: LF token = 13 '<0x0A>'
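Editor's note: the derived values in the llm_load_print_meta block follow arithmetically from the GGUF metadata dumped above. A minimal sanity-check sketch, assuming plain arithmetic only; the variable names below are local assumptions, not llama.cpp's API:

```python
# Hyperparameters as dumped by llama_model_loader above.
n_embd, n_head, n_head_kv = 2048, 16, 8
n_params = 1.89e9                         # "model params = 1.89 B"

n_gqa        = n_head // n_head_kv        # 16 / 8  = 2    -> "n_gqa = 2"
n_embd_head  = n_embd // n_head           # 2048/16 = 128  -> "n_embd_head_k = 128"
n_embd_k_gqa = n_head_kv * n_embd_head    # 8 * 128 = 1024 -> "n_embd_k_gqa = 1024"

# All-F32 ftype: 32 bits per weight (BPW) = 4 bytes per parameter.
size_gib = n_params * 4 / 2**30           # ~7.04          -> "model size = 7.04 GiB"
print(n_gqa, n_embd_head, n_embd_k_gqa, round(size_gib, 2))
```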
+ ggml_cuda_init: GGML_CUDA_FORCE_MMQ: no
+ ggml_cuda_init: CUDA_USE_TENSOR_CORES: yes
+ ggml_cuda_init: found 1 CUDA devices:
+ Device 0: NVIDIA GeForce RTX 4090, compute capability 8.9, VMM: yes
+ llm_load_tensors: ggml ctx size = 0.22 MiB
+ llm_load_tensors: offloading 24 repeating layers to GPU
+ llm_load_tensors: offloading non-repeating layers to GPU
+ llm_load_tensors: offloaded 25/25 layers to GPU
+ llm_load_tensors: CPU buffer size = 723.00 MiB
+ llm_load_tensors: CUDA0 buffer size = 6483.38 MiB
+ ..................................................................................
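Editor's note: the 723.00 MiB CPU buffer matches, to the last digit, the size of the F32 token-embedding table, which plausibly stays in host memory (an inference from the numbers; the log does not say which tensor it holds):

$$
92544 \ \text{tokens} \times 2048 \ \text{dims} \times 4 \ \text{bytes} = 758{,}120{,}448 \ \text{bytes} = 723.00 \ \text{MiB},
$$

and $723.00\,\text{MiB} + 6483.38\,\text{MiB} \approx 7.04\,\text{GiB}$, the total model size reported earlier.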
+ llama_new_context_with_model: n_ctx = 512
+ llama_new_context_with_model: n_batch = 512
+ llama_new_context_with_model: n_ubatch = 512
+ llama_new_context_with_model: flash_attn = 0
+ llama_new_context_with_model: freq_base = 1000000.0
+ llama_new_context_with_model: freq_scale = 1
+ llama_kv_cache_init: CUDA0 KV buffer size = 48.00 MiB
+ llama_new_context_with_model: KV self size = 48.00 MiB, K (f16): 24.00 MiB, V (f16): 24.00 MiB
+ llama_new_context_with_model: CUDA_Host output buffer size = 0.35 MiB
+ llama_new_context_with_model: CUDA0 compute buffer size = 184.75 MiB
+ llama_new_context_with_model: CUDA_Host compute buffer size = 5.01 MiB
+ llama_new_context_with_model: graph nodes = 774
+ llama_new_context_with_model: graph splits = 2
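Editor's note: the 48.00 MiB KV cache is consistent with an f16 cache over the parameters printed above:

$$
\underbrace{2}_{K+V} \times \underbrace{24}_{n\_layer} \times \underbrace{512}_{n\_ctx} \times \underbrace{1024}_{n\_embd\_k\_gqa} \times \underbrace{2}_{\text{bytes/f16}} = 50{,}331{,}648 \ \text{bytes} = 48.00 \ \text{MiB},
$$

split evenly into the K (24.00 MiB) and V (24.00 MiB) halves reported in the log.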
+
+ system_info: n_threads = 25 / 32 | AVX = 1 | AVX_VNNI = 0 | AVX2 = 1 | AVX512 = 1 | AVX512_VBMI = 1 | AVX512_VNNI = 1 | AVX512_BF16 = 1 | FMA = 1 | NEON = 0 | SVE = 0 | ARM_FMA = 0 | F16C = 1 | FP16_VA = 0 | WASM_SIMD = 0 | BLAS = 1 | SSE3 = 1 | SSSE3 = 1 | VSX = 0 | MATMUL_INT8 = 0 | LLAMAFILE = 1 |
+ compute_imatrix: tokenizing the input ..
+ compute_imatrix: tokenization took 149.089 ms
+ compute_imatrix: computing over 209 chunks with batch_size 512
+ compute_imatrix: 0.25 seconds per pass - ETA 0.87 minutes
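Editor's note: the chunk count and ETA follow directly from the tokenized input (107,008 tokens per the timings at the end) and the measured pass time:

$$
\frac{107008 \ \text{tokens}}{512 \ \text{tokens/chunk}} = 209 \ \text{chunks}, \qquad 209 \times 0.25 \ \text{s} = 52.25 \ \text{s} \approx 0.87 \ \text{min}.
$$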
+ [1]11.1312,[2]8.6363,[3]7.5192,[4]8.7685,[5]8.6457,[6]8.0250,[7]9.1250,[8]9.1274,[9]9.9765,
+ save_imatrix: stored collected data after 10 chunks in internlm2-math-plus-1_8b-IMat-GGUF/imatrix.dat
+ [10]10.3272,[11]11.1727,[12]11.3352,[13]12.9598,[14]13.4077,[15]14.4618,[16]15.1761,[17]15.8490,[18]14.8805,[19]15.3401,
+ save_imatrix: stored collected data after 20 chunks in internlm2-math-plus-1_8b-IMat-GGUF/imatrix.dat
+ [20]15.2807,[21]14.5097,[22]14.4950,[23]13.4617,[24]13.0109,[25]12.2200,[26]12.2099,[27]12.7869,[28]12.7652,[29]13.2829,
+ save_imatrix: stored collected data after 30 chunks in internlm2-math-plus-1_8b-IMat-GGUF/imatrix.dat
+ [30]13.7325,[31]13.6520,[32]12.9249,[33]12.4811,[34]12.2812,[35]12.2370,[36]12.0979,[37]12.4393,[38]12.8247,[39]13.0730,
+ save_imatrix: stored collected data after 40 chunks in internlm2-math-plus-1_8b-IMat-GGUF/imatrix.dat
+ [40]13.4472,[41]13.7132,[42]14.1578,[43]14.5769,[44]15.0185,[45]15.1708,[46]15.2992,[47]15.2611,[48]15.1091,[49]15.4024,
+ save_imatrix: stored collected data after 50 chunks in internlm2-math-plus-1_8b-IMat-GGUF/imatrix.dat
+ [50]15.7380,[51]15.8763,[52]16.2449,[53]16.2822,[54]16.5116,[55]16.7323,[56]17.0042,[57]17.1444,[58]17.3613,[59]17.3927,
+ save_imatrix: stored collected data after 60 chunks in internlm2-math-plus-1_8b-IMat-GGUF/imatrix.dat
+ [60]17.3337,[61]17.6352,[62]17.9523,[63]18.4370,[64]18.3717,[65]18.1018,[66]17.8581,[67]17.6437,[68]17.4301,[69]17.1963,
+ save_imatrix: stored collected data after 70 chunks in internlm2-math-plus-1_8b-IMat-GGUF/imatrix.dat
+ [70]17.0333,[71]16.9842,[72]16.7208,[73]16.3945,[74]16.5737,[75]16.7233,[76]16.7770,[77]16.7379,[78]16.9561,[79]17.0106,
+ save_imatrix: stored collected data after 80 chunks in internlm2-math-plus-1_8b-IMat-GGUF/imatrix.dat
+ [80]17.1238,[81]17.1064,[82]17.1041,[83]17.2104,[84]17.2325,[85]17.2776,[86]17.2832,[87]17.3172,[88]17.2650,[89]17.3473,
+ save_imatrix: stored collected data after 90 chunks in internlm2-math-plus-1_8b-IMat-GGUF/imatrix.dat
+ [90]17.4284,[91]17.5131,[92]17.4812,[93]17.3928,[94]17.2928,[95]17.2508,[96]17.1120,[97]17.1103,[98]17.0537,[99]16.9868,
+ save_imatrix: stored collected data after 100 chunks in internlm2-math-plus-1_8b-IMat-GGUF/imatrix.dat
+ [100]16.8327,[101]16.8049,[102]16.7018,[103]16.5576,[104]16.4292,[105]16.3660,[106]16.2770,[107]16.1450,[108]16.0753,[109]16.0705,
+ save_imatrix: stored collected data after 110 chunks in internlm2-math-plus-1_8b-IMat-GGUF/imatrix.dat
+ [110]16.0731,[111]15.9900,[112]15.9993,[113]16.0081,[114]15.9285,[115]15.8458,[116]15.9694,[117]15.9543,[118]15.9839,[119]15.8023,
+ save_imatrix: stored collected data after 120 chunks in internlm2-math-plus-1_8b-IMat-GGUF/imatrix.dat
+ [120]15.6437,[121]15.4694,[122]15.2755,[123]15.1011,[124]14.9437,[125]14.7883,[126]14.7195,[127]14.6397,[128]14.5264,[129]14.4086,
+ save_imatrix: stored collected data after 130 chunks in internlm2-math-plus-1_8b-IMat-GGUF/imatrix.dat
+ [130]14.3366,[131]14.2389,[132]14.1439,[133]14.0853,[134]13.9819,[135]13.8918,[136]13.8522,[137]13.7711,[138]13.6823,[139]13.6393,
+ save_imatrix: stored collected data after 140 chunks in internlm2-math-plus-1_8b-IMat-GGUF/imatrix.dat
+ [140]13.5538,[141]13.4733,[142]13.5984,[143]13.8155,[144]14.0751,[145]14.2823,[146]14.3138,[147]14.3540,[148]14.4225,[149]14.5055,
+ save_imatrix: stored collected data after 150 chunks in internlm2-math-plus-1_8b-IMat-GGUF/imatrix.dat
+ [150]14.5680,[151]14.5877,[152]14.6063,[153]14.7027,[154]14.7721,[155]14.8630,[156]14.8986,[157]14.9890,[158]15.0815,[159]15.1157,
+ save_imatrix: stored collected data after 160 chunks in internlm2-math-plus-1_8b-IMat-GGUF/imatrix.dat
+ [160]15.1994,[161]15.2529,[162]15.2996,[163]15.3659,[164]15.4310,[165]15.4594,[166]15.5148,[167]15.5710,[168]15.6024,[169]15.6428,
+ save_imatrix: stored collected data after 170 chunks in internlm2-math-plus-1_8b-IMat-GGUF/imatrix.dat
+ [170]15.6873,[171]15.7017,[172]15.7554,[173]15.8180,[174]15.8137,[175]15.9375,[176]16.0642,[177]16.2008,[178]16.3725,[179]16.4786,
+ save_imatrix: stored collected data after 180 chunks in internlm2-math-plus-1_8b-IMat-GGUF/imatrix.dat
+ [180]16.5336,[181]16.4838,[182]16.5160,[183]16.5718,[184]16.6541,[185]16.6811,[186]16.6915,[187]16.7102,[188]16.7611,[189]16.7740,
+ save_imatrix: stored collected data after 190 chunks in internlm2-math-plus-1_8b-IMat-GGUF/imatrix.dat
+ [190]16.7763,[191]16.8201,[192]16.8665,[193]16.9179,[194]16.8887,[195]16.9311,[196]16.9176,[197]16.9523,[198]16.9722,[199]17.0944,
+ save_imatrix: stored collected data after 200 chunks in internlm2-math-plus-1_8b-IMat-GGUF/imatrix.dat
+ [200]16.9780,[201]17.0414,[202]17.0243,[203]17.1840,[204]17.3468,[205]17.4907,[206]17.6111,[207]17.7104,[208]17.6297,[209]17.5631,
+ save_imatrix: stored collected data after 209 chunks in internlm2-math-plus-1_8b-IMat-GGUF/imatrix.dat
+
+ llama_print_timings: load time = 1311.79 ms
+ llama_print_timings: sample time = 0.00 ms / 1 runs ( 0.00 ms per token, inf tokens per second)
+ llama_print_timings: prompt eval time = 32872.80 ms / 107008 tokens ( 0.31 ms per token, 3255.21 tokens per second)
+ llama_print_timings: eval time = 0.00 ms / 1 runs ( 0.00 ms per token, inf tokens per second)
+ llama_print_timings: total time = 35006.68 ms / 107009 tokens
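Editor's note: the prompt-eval numbers are self-consistent, and the "inf tokens per second" rows are just 0.00 ms divided into 1 run (imatrix does no sampling or autoregressive eval), not an error:

$$
\frac{107008 \ \text{tokens}}{32.8728 \ \text{s}} \approx 3255.2 \ \text{tokens/s}, \qquad \frac{1}{3255.2 \ \text{tokens/s}} \approx 0.31 \ \text{ms/token}.
$$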
+
+ Final estimate: PPL = 17.5631 +/- 0.23703
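Editor's note: for reference, the perplexity reported alongside the collected importance-matrix data is the standard exponentiated mean negative log-likelihood over the $N$ evaluated tokens; the bracketed numbers above are this running estimate after each chunk, and the +/- term is a standard-error bound on the final value:

$$
\mathrm{PPL} = \exp\!\left( -\frac{1}{N} \sum_{i=1}^{N} \log p\left(x_i \mid x_{<i}\right) \right) = 17.5631 \pm 0.23703.
$$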