Include compressed Core ML versions of the large-v3 model.
I have uploaded the updated large-v3 Core ML model.
https://github.com/ggerganov/whisper.cpp/issues/1437#issuecomment-1807608344
The run below confirms that it loads and works correctly; note this line in particular:
> whisper_init_state: loading Core ML model from 'models/ggml-large-v3-encoder.mlmodelc'
```
./main -m models/ggml-large-v3.bin -f samples/jfk.wav
whisper_init_from_file_with_params_no_state: loading model from 'models/ggml-large-v3.bin'
whisper_model_load: loading model
whisper_model_load: n_vocab = 51866
whisper_model_load: n_audio_ctx = 1500
whisper_model_load: n_audio_state = 1280
whisper_model_load: n_audio_head = 20
whisper_model_load: n_audio_layer = 32
whisper_model_load: n_text_ctx = 448
whisper_model_load: n_text_state = 1280
whisper_model_load: n_text_head = 20
whisper_model_load: n_text_layer = 32
whisper_model_load: n_mels = 128
whisper_model_load: ftype = 1
whisper_model_load: qntvr = 0
whisper_model_load: type = 5 (large v3)
whisper_model_load: adding 1609 extra tokens
whisper_model_load: n_langs = 100
whisper_backend_init: using Metal backend
ggml_metal_init: allocating
ggml_metal_init: found device: Apple M2
ggml_metal_init: picking default device: Apple M2
ggml_metal_init: default.metallib not found, loading from source
ggml_metal_init: loading '/Users/solaoi/Projects/solaoi/whisper.cpp/ggml-metal.metal'
ggml_metal_init: GPU name: Apple M2
ggml_metal_init: GPU family: MTLGPUFamilyApple8 (1008)
ggml_metal_init: hasUnifiedMemory = true
ggml_metal_init: recommendedMaxWorkingSetSize = 17179.89 MB
ggml_metal_init: maxTransferRate = built-in GPU
ggml_metal_add_buffer: allocated 'backend ' buffer, size = 3117.88 MB, ( 3118.40 / 17179.89)
whisper_model_load: Metal buffer size = 3117.87 MB
whisper_model_load: model size = 3117.39 MB
whisper_backend_init: using Metal backend
ggml_metal_init: allocating
ggml_metal_init: found device: Apple M2
ggml_metal_init: picking default device: Apple M2
ggml_metal_init: default.metallib not found, loading from source
ggml_metal_init: loading '/Users/solaoi/Projects/solaoi/whisper.cpp/ggml-metal.metal'
ggml_metal_init: GPU name: Apple M2
ggml_metal_init: GPU family: MTLGPUFamilyApple8 (1008)
ggml_metal_init: hasUnifiedMemory = true
ggml_metal_init: recommendedMaxWorkingSetSize = 17179.89 MB
ggml_metal_init: maxTransferRate = built-in GPU
ggml_metal_add_buffer: allocated 'backend ' buffer, size = 220.20 MB, ( 3338.60 / 17179.89)
whisper_init_state: kv self size = 220.20 MB
ggml_metal_add_buffer: allocated 'backend ' buffer, size = 245.76 MB, ( 3584.36 / 17179.89)
whisper_init_state: kv cross size = 245.76 MB
whisper_init_state: loading Core ML model from 'models/ggml-large-v3-encoder.mlmodelc'
whisper_init_state: first run on a device may take a while ...
whisper_init_state: Core ML model loaded
ggml_metal_add_buffer: allocated 'backend ' buffer, size = 0.02 MB, ( 3593.88 / 17179.89)
whisper_init_state: compute buffer (conv) = 10.85 MB
ggml_metal_add_buffer: allocated 'backend ' buffer, size = 0.02 MB, ( 3593.90 / 17179.89)
whisper_init_state: compute buffer (cross) = 9.32 MB
ggml_metal_add_buffer: allocated 'backend ' buffer, size = 0.02 MB, ( 3593.91 / 17179.89)
whisper_init_state: compute buffer (decode) = 99.17 MB
ggml_metal_add_buffer: allocated 'backend ' buffer, size = 9.22 MB, ( 3603.14 / 17179.89)
ggml_metal_add_buffer: allocated 'backend ' buffer, size = 7.68 MB, ( 3610.82 / 17179.89)
ggml_metal_add_buffer: allocated 'backend ' buffer, size = 97.53 MB, ( 3708.35 / 17179.89)
system_info: n_threads = 4 / 8 | AVX = 0 | AVX2 = 0 | AVX512 = 0 | FMA = 0 | NEON = 1 | ARM_FMA = 1 | METAL = 1 | F16C = 0 | FP16_VA = 1 | WASM_SIMD = 0 | BLAS = 1 | SSE3 = 0 | SSSE3 = 0 | VSX = 0 | CUDA = 0 | COREML = 1 | OPENVINO = 0 |
main: processing 'samples/jfk.wav' (176000 samples, 11.0 sec), 4 threads, 1 processors, 5 beams + best of 5, lang = en, task = transcribe, timestamps = 1 ...
[00:00:00.300 --> 00:00:09.000] And so, my fellow Americans, ask not what your country can do for you, ask what you
[00:00:09.000 --> 00:00:11.000] can do for your country.
whisper_print_timings: load time = 984.30 ms
whisper_print_timings: fallbacks = 0 p / 0 h
whisper_print_timings: mel time = 7.56 ms
whisper_print_timings: sample time = 81.95 ms / 148 runs ( 0.55 ms per run)
whisper_print_timings: encode time = 1999.00 ms / 1 runs ( 1999.00 ms per run)
whisper_print_timings: decode time = 0.00 ms / 1 runs ( 0.00 ms per run)
whisper_print_timings: batchd time = 1377.15 ms / 146 runs ( 9.43 ms per run)
whisper_print_timings: prompt time = 0.00 ms / 1 runs ( 0.00 ms per run)
whisper_print_timings: total time = 5798.28 ms
ggml_metal_free: deallocating
ggml_metal_free: deallocating
```
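As shown in the log, whisper.cpp derives the Core ML encoder path from the ggml model filename by appending `-encoder.mlmodelc`, so the compiled model only needs to sit next to the ggml model. A minimal sketch of that convention (build steps per the whisper.cpp README; exact flags may differ between versions):

```shell
# Build whisper.cpp with Core ML support (per the project README):
#   WHISPER_COREML=1 make -j
#
# At runtime, the Core ML encoder path is derived from the ggml model path:
GGML_MODEL=models/ggml-large-v3.bin
COREML_MODEL="${GGML_MODEL%.bin}-encoder.mlmodelc"
echo "$COREML_MODEL"   # models/ggml-large-v3-encoder.mlmodelc
```

This matches the `whisper_init_state: loading Core ML model from 'models/ggml-large-v3-encoder.mlmodelc'` line in the log above.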
The uploaded file is tracked via Git LFS; the committed pointer file contains:

```
version https://git-lfs.github.com/spec/v1
oid sha256:47837be7594a29429ec08620043390c4d6d467f8bd362df09e9390ace76a55a4
size 1175711232
```
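After downloading the actual artifact (Git LFS resolves the pointer automatically on `git lfs pull`), the file can be checked against the `oid` above. A small sketch of such a check; the function name `verify_oid` is hypothetical, not part of whisper.cpp or Git LFS:

```shell
# Verify a downloaded file against the sha256 oid from a Git LFS pointer.
# $1: path to the downloaded file
# $2: expected sha256 hex digest (the part after "sha256:" in the pointer)
verify_oid() {
  actual=$(shasum -a 256 "$1" | awk '{print $1}')
  [ "$actual" = "$2" ]
}

# Hypothetical usage with the pointer above:
#   verify_oid ggml-large-v3-encoder.mlmodelc.zip \
#     47837be7594a29429ec08620043390c4d6d467f8bd362df09e9390ace76a55a4 \
#     && echo "checksum OK"
```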