---
language:
- en
- ko
pipeline_tag: text-generation
inference: false
tags:
- facebook
- meta
- pytorch
- llama
- llama-2
- llama-2-chat
license: apache-2.0
library_name: peft
---
# komt-Llama-2-13b-hf-ggml

https://github.com/davidkim205/komt

This repository provides GGML quantizations (4-bit and other precisions) of [korean Llama 2 13B](https://huggingface.co/davidkim205/komt-Llama-2-13b-hf), created with [llama.cpp](https://github.com/ggerganov/llama.cpp).
Because these files use the same GGML format as TheBloke's releases, they are supported by the same libraries and UIs.
The following description is adapted from [TheBloke/Llama-2-13B-chat-GGML](https://huggingface.co/TheBloke/Llama-2-13B-chat-GGML#metas-llama-2-13b-chat-ggml).

GGML files are for CPU + GPU inference using [llama.cpp](https://github.com/ggerganov/llama.cpp) and libraries and UIs which support this format, such as:
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a powerful GGML web UI with full GPU acceleration out of the box. Especially good for storytelling.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with GPU acceleration via the c_transformers backend.
* [LM Studio](https://lmstudio.ai/), a fully featured local GUI. Supports full GPU acceleration on macOS. Also supports Windows, without GPU acceleration.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most popular web UI. Requires extra steps to enable GPU acceleration via the llama.cpp backend.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with LangChain support and an OpenAI-compatible AI server.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with an OpenAI-compatible API server.

## Model Details

* **Model Developers**: davidkim (Changyeon Kim)
* **Repository**: https://github.com/davidkim205/komt
* **Quant methods**: q4_0, q4_1, q5_0, q5_1, q2_k, q3_k, q3_k_m, q3_k_l, q4_k, q4_k_s, q4_k_m, q5_k, q5_k_s, q5_k_m, q8_0

## Prompt Template
```
### instruction: {prompt}

### Response:
```
Example (the prompt asks, "What is the mandatory interval for comprehensive (periodic) vehicle inspections?"):
```
### instruction: 자동차 종합(정기)검사 의무기간은 얼마인가요?

### Response:
```
response:
```
### instruction: 자동차 종합(정기)검사 의무기간은 얼마인가요?

### Response:자동차 종합(정기)검사는 2년 1991년 7월 1일에 고시된 '자동차 보험료 조정기준'에서 취리로부터 제정된 기준 상 경량 살수차를 제외한 자동차 모든 승용자동차는 2년마다 필요하다. 이 법은 차량에 관계없이 2년마다 정기검사를 해야한다고 규제했다.
```
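For programmatic use of the template above, a minimal sketch with [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) (listed above) might look like the following. This is an illustration rather than an officially tested snippet: it assumes a llama-cpp-python release that can still load GGML files (later releases switched to GGUF) and reuses the q8_0 file path from the Usage section below.

```python
# Minimal sketch: querying the GGML model through llama-cpp-python.
# Assumptions: a GGML-capable llama-cpp-python release, and the
# q8_0 file at the path shown (matching the Usage logs below).
from llama_cpp import Llama

llm = Llama(
    model_path="models/komt-Llama-2-13b-hf-ggml/ggml-model-q8_0.bin",
    n_ctx=512,  # context length used in the logs below
)

# Fill the prompt template from the section above.
prompt = "### instruction: 자동차 종합(정기)검사 의무기간은 얼마인가요?\n\n### Response:"
output = llm(prompt, max_tokens=256, temperature=0.8, stop=["### instruction:"])
print(output["choices"][0]["text"])
```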
## Usage

When using the original [llama.cpp](https://github.com/ggerganov/llama.cpp):
```
make -j && ./main -m models/komt-Llama-2-13b-hf-ggml/ggml-model-q8_0.bin -p "### instruction: 영화 해리포터 시리즈 배급사가 어디야\n\n### Response:"
```
When using the modified llama.cpp for Korean multi-task (recommended; see https://github.com/davidkim205/komt/tree/main/llama.cpp), the prompt template is applied for you, so the raw question can be passed directly:
```
make -j && ./main -m models/komt-Llama-2-13b-hf-ggml/ggml-model-q8_0.bin -p "영화 해리포터 시리즈 배급사가 어디야"
```
response (the prompt asks which company distributes the Harry Potter film series):
```
$ make -j && ./main -m models/komt-Llama-2-13b-hf-ggml/ggml-model-q8_0.bin -p "영화 해리포터 시리즈 배급사가 어디야"
I llama.cpp build info:
I UNAME_S: Linux
I UNAME_P: x86_64
I UNAME_M: x86_64
I CFLAGS: -I. -O3 -std=c11 -fPIC -DNDEBUG -Wall -Wextra -Wpedantic -Wcast-qual -Wdouble-promotion -Wshadow -Wstrict-prototypes -Wpointer-arith -Wmissing-prototypes -pthread -march=native -mtune=native -DGGML_USE_K_QUANTS
I CXXFLAGS: -I. -I./examples -O3 -std=c++11 -fPIC -DNDEBUG -Wall -Wextra -Wpedantic -Wcast-qual -Wno-unused-function -Wno-multichar -pthread -march=native -mtune=native -DGGML_USE_K_QUANTS
I LDFLAGS:
I CC: cc (Ubuntu 9.4.0-1ubuntu1~20.04.1) 9.4.0
I CXX: g++ (Ubuntu 9.4.0-1ubuntu1~20.04.1) 9.4.0

make: Nothing to be done for 'default'.
main: build = 6 (01a61bf)
main: seed = 1692190774
llama.cpp: loading model from models/komt-Llama-2-13b-hf-ggml/ggml-model-q8_0.bin
llama_model_load_internal: format = ggjt v3 (latest)
llama_model_load_internal: n_vocab = 32000
llama_model_load_internal: n_ctx = 512
llama_model_load_internal: n_embd = 5120
llama_model_load_internal: n_mult = 6912
llama_model_load_internal: n_head = 40
llama_model_load_internal: n_head_kv = 40
llama_model_load_internal: n_layer = 40
llama_model_load_internal: n_rot = 128
llama_model_load_internal: n_gqa = 1
llama_model_load_internal: rnorm_eps = 5.0e-06
llama_model_load_internal: n_ff = 13824
llama_model_load_internal: freq_base = 10000.0
llama_model_load_internal: freq_scale = 1
llama_model_load_internal: ftype = 7 (mostly Q8_0)
llama_model_load_internal: model size = 13B
llama_model_load_internal: ggml ctx size = 0.11 MB
llama_model_load_internal: mem required = 13152.13 MB (+ 400.00 MB per state)
llama_new_context_with_model: kv self size = 400.00 MB
llama_new_context_with_model: compute buffer total size = 75.35 MB

system_info: n_threads = 8 / 16 | AVX = 1 | AVX2 = 1 | AVX512 = 1 | AVX512_VBMI = 1 | AVX512_VNNI = 1 | FMA = 1 | NEON = 0 | ARM_FMA = 0 | F16C = 1 | FP16_VA = 0 | WASM_SIMD = 0 | BLAS = 0 | SSE3 = 1 | VSX = 0 |
sampling: repeat_last_n = 64, repeat_penalty = 1.100000, presence_penalty = 0.000000, frequency_penalty = 0.000000, top_k = 40, tfs_z = 1.000000, top_p = 0.950000, typical_p = 1.000000, temp = 0.800000, mirostat = 0, mirostat_lr = 0.100000, mirostat_ent = 5.000000
generate: n_ctx = 512, n_batch = 512, n_predict = -1, n_keep = 0


### instruction: 영화 해리포터 시리즈 배급사가 어디야

### Response:워너 브라더스 해리포터(Harry Potter)는 J. K. 롤링이 쓴 판타지 소설이다. 1997년부터 2007년까지 총 7권으로 발행되었고, 전 세계적으로 많은 인기를 끌었다. 영국에서는 블룸버그(Bloomsbury), 미국에서는 워너 브라더스(Warner Brothers)가 각각 출판하였다. 현재 전 세계적으로 2억 4,000만 부 이상의 판매고를 올리고 있으며, 전 세계 대부분의 문학가들에게 영향을 주었다.
### check_end_of_text
 [end of text]

llama_print_timings: load time = 801.73 ms
llama_print_timings: sample time = 108.54 ms / 308 runs (0.35 ms per token, 2837.66 tokens per second)
llama_print_timings: prompt eval time = 2651.47 ms / 43 tokens (61.66 ms per token, 16.22 tokens per second)
llama_print_timings: eval time = 120629.25 ms / 307 runs (392.93 ms per token, 2.54 tokens per second)
llama_print_timings: total time = 123440.86 ms
```
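The same GGML file can also be driven from Python via [ctransformers](https://github.com/marella/ctransformers), one of the libraries listed above. The sketch below is an illustration added here, not an officially tested snippet: it assumes a GGML-era ctransformers build and reuses the q8_0 file path and the temperature shown in the log above.

```python
# Minimal sketch: running the GGML file with ctransformers.
# Assumptions: a ctransformers version with GGML support, and the
# q8_0 file at the path used in the CLI examples above.
from ctransformers import AutoModelForCausalLM

llm = AutoModelForCausalLM.from_pretrained(
    "models/komt-Llama-2-13b-hf-ggml/ggml-model-q8_0.bin",
    model_type="llama",  # GGML Llama-family weights
)

# Fill the prompt template from the Prompt Template section.
prompt = "### instruction: 영화 해리포터 시리즈 배급사가 어디야\n\n### Response:"
print(llm(prompt, max_new_tokens=256, temperature=0.8, stop=["### instruction:"]))
```

As with the CLI examples, stopping on the next `### instruction:` marker keeps the model from running on into a new turn.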