---
language:
- ko
- en
license: apache-2.0
tags:
- text-generation
- qwen2.5
- korean
- instruct
pipeline_tag: text-generation
---
## Notice
- ✅ Original model is [beomi/Qwen2.5-7B-Instruct-kowiki-qa](https://huggingface.co/beomi/Qwen2.5-7B-Instruct-kowiki-qa)
- ✅ Quantized by [teddylee777](https://huggingface.co/teddylee777) using [llama.cpp](https://github.com/ggerganov/llama.cpp) (a loading sketch follows below)
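
The GGUF file can also be loaded straight from Python. Below is a minimal sketch using llama-cpp-python's `Llama.from_pretrained`; the repo id is an assumption (this card does not state where the quantized file is hosted), the filename comes from the Modelfile in the Template section, and `n_ctx` is deliberately smaller than the Modelfile's `num_ctx 128000` to keep memory use modest.

```python
# Minimal loading sketch. Assumptions: repo id and 8K context window;
# the filename is taken from the Modelfile in the Template section.
# pip install llama-cpp-python huggingface_hub
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="teddylee777/Qwen2.5-7B-Instruct-kowiki-qa-gguf",  # assumed repo id
    filename="Qwen2.5-7B-Instruct-kowiki-qa-Q8_0.gguf",        # file referenced in the Modelfile
    n_ctx=8192,       # reduced from the Modelfile's 128000 to keep RAM usage modest
    verbose=False,
)

out = llm.create_chat_completion(
    messages=[
        {
            "role": "system",
            "content": "You are Qwen, created by Alibaba Cloud. You are a helpful assistant. "
                       "모든 대답은 한국어로 해주세요.",
        },
        # Illustrative prompt: "What is the capital of South Korea?"
        {"role": "user", "content": "대한민국의 수도는 어디인가요?"},
    ],
    temperature=0,
)
print(out["choices"][0]["message"]["content"])
```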
## 💬 Template
```
FROM Qwen2.5-7B-Instruct-kowiki-qa-Q8_0.gguf
TEMPLATE """{{- if .System }}
<|im_start|>system
{{ .System }}
<|im_end|>
{{- end }}
<|im_start|>user
{{ .Prompt }}
<|im_end|>
<|im_start|>assistant
"""
SYSTEM """You are Qwen, created by Alibaba Cloud. You are a helpful assistant. λͺ¨λ λλ΅μ νκ΅μ΄λ‘ ν΄μ£ΌμΈμ."""
PARAMETER temperature 0
PARAMETER num_ctx 128000
PARAMETER stop <|im_start|>
PARAMETER stop <|im_end|>
```
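
This is an Ollama Modelfile; the Korean sentence in the `SYSTEM` prompt means "Please answer everything in Korean." Assuming you have registered the model with Ollama, for example via `ollama create qwen2.5-7b-kowiki -f Modelfile` (the name `qwen2.5-7b-kowiki` is arbitrary), a minimal chat sketch with the `ollama` Python client could look like this:

```python
# Minimal chat sketch. Assumption: the Modelfile above was registered
# with Ollama under the arbitrary name "qwen2.5-7b-kowiki".
# pip install ollama
import ollama

response = ollama.chat(
    model="qwen2.5-7b-kowiki",  # assumed name passed to `ollama create`
    messages=[
        # Illustrative prompt: "Please introduce three traditional Korean foods."
        {"role": "user", "content": "한국의 전통 음식 세 가지를 소개해 주세요."},
    ],
)
print(response["message"]["content"])
```

The system prompt, temperature, context length, and stop tokens are baked in by the Modelfile, so they do not need to be repeated in the client call.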
## 🧑‍💻 Helpful Contents
- ✅ [How to load HuggingFace GGUF into LM Studio](https://youtu.be/bANQk--Maxs)
- ✅ [How to test llama3 using Ollama](https://youtu.be/12CuUQIPdM4)
- 🇰🇷 [LangChain Tutorial in Korean](https://wikidocs.net/book/14314)
- 🎥 Please subscribe and support on [YouTube](https://www.youtube.com/@teddynote)