davidkim205 committed on
Commit be5502f • 1 Parent(s): eca0655

Update README.md

Files changed (1)
  1. README.md +8 -5
README.md CHANGED
@@ -2,7 +2,6 @@
 language:
 - en
 - ko
-model_type: llama
 pipeline_tag: text-generation
 inference: false
 tags:
@@ -12,10 +11,13 @@ tags:
 - llama
 - llama-2
 - llama-2-chat
-- ggml
 license: apache-2.0
+library_name: peft
 ---
-# komt-llama-2-7b-ggml
+# komt-Llama-2-7b-chat-hf-ggml
+
+https://github.com/davidkim205/komt
+
 This model is the [Korean Llama 2 7B-chat](https://huggingface.co/davidkim205/komt-Llama-2-7b-chat-hf) quantized to 4 bits with [llama.cpp](https://github.com/ggerganov/llama.cpp).
 
 
@@ -56,7 +58,8 @@ response:
 ```
 ### instruction: μžλ™μ°¨ μ’…ν•©(μ •κΈ°)검사 μ˜λ¬΄κΈ°κ°„μ€ μ–Όλ§ˆμΈκ°€μš”?
 
-### Response: μžλ™μ°¨ μ’…ν•©(μ •κΈ°)검사 μ˜λ¬΄κΈ°κ°„μ€ 2λ…„μž…λ‹ˆλ‹€. 이 κΈ°κ°„ λ™μ•ˆ 검사λ₯Ό 받지 μ•ŠμœΌλ©΄ κ³Όνƒœλ£Œκ°€ λΆ€κ³Όλ©λ‹ˆλ‹€. μžλ™μ°¨ μ’…ν•©(μ •κΈ°)검사 μ˜λ¬΄κΈ°κ°„μ€ 2013λ…„ 12μ›” 31일뢀터 μ‹œν–‰λ˜μ—ˆμŠ΅λ‹ˆλ‹€
+### Response:μžλ™μ°¨ μ’…ν•©(μ •κΈ°)κ²€μ‚¬λŠ” 2λ…„
+1991λ…„ 7μ›” 1일에 κ³ μ‹œλœ 'μžλ™μ°¨ λ³΄ν—˜λ£Œ μ‘°μ •κΈ°μ€€'μ—μ„œ μ·¨λ¦¬λ‘œλΆ€ν„° μ œμ •λœ κΈ°μ€€ 상 κ²½λŸ‰ μ‚΄μˆ˜μ°¨λ₯Ό μ œμ™Έν•œ μžλ™μ°¨ λͺ¨λ“  μŠΉμš©μžλ™μ°¨λŠ” 2λ…„λ§ˆλ‹€ ν•„μš”ν•˜λ‹€. 이 법은 μ°¨λŸ‰μ— 관계없이 2λ…„λ§ˆλ‹€ 정기검사λ₯Ό ν•΄μ•Όν•œλ‹€κ³  κ·œμ œν–ˆλ‹€.
 ```
 
 
@@ -118,4 +121,4 @@ generate: n_ctx = 512, n_batch = 512, n_predict = -1, n_keep = 0
 ### instruction: λˆ„μ „μ°¨λ‹¨κΈ°κ°€ λ‚΄λ €κ°€λŠ” μ΄μœ λŠ” λ¬΄μ—‡μž…λ‹ˆκΉŒ?
 
 ### Response:λˆ„μ „μ°¨λ‹¨κΈ°κ°€ λ‚΄λ €κ°€λŠ” μ΄μœ λŠ” λ‹€μŒκ³Ό κ°™μŠ΅λ‹ˆλ‹€:1. κ³ μž₯μ΄λ‚˜ μ˜€μž‘λ™ 확인: λˆ„μ „μ°¨λ‹¨κΈ°κ°€ λͺ‡ μ°¨λ‘€ λ“€μ–΄μ˜€λ©΄ κ³ μž₯이 λ‚˜κ±°λ‚˜ μ˜€μž‘λ™μ„ λ°©μ§€ν•˜λŠ” 데 도움이 λ©λ‹ˆλ‹€.2. λˆ„μ „ 사고 ν”Όν•΄: λ§Žμ€ λˆ„μ „μ°¨λ‹¨κΈ°κ°€ λ‚΄λ €κ°€λ©΄ μ§€μ—­μ—μ„œ μΌμ–΄λ‚˜λŠ” λˆ„μ „ μ‚¬κ³ μ˜ 영ν–₯을 μ€„μ΄λŠ” κ²ƒμœΌλ‘œ λ‚˜νƒ€λ‚¬μŠ΅λ‹ˆλ‹€.3. μ•ˆμ •μ„±: λˆ„μ „μ°¨λ‹¨κΈ°κ°€ λ‚΄λ €κ°€λ©΄ μ „λ°˜μ μΈ μ•ˆμ •μ„±μ΄ ν–₯μƒλ©λ‹ˆλ‹€.
-```
+```
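The sample generations in this README all follow a plain instruction/response layout. As a minimal sketch (the exact template the model was trained with is an assumption inferred from the examples above, and `build_prompt` is a hypothetical helper), the prompt could be assembled like this:

```python
def build_prompt(instruction: str) -> str:
    """Assemble a prompt in the '### instruction / ### Response' layout
    shown in the README's sample generations (assumed template)."""
    return f"### instruction: {instruction}\n\n### Response:"

# One of the questions from the README's examples:
prompt = build_prompt("μžλ™μ°¨ μ’…ν•©(μ •κΈ°)검사 μ˜λ¬΄κΈ°κ°„μ€ μ–Όλ§ˆμΈκ°€μš”?")
print(prompt)
```

The resulting string would then be passed to the 4-bit GGML file through llama.cpp (for example via the `main` example's `-p` flag).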