RichardErkhov committed
Commit 5d136fa
1 Parent(s): 2e14461

uploaded readme
Files changed (1): README.md added (+89 lines)
Quantization made by Richard Erkhov.

[Github](https://github.com/RichardErkhov)

[Discord](https://discord.gg/pvy7H8DZMG)

[Request more models](https://github.com/RichardErkhov/quant_request)


llama2-13b-dpo-v3 - GGUF
- Model creator: https://huggingface.co/mncai/
- Original model: https://huggingface.co/mncai/llama2-13b-dpo-v3/

| Name | Quant method | Size |
| ---- | ---- | ---- |
| [llama2-13b-dpo-v3.Q2_K.gguf](https://huggingface.co/RichardErkhov/mncai_-_llama2-13b-dpo-v3-gguf/blob/main/llama2-13b-dpo-v3.Q2_K.gguf) | Q2_K | 4.6GB |
| [llama2-13b-dpo-v3.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/mncai_-_llama2-13b-dpo-v3-gguf/blob/main/llama2-13b-dpo-v3.IQ3_XS.gguf) | IQ3_XS | 5.08GB |
| [llama2-13b-dpo-v3.IQ3_S.gguf](https://huggingface.co/RichardErkhov/mncai_-_llama2-13b-dpo-v3-gguf/blob/main/llama2-13b-dpo-v3.IQ3_S.gguf) | IQ3_S | 5.36GB |
| [llama2-13b-dpo-v3.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/mncai_-_llama2-13b-dpo-v3-gguf/blob/main/llama2-13b-dpo-v3.Q3_K_S.gguf) | Q3_K_S | 5.36GB |
| [llama2-13b-dpo-v3.IQ3_M.gguf](https://huggingface.co/RichardErkhov/mncai_-_llama2-13b-dpo-v3-gguf/blob/main/llama2-13b-dpo-v3.IQ3_M.gguf) | IQ3_M | 5.66GB |
| [llama2-13b-dpo-v3.Q3_K.gguf](https://huggingface.co/RichardErkhov/mncai_-_llama2-13b-dpo-v3-gguf/blob/main/llama2-13b-dpo-v3.Q3_K.gguf) | Q3_K | 5.99GB |
| [llama2-13b-dpo-v3.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/mncai_-_llama2-13b-dpo-v3-gguf/blob/main/llama2-13b-dpo-v3.Q3_K_M.gguf) | Q3_K_M | 5.99GB |
| [llama2-13b-dpo-v3.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/mncai_-_llama2-13b-dpo-v3-gguf/blob/main/llama2-13b-dpo-v3.Q3_K_L.gguf) | Q3_K_L | 6.54GB |
| [llama2-13b-dpo-v3.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/mncai_-_llama2-13b-dpo-v3-gguf/blob/main/llama2-13b-dpo-v3.IQ4_XS.gguf) | IQ4_XS | 6.63GB |
| [llama2-13b-dpo-v3.Q4_0.gguf](https://huggingface.co/RichardErkhov/mncai_-_llama2-13b-dpo-v3-gguf/blob/main/llama2-13b-dpo-v3.Q4_0.gguf) | Q4_0 | 6.95GB |
| [llama2-13b-dpo-v3.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/mncai_-_llama2-13b-dpo-v3-gguf/blob/main/llama2-13b-dpo-v3.IQ4_NL.gguf) | IQ4_NL | 7.0GB |
| [llama2-13b-dpo-v3.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/mncai_-_llama2-13b-dpo-v3-gguf/blob/main/llama2-13b-dpo-v3.Q4_K_S.gguf) | Q4_K_S | 7.01GB |
| [llama2-13b-dpo-v3.Q4_K.gguf](https://huggingface.co/RichardErkhov/mncai_-_llama2-13b-dpo-v3-gguf/blob/main/llama2-13b-dpo-v3.Q4_K.gguf) | Q4_K | 7.42GB |
| [llama2-13b-dpo-v3.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/mncai_-_llama2-13b-dpo-v3-gguf/blob/main/llama2-13b-dpo-v3.Q4_K_M.gguf) | Q4_K_M | 7.42GB |
| [llama2-13b-dpo-v3.Q4_1.gguf](https://huggingface.co/RichardErkhov/mncai_-_llama2-13b-dpo-v3-gguf/blob/main/llama2-13b-dpo-v3.Q4_1.gguf) | Q4_1 | 7.71GB |
| [llama2-13b-dpo-v3.Q5_0.gguf](https://huggingface.co/RichardErkhov/mncai_-_llama2-13b-dpo-v3-gguf/blob/main/llama2-13b-dpo-v3.Q5_0.gguf) | Q5_0 | 8.46GB |
| [llama2-13b-dpo-v3.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/mncai_-_llama2-13b-dpo-v3-gguf/blob/main/llama2-13b-dpo-v3.Q5_K_S.gguf) | Q5_K_S | 8.46GB |
| [llama2-13b-dpo-v3.Q5_K.gguf](https://huggingface.co/RichardErkhov/mncai_-_llama2-13b-dpo-v3-gguf/blob/main/llama2-13b-dpo-v3.Q5_K.gguf) | Q5_K | 8.7GB |
| [llama2-13b-dpo-v3.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/mncai_-_llama2-13b-dpo-v3-gguf/blob/main/llama2-13b-dpo-v3.Q5_K_M.gguf) | Q5_K_M | 8.7GB |
| [llama2-13b-dpo-v3.Q5_1.gguf](https://huggingface.co/RichardErkhov/mncai_-_llama2-13b-dpo-v3-gguf/blob/main/llama2-13b-dpo-v3.Q5_1.gguf) | Q5_1 | 9.21GB |
| [llama2-13b-dpo-v3.Q6_K.gguf](https://huggingface.co/RichardErkhov/mncai_-_llama2-13b-dpo-v3-gguf/blob/main/llama2-13b-dpo-v3.Q6_K.gguf) | Q6_K | 10.06GB |
| [llama2-13b-dpo-v3.Q8_0.gguf](https://huggingface.co/RichardErkhov/mncai_-_llama2-13b-dpo-v3-gguf/blob/main/llama2-13b-dpo-v3.Q8_0.gguf) | Q8_0 | 13.03GB |

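The snippet below is a minimal sketch of how one of the files above could be fetched and run locally. It assumes the `huggingface_hub` and `llama-cpp-python` packages (neither is mentioned elsewhere in this card), and the Q4_K_M file is an arbitrary choice.

```python
# Minimal sketch (assumption: `pip install huggingface_hub llama-cpp-python`).
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Download one of the quantized files listed above; Q4_K_M is chosen arbitrarily.
gguf_path = hf_hub_download(
    repo_id="RichardErkhov/mncai_-_llama2-13b-dpo-v3-gguf",
    filename="llama2-13b-dpo-v3.Q4_K_M.gguf",
)

# Load the GGUF file and generate with the prompt format from the original model card.
llm = Llama(model_path=gguf_path, n_ctx=2048)
out = llm(
    "<|user|>\nWhat is the capital of South Korea?\n<|assistant|>\n",
    max_tokens=256,
    stop=["<|user|>"],
)
print(out["choices"][0]["text"])
```
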
Original model description:
---
license: cc-by-nc-sa-4.0
language:
- en
- ko
---
# Model Card for llama2-dpo-v3

### Introduction of MindsAndCompany

https://mnc.ai/

We develop a diverse range of AI models and craft solutions tailored for business applications. In the realm of generative AI, our product development includes the Code Assistant, the TOD Chatbot, and LLMOps. We are also actively working on the development of Enterprise AGI (Artificial General Intelligence).

### Model Summary
Based on beomi/llama-2-koen-13b, instruction tuned and aligned with DPO (Direct Preference Optimization).

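For context, DPO (Rafailov et al., 2023) fine-tunes the policy directly on preference pairs instead of training a separate reward model; the standard objective is sketched below. The preference data, reference model, and $\beta$ used for this particular model are not documented in this card.

$$
\mathcal{L}_{\mathrm{DPO}}(\pi_\theta;\pi_{\mathrm{ref}}) = -\,\mathbb{E}_{(x,\,y_w,\,y_l)\sim\mathcal{D}}\left[\log \sigma\!\left(\beta \log \frac{\pi_\theta(y_w \mid x)}{\pi_{\mathrm{ref}}(y_w \mid x)} - \beta \log \frac{\pi_\theta(y_l \mid x)}{\pi_{\mathrm{ref}}(y_l \mid x)}\right)\right]
$$

Here $y_w$ and $y_l$ are the preferred and rejected responses for prompt $x$, and $\pi_{\mathrm{ref}}$ is the frozen starting checkpoint.
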
### How to Use
Here are some examples of how to use our model.

```python
import torch
import transformers
from transformers import AutoTokenizer

hf_model = 'mncai/llama2-13b-dpo-v3'
# Prompt format: "<|user|>\n...\n<|assistant|>\n". The example asks (in Korean):
# "There are two spheres with diameters 1 and 2; how many times do their volumes differ? Please explain as well."
message = "<|user|>\n두 개의 구가 있는데 각각 지름이 1, 2일때 구의 부피는 몇배 차이가 나지? 설명도 같이 해줘.\n<|assistant|>\n"

# Load the tokenizer and build a text-generation pipeline for the model.
tokenizer = AutoTokenizer.from_pretrained(hf_model)
pipeline = transformers.pipeline(
    "text-generation",
    model=hf_model,
    torch_dtype=torch.float16,
    device_map="auto",
)

sequences = pipeline(
    message,
    do_sample=True,
    top_k=10,
    num_return_sequences=1,
    eos_token_id=tokenizer.eos_token_id,
    max_length=2048,
)
for seq in sequences:
    print(f"Result: {seq['generated_text']}")
```

### LICENSE
Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International Public License, under the LLAMA 2 COMMUNITY LICENSE AGREEMENT.

### Contact
If you have any questions, please raise an issue or contact us at dwmyoung@mnc.ai.