Quantization made by Richard Erkhov.

[Github](https://github.com/RichardErkhov) | [Discord](https://discord.gg/pvy7H8DZMG) | [Request more models](https://github.com/RichardErkhov/quant_request)

llama2-13b-dpo-v3 - GGUF

- Model creator: https://huggingface.co/mncai/
- Original model: https://huggingface.co/mncai/llama2-13b-dpo-v3/

| Name | Quant method | Size |
| ---- | ---- | ---- |
| [llama2-13b-dpo-v3.Q2_K.gguf](https://huggingface.co/RichardErkhov/mncai_-_llama2-13b-dpo-v3-gguf/blob/main/llama2-13b-dpo-v3.Q2_K.gguf) | Q2_K | 4.6GB |
| [llama2-13b-dpo-v3.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/mncai_-_llama2-13b-dpo-v3-gguf/blob/main/llama2-13b-dpo-v3.IQ3_XS.gguf) | IQ3_XS | 5.08GB |
| [llama2-13b-dpo-v3.IQ3_S.gguf](https://huggingface.co/RichardErkhov/mncai_-_llama2-13b-dpo-v3-gguf/blob/main/llama2-13b-dpo-v3.IQ3_S.gguf) | IQ3_S | 5.36GB |
| [llama2-13b-dpo-v3.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/mncai_-_llama2-13b-dpo-v3-gguf/blob/main/llama2-13b-dpo-v3.Q3_K_S.gguf) | Q3_K_S | 5.36GB |
| [llama2-13b-dpo-v3.IQ3_M.gguf](https://huggingface.co/RichardErkhov/mncai_-_llama2-13b-dpo-v3-gguf/blob/main/llama2-13b-dpo-v3.IQ3_M.gguf) | IQ3_M | 5.66GB |
| [llama2-13b-dpo-v3.Q3_K.gguf](https://huggingface.co/RichardErkhov/mncai_-_llama2-13b-dpo-v3-gguf/blob/main/llama2-13b-dpo-v3.Q3_K.gguf) | Q3_K | 5.99GB |
| [llama2-13b-dpo-v3.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/mncai_-_llama2-13b-dpo-v3-gguf/blob/main/llama2-13b-dpo-v3.Q3_K_M.gguf) | Q3_K_M | 5.99GB |
| [llama2-13b-dpo-v3.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/mncai_-_llama2-13b-dpo-v3-gguf/blob/main/llama2-13b-dpo-v3.Q3_K_L.gguf) | Q3_K_L | 6.54GB |
| [llama2-13b-dpo-v3.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/mncai_-_llama2-13b-dpo-v3-gguf/blob/main/llama2-13b-dpo-v3.IQ4_XS.gguf) | IQ4_XS | 6.63GB |
| [llama2-13b-dpo-v3.Q4_0.gguf](https://huggingface.co/RichardErkhov/mncai_-_llama2-13b-dpo-v3-gguf/blob/main/llama2-13b-dpo-v3.Q4_0.gguf) | Q4_0 | 6.95GB |
| [llama2-13b-dpo-v3.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/mncai_-_llama2-13b-dpo-v3-gguf/blob/main/llama2-13b-dpo-v3.IQ4_NL.gguf) | IQ4_NL | 7.0GB |
| [llama2-13b-dpo-v3.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/mncai_-_llama2-13b-dpo-v3-gguf/blob/main/llama2-13b-dpo-v3.Q4_K_S.gguf) | Q4_K_S | 7.01GB |
| [llama2-13b-dpo-v3.Q4_K.gguf](https://huggingface.co/RichardErkhov/mncai_-_llama2-13b-dpo-v3-gguf/blob/main/llama2-13b-dpo-v3.Q4_K.gguf) | Q4_K | 7.42GB |
| [llama2-13b-dpo-v3.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/mncai_-_llama2-13b-dpo-v3-gguf/blob/main/llama2-13b-dpo-v3.Q4_K_M.gguf) | Q4_K_M | 7.42GB |
| [llama2-13b-dpo-v3.Q4_1.gguf](https://huggingface.co/RichardErkhov/mncai_-_llama2-13b-dpo-v3-gguf/blob/main/llama2-13b-dpo-v3.Q4_1.gguf) | Q4_1 | 7.71GB |
| [llama2-13b-dpo-v3.Q5_0.gguf](https://huggingface.co/RichardErkhov/mncai_-_llama2-13b-dpo-v3-gguf/blob/main/llama2-13b-dpo-v3.Q5_0.gguf) | Q5_0 | 8.46GB |
| [llama2-13b-dpo-v3.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/mncai_-_llama2-13b-dpo-v3-gguf/blob/main/llama2-13b-dpo-v3.Q5_K_S.gguf) | Q5_K_S | 8.46GB |
| [llama2-13b-dpo-v3.Q5_K.gguf](https://huggingface.co/RichardErkhov/mncai_-_llama2-13b-dpo-v3-gguf/blob/main/llama2-13b-dpo-v3.Q5_K.gguf) | Q5_K | 8.7GB |
| [llama2-13b-dpo-v3.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/mncai_-_llama2-13b-dpo-v3-gguf/blob/main/llama2-13b-dpo-v3.Q5_K_M.gguf) | Q5_K_M | 8.7GB |
| [llama2-13b-dpo-v3.Q5_1.gguf](https://huggingface.co/RichardErkhov/mncai_-_llama2-13b-dpo-v3-gguf/blob/main/llama2-13b-dpo-v3.Q5_1.gguf) | Q5_1 | 9.21GB |
| [llama2-13b-dpo-v3.Q6_K.gguf](https://huggingface.co/RichardErkhov/mncai_-_llama2-13b-dpo-v3-gguf/blob/main/llama2-13b-dpo-v3.Q6_K.gguf) | Q6_K | 10.06GB |
| [llama2-13b-dpo-v3.Q8_0.gguf](https://huggingface.co/RichardErkhov/mncai_-_llama2-13b-dpo-v3-gguf/blob/main/llama2-13b-dpo-v3.Q8_0.gguf) | Q8_0 | 13.03GB |

Original model description:
---
license: cc-by-nc-sa-4.0
language:
- en
- ko
---

# Model Card for llama2-dpo-v3

### Introduction of MindsAndCompany

https://mnc.ai/

We develop a diverse range of AI models and craft solutions tailored for business applications. In the realm of generative AI, our products include the Code Assistant, the TOD Chatbot, and LLMOps. We are also actively working on the development of Enterprise AGI (Artificial General Intelligence).

### Model Summary

Based on beomi/llama-2-koen-13b, instruction-tuned and aligned with DPO (Direct Preference Optimization).

### How to Use

Here are some examples of how to use our model.

```python
import torch
import transformers
from transformers import AutoTokenizer

hf_model = 'mncai/llama2-13b-dpo-v3'

# Build the tokenizer and text-generation pipeline for the model.
tokenizer = AutoTokenizer.from_pretrained(hf_model)
pipeline = transformers.pipeline(
    "text-generation",
    model=hf_model,
    tokenizer=tokenizer,
    torch_dtype=torch.float16,
    device_map="auto",
)

# Korean prompt: "There are two spheres with diameters 1 and 2. How many times
# do their volumes differ? Please explain as well."
message = "<|user|>\n두 개의 구가 있는데 각각 지름이 1, 2일때 구의 부피는 몇배 차이가 나지? 설명도 같이 해줘.\n<|assistant|>\n"

sequences = pipeline(
    message,
    do_sample=True,
    top_k=10,
    num_return_sequences=1,
    eos_token_id=tokenizer.eos_token_id,
    max_length=2048,
)
for seq in sequences:
    print(f"Result: {seq['generated_text']}")
```

### LICENSE

Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International Public License, under the LLAMA 2 COMMUNITY LICENSE AGREEMENT.

### Contact

If you have any questions, please raise an issue or contact us at dwmyoung@mnc.ai.
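### Choosing a quant

With this many quant files, a small helper can pick the largest one that fits a given memory budget. This is a hypothetical sketch, not part of the original model card: the sizes are copied from the table above, and actual memory use at inference time will be somewhat higher once context buffers are allocated, so leave headroom.

```python
from typing import Optional

# File sizes in GB, taken from the quant table above.
QUANT_SIZES_GB = {
    "Q2_K": 4.6, "IQ3_XS": 5.08, "IQ3_S": 5.36, "Q3_K_S": 5.36,
    "IQ3_M": 5.66, "Q3_K": 5.99, "Q3_K_M": 5.99, "Q3_K_L": 6.54,
    "IQ4_XS": 6.63, "Q4_0": 6.95, "IQ4_NL": 7.0, "Q4_K_S": 7.01,
    "Q4_K": 7.42, "Q4_K_M": 7.42, "Q4_1": 7.71, "Q5_0": 8.46,
    "Q5_K_S": 8.46, "Q5_K": 8.7, "Q5_K_M": 8.7, "Q5_1": 9.21,
    "Q6_K": 10.06, "Q8_0": 13.03,
}

def pick_quant(budget_gb: float) -> Optional[str]:
    """Return the name of the largest quant file that fits in budget_gb, or None."""
    fitting = {name: size for name, size in QUANT_SIZES_GB.items()
               if size <= budget_gb}
    if not fitting:
        return None
    return max(fitting, key=fitting.get)

print(pick_quant(8.0))   # Q4_1 is the largest file under 8GB
print(pick_quant(16.0))  # Q8_0 fits comfortably
```

As a rule of thumb, higher-bit quants (Q5/Q6/Q8) preserve more of the original model's quality, while Q2/Q3 trade quality for a smaller footprint.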