
Quantization made by Richard Erkhov.

Github

Discord

Request more models

llama2-13b-dpo-v4 - GGUF

Original model description:

license: cc-by-nc-sa-4.0
language:
- en
- ko

Model Card for llama2-13b-dpo-v4

Introduction of MindsAndCompany

https://mnc.ai/

We create various AI models and develop solutions that can be applied to businesses. In generative AI, we are developing products such as Code Assistant, TOD Chatbot, and LLMOps, and we are in the process of developing Enterprise AGI (Artificial General Intelligence).

Model Summary

Based on beomi/llama-2-koen-13b, instruction-tuned and aligned with DPO (Direct Preference Optimization).

How to Use

Here are some examples of how to use our model.

from transformers import AutoTokenizer
import transformers
import torch

hf_model = 'mncai/llama2-13b-dpo-v4'

# Prompt in the model's chat format. The Korean question asks how many times
# larger a sphere of diameter 2 is in volume than a sphere of diameter 1,
# and requests an explanation.
message = "<|user|>\n두 개의 구가 있는데 각각 지름이 1, 2일때 구의 부피는 몇배 차이가 나지? 설명도 같이 해줘.\n<|assistant|>\n"

# Build the tokenizer and a text-generation pipeline for the model.
tokenizer = AutoTokenizer.from_pretrained(hf_model)
pipeline = transformers.pipeline(
    "text-generation",
    model=hf_model,
    tokenizer=tokenizer,
    torch_dtype=torch.float16,
    device_map="auto",
)

sequences = pipeline(
    message,
    do_sample=True,
    top_k=10,
    num_return_sequences=1,
    eos_token_id=tokenizer.eos_token_id,
    max_length=2048,
)
for seq in sequences:
    print(f"Result: {seq['generated_text']}")
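
Because this repository distributes GGUF quantizations, the model can also be run without transformers, for example with llama-cpp-python. The following is a minimal sketch: the .gguf filename is a placeholder assumption and should be replaced with the quantization file you actually download from this repository.

# Minimal sketch using llama-cpp-python (pip install llama-cpp-python).
# The model_path below is a placeholder, not a confirmed filename in this repo.
from llama_cpp import Llama

llm = Llama(
    model_path="llama2-13b-dpo-v4.Q4_K_M.gguf",  # placeholder: use the file you downloaded
    n_ctx=2048,
)

# Use the same <|user|> / <|assistant|> prompt format as above.
prompt = "<|user|>\nHello, please introduce yourself.\n<|assistant|>\n"
output = llm(prompt, max_tokens=512, temperature=0.7)
print(output["choices"][0]["text"])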

LICENSE

Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International Public License, subject to the LLAMA 2 COMMUNITY LICENSE AGREEMENT.

Contact

If you have any questions, please raise an issue or contact us at dwmyoung@mnc.ai.

GGUF

Available quantizations: 2-bit, 3-bit, 4-bit, 5-bit, 6-bit, 8-bit
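
To fetch one of these quantization files locally, huggingface_hub can be used. This is a minimal sketch; the repo_id and filename below are assumptions and should be checked against the repository's actual file list.

# Hedged sketch: download a single quantization file with huggingface_hub.
# Both repo_id and filename are assumptions; verify them on the repository page.
from huggingface_hub import hf_hub_download

local_path = hf_hub_download(
    repo_id="RichardErkhov/mncai_-_llama2-13b-dpo-v4-gguf",  # assumed repo id
    filename="llama2-13b-dpo-v4.Q4_K_M.gguf",                # assumed filename
)
print(local_path)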
