Quantization made by Richard Erkhov.
agiin-11.1B-v0.0 - GGUF
- Model creator: https://huggingface.co/mncai/
- Original model: https://huggingface.co/mncai/agiin-11.1B-v0.0/
| Name | Quant method | Size |
| ---- | ------------ | ---- |
| agiin-11.1B-v0.0.Q2_K.gguf | Q2_K | 3.88GB |
| agiin-11.1B-v0.0.IQ3_XS.gguf | IQ3_XS | 4.31GB |
| agiin-11.1B-v0.0.IQ3_S.gguf | IQ3_S | 4.54GB |
| agiin-11.1B-v0.0.Q3_K_S.gguf | Q3_K_S | 4.52GB |
| agiin-11.1B-v0.0.IQ3_M.gguf | IQ3_M | 4.69GB |
| agiin-11.1B-v0.0.Q3_K.gguf | Q3_K | 5.03GB |
| agiin-11.1B-v0.0.Q3_K_M.gguf | Q3_K_M | 5.03GB |
| agiin-11.1B-v0.0.Q3_K_L.gguf | Q3_K_L | 5.48GB |
| agiin-11.1B-v0.0.IQ4_XS.gguf | IQ4_XS | 5.64GB |
| agiin-11.1B-v0.0.Q4_0.gguf | Q4_0 | 5.88GB |
| agiin-11.1B-v0.0.IQ4_NL.gguf | IQ4_NL | 5.95GB |
| agiin-11.1B-v0.0.Q4_K_S.gguf | Q4_K_S | 5.93GB |
| agiin-11.1B-v0.0.Q4_K.gguf | Q4_K | 6.26GB |
| agiin-11.1B-v0.0.Q4_K_M.gguf | Q4_K_M | 6.26GB |
| agiin-11.1B-v0.0.Q4_1.gguf | Q4_1 | 6.53GB |
| agiin-11.1B-v0.0.Q5_0.gguf | Q5_0 | 7.17GB |
| agiin-11.1B-v0.0.Q5_K_S.gguf | Q5_K_S | 7.17GB |
| agiin-11.1B-v0.0.Q5_K.gguf | Q5_K | 7.36GB |
| agiin-11.1B-v0.0.Q5_K_M.gguf | Q5_K_M | 7.36GB |
| agiin-11.1B-v0.0.Q5_1.gguf | Q5_1 | 7.81GB |
| agiin-11.1B-v0.0.Q6_K.gguf | Q6_K | 8.53GB |
| agiin-11.1B-v0.0.Q8_0.gguf | Q8_0 | 11.05GB |
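As a rough sanity check on the table above, the effective bits per weight implied by each file size can be estimated from the model's ~11.1B parameters. This is an approximation only: GGUF files also carry metadata, some tensors are stored at higher precision, and the table does not state whether sizes are GB or GiB (GiB is assumed here).

```python
# Rough bits-per-weight estimate from the file sizes in the table above.
# Assumptions: ~11.1B parameters, sizes interpreted as GiB.
PARAMS = 11.1e9

sizes_gib = {  # a few representative entries from the table
    "Q2_K": 3.88,
    "Q4_K_M": 6.26,
    "Q8_0": 11.05,
}

for name, gib in sizes_gib.items():
    bits_per_weight = gib * (1024 ** 3) * 8 / PARAMS
    print(f"{name}: ~{bits_per_weight:.2f} bits/weight")
```

The Q8_0 file works out to roughly 8.5 bits/weight and Q2_K to roughly 3, which is the expected ballpark for those quantization schemes.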
Original model description:

license: apache-2.0
language:
- en
Model Card for mncai/agiin-11.1B-v0.0
Introduction of MindsAndCompany
We create various AI models and develop solutions that can be applied to businesses. In generative AI, we are building products such as a Code Assistant, a TOD (task-oriented dialogue) chatbot, and LLMOps tooling, and we are working toward Enterprise AGI (Artificial General Intelligence).
Model Summary
Based on the Mistral architecture; pretrained, then instruction-tuned and aligned with DPO (Direct Preference Optimization).
How to Use
Here is an example of how to use our model.
```python
from transformers import AutoTokenizer, pipeline
import torch

hf_model = 'mncai/agiin-11.1B-v0.0'

# The original snippet used `pipeline` and `tokenizer` without defining
# them; they are constructed here so the example runs as written.
tokenizer = AutoTokenizer.from_pretrained(hf_model)
pipe = pipeline(
    "text-generation",
    model=hf_model,
    tokenizer=tokenizer,
    torch_dtype=torch.float16,
    device_map="auto",
)

# Prompt translated from the original Korean: "There are two spheres with
# diameters 1 and 2. How many times larger is the volume of one sphere
# than the other? Please explain."
message = "<|user|>\nThere are two spheres with diameters 1 and 2. How many times larger is the volume of one sphere than the other? Please explain.\n<|assistant|>\n"

sequences = pipe(
    message,
    do_sample=True,
    top_k=10,
    num_return_sequences=1,
    eos_token_id=tokenizer.eos_token_id,
    max_length=2048,
)
for seq in sequences:
    print(f"Result: {seq['generated_text']}")
```
Contact
If you have any questions, please raise an issue or contact us at dwmyoung@mnc.ai.