
Quantization made by Richard Erkhov.

Github | Discord | Request more models

ko-solar-10.7b-v0.7 - GGUF

Original model description:

library_name: transformers
license: apache-2.0
language:
- ko

Usage


from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

path = "mssma/ko-solar-10.7b-v0.7"

# Load the original model in half precision and place it automatically across available devices
model = AutoModelForCausalLM.from_pretrained(
    path,
    return_dict=True,
    torch_dtype=torch.float16,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained(path)
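
Once the model and tokenizer are loaded, generation works as with any other causal language model. A minimal sketch follows; the Korean prompt and the sampling parameters are illustrative choices, not values from the original card:

# Illustrative prompt ("Hello, please introduce yourself."); replace with your own input
prompt = "안녕하세요, 자기소개를 해주세요."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

# Generate up to 128 new tokens; sampling settings here are assumptions, tune as needed
outputs = model.generate(**inputs, max_new_tokens=128, do_sample=True, temperature=0.7)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))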
Format: GGUF
Model size: 10.9B params
Architecture: llama

Available quantizations: 2-bit, 3-bit, 4-bit, 5-bit, 6-bit, 8-bit
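
The GGUF files can be run with llama.cpp-compatible tooling. Below is a minimal sketch using llama-cpp-python; the quantization filename is an assumption for illustration, so check the repository's file list for the exact name before downloading:

from llama_cpp import Llama

# Path to a downloaded GGUF file; the filename below is hypothetical, pick one from the repo
llm = Llama(
    model_path="ko-solar-10.7b-v0.7.Q4_K_M.gguf",
    n_ctx=4096,        # context window size (assumed value)
    n_gpu_layers=-1,   # offload all layers to GPU if one is available
)

# Illustrative prompt ("Hello, please introduce yourself.")
output = llm("안녕하세요, 자기소개를 해주세요.", max_tokens=128)
print(output["choices"][0]["text"])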
