
Quantization made by Richard Erkhov.


AISquare-Instruct-llama2-koen-13b-v0.9.24 - GGUF

Original model description:

language:
- en
pipeline_tag: text-generation
license: cc-by-nc-4.0

AISquare-Instruct-llama2-koen-13b-v0.9.24

Model Details

Developed by the Inswave Systems UI Platform Team

Method
Trained using the DPO (direct preference optimization) and SFT (supervised fine-tuning) methods.
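The card does not publish the training configuration, but the core of DPO can be illustrated independently of it. The sketch below is a minimal, self-contained implementation of the per-pair DPO loss in plain Python; the log-probability values and `beta=0.1` are illustrative assumptions, not numbers taken from this model's training run.

```python
import math

def dpo_loss(policy_chosen_logp, policy_rejected_logp,
             ref_chosen_logp, ref_rejected_logp, beta=0.1):
    """DPO loss for one preference pair.

    Each argument is the summed log-probability of a whole response
    under either the trainable policy or the frozen reference model.
    beta scales how strongly the policy is pushed away from the
    reference ranking.
    """
    chosen_ratio = policy_chosen_logp - ref_chosen_logp
    rejected_ratio = policy_rejected_logp - ref_rejected_logp
    margin = beta * (chosen_ratio - rejected_ratio)
    # -log(sigmoid(margin)) rewritten as log1p(exp(-margin)) for stability
    return math.log1p(math.exp(-margin))

# If the policy favors the chosen response more than the reference
# does, the margin is positive and the loss drops below log(2).
loss = dpo_loss(-10.0, -14.0, -11.0, -13.0)
```

In a real run this loss would be averaged over a batch of preference pairs and backpropagated through the policy model only, with the reference model kept frozen.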

Hardware
We trained the model on a single node with 4× A100 GPUs.

Base Model beomi/llama2-koen-13b

Implementation Code

from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

repo = "inswave/AISquare-Instruct-llama2-koen-13b-v0.9.24"

# Load the weights in fp16 and let device_map="auto" place them on the
# available GPU(s), spilling to CPU if needed.
model = AutoModelForCausalLM.from_pretrained(
    repo,
    return_dict=True,
    torch_dtype=torch.float16,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained(repo)

GGUF
Model size: 13.2B params
Architecture: llama
Available quantizations: 2-bit, 3-bit, 4-bit, 5-bit, 6-bit, 8-bit
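To pick a quantization level, a rough file-size estimate helps: multiply the parameter count by the bits per weight. The sketch below does this arithmetic for the levels listed above; it is a lower-bound ballpark, since real GGUF files are somewhat larger due to mixed-precision layers and metadata, and the exact size varies by quant variant (e.g. Q4_K_M vs Q4_0).

```python
# 13.2B parameters, as listed on this card.
PARAMS = 13.2e9

def approx_size_gb(bits_per_weight):
    """Approximate GGUF file size in GB: params * bits / 8, ignoring overhead."""
    return PARAMS * bits_per_weight / 8 / 1e9

for bits in (2, 3, 4, 5, 6, 8):
    print(f"{bits}-bit: ~{approx_size_gb(bits):.1f} GB")
```

For example, the 4-bit files come out to roughly 6.6 GB before overhead, which is why 4-bit and 5-bit quants are common choices for running a 13B model on consumer GPUs.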
