
Quantization made by Richard Erkhov.

  • GitHub
  • Discord
  • Request more models

mistral-ko-7b-tech - GGUF

Original model description:

language:
- ko
pipeline_tag: text-generation
tags:
- finetune
license: other

Model Card for mistral-ko-7b-tech

mistral-ko-7b-tech is a fine-tuned version of the Mistral-7B model, trained on a Korean dataset.

Model Details

  • Model Developers : shleeeee (Seunghyeon Lee), oopsung (Sungwoo Park)
  • Repository : To be added
  • Model Architecture : mistral-ko-7b-tech is a fine-tuned version of Mistral-7B-v0.1.
  • LoRA target modules : q_proj, k_proj, v_proj, o_proj, gate_proj (see the sketch after this list)
  • train_batch : 4
  • Max_step : 500
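
As a rough sketch of how these details could map onto a PEFT LoRA configuration; the rank and alpha values are illustrative assumptions, while the target modules, batch size, and step count come from the list above:

from peft import LoraConfig
from transformers import TrainingArguments

# Target modules, batch size, and max steps are taken from the card;
# r and lora_alpha are assumptions for illustration only.
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj", "gate_proj"],
    task_type="CAUSAL_LM",
)

training_args = TrainingArguments(
    output_dir="./mistral-ko-7b-tech",
    per_device_train_batch_size=4,  # train_batch from the card
    max_steps=500,                  # Max_step from the card
)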

Dataset

Korean custom dataset (2,000 examples)

Prompt template: Mistral

<s>[INST]{instruction}[/INST]{output}</s>
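
For illustration, a small Python helper (hypothetical, not part of the original card) that fills this template:

# Hypothetical helper for the Mistral prompt template above.
def build_prompt(instruction: str, output: str = "") -> str:
    # The trailing </s> is only needed when packing a completed
    # example for training; for inference, stop after [/INST].
    return f"<s>[INST]{instruction}[/INST]{output}</s>"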

Usage

# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("shleeeee/mistral-ko-7b-tech")
model = AutoModelForCausalLM.from_pretrained("shleeeee/mistral-ko-7b-tech")

# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="shleeeee/mistral-ko-7b-tech")
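
For completeness, a hedged generation example; the prompt text and sampling settings below are illustrative assumptions, not values from the original card:

# Illustrative generation call; sampling settings are assumptions.
# The prompt asks (in Korean) how to check disk usage on Linux.
prompt = "<s>[INST]리눅스에서 디스크 사용량을 확인하는 방법을 알려주세요.[/INST]"
outputs = pipe(prompt, max_new_tokens=256, do_sample=True, temperature=0.7)
print(outputs[0]["generated_text"])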

Evaluation

Evaluation results are reported as a benchmark chart in the original model card.

GGUF

  • Model size : 7.24B params
  • Architecture : llama
  • Available quantizations : 2-bit, 3-bit, 4-bit, 5-bit, 6-bit, 8-bit
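
To run one of these quantized files locally, here is a minimal sketch using llama-cpp-python; the filename is a hypothetical example, so substitute the quantization you actually downloaded:

from llama_cpp import Llama

# The GGUF filename below is an assumption; use the file you downloaded.
llm = Llama(model_path="mistral-ko-7b-tech.Q4_K_M.gguf", n_ctx=4096)

# The BOS token is added automatically, so the prompt starts at [INST].
response = llm("[INST]안녕하세요, 간단히 자기소개를 해주세요.[/INST]", max_tokens=256)
print(response["choices"][0]["text"])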
