
# Haary/haryra-7b-mistral-id

*(Merak image)*

Haary/haryra-7b-mistral-id is a QLoRA-quantized version of Ichsan2895/Merak-7B-v4.

## Install the necessary packages

Requires Transformers installed from source (only needed for versions <= v4.34):

```
# Install transformers from source - only needed for versions <= v4.34
!pip install git+https://github.com/huggingface/transformers.git
!pip install accelerate
```
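
If you also want to load the model with 4-bit (QLoRA-style) quantization, as sketched after the example below, you will most likely need the bitsandbytes package as well; this extra dependency is an assumption, not something the original card lists.

```
# Assumed extra dependency for 4-bit loading with transformers
!pip install bitsandbytes
```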

## Example Python code

```python
import torch
from transformers import pipeline

pipe = pipeline("text-generation", model="Haary/haryra-7b-mistral-id", torch_dtype=torch.bfloat16, device_map="auto")

# We use the tokenizer's chat template to format each message - see https://huggingface.co/docs/transformers/main/en/chat_templating
messages = [
    {
        "role": "system",
        "content": "Anda adalah chatbot ramah yang selalu merespons dengan singkat dan jelas",
    },
    {"role": "user", "content": "Apa bedanya antara raspberry pi dan esp32?"},
]
prompt = pipe.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
outputs = pipe(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
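
Because the card describes this model as a QLoRA-quantized version of Merak-7B-v4, it can also be convenient to load it in 4-bit with bitsandbytes rather than in bfloat16 as above. The snippet below is a minimal sketch under that assumption; the model id is the same, but the BitsAndBytesConfig settings (NF4, bfloat16 compute dtype) and the single-user-message prompt are illustrative choices, not taken from the original card.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "Haary/haryra-7b-mistral-id"

# Illustrative 4-bit (NF4) setup, similar to a typical QLoRA configuration
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=bnb_config,
    device_map="auto",
)

messages = [
    {"role": "user", "content": "Apa bedanya antara raspberry pi dan esp32?"},
]
# apply_chat_template returns the tokenized prompt as a tensor here
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
outputs = model.generate(inputs, max_new_tokens=256, do_sample=True, temperature=0.7, top_p=0.95)
# Keep only the tokens generated after the prompt
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```

Slicing the output at inputs.shape[-1] keeps only the newly generated tokens, so the printed text is just the model's reply; the pipeline example above instead returns the formatted prompt together with the completion.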

## Credits

Ichsan2895/Merak-7B-v4 for the base model.

Image source: pixabay.com
