
Run the model

Instruction format

The template used to build a prompt for this Instruct model is defined as follows:

### USER:
{instruction1}
### RESPONSE:
{response1}
### USER:
{instruction2}
### RESPONSE:
{response2}
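The multi-turn template above can be built programmatically. The sketch below is a hypothetical helper (not part of the model repo) that concatenates earlier (instruction, response) pairs and ends with an open RESPONSE block for the model to complete:

```python
def make_chat_prompt(turns, next_instruction):
    """Build a multi-turn prompt in the ### USER: / ### RESPONSE: format.

    turns: list of (instruction, response) pairs from earlier exchanges.
    next_instruction: the new user message the model should answer.
    """
    parts = []
    for instruction, response in turns:
        # Completed turns include the model's earlier response.
        parts.append(f"### USER:\n{instruction}\n### RESPONSE:\n{response}\n")
    # The final turn is left open so the model generates the response.
    parts.append(f"### USER:\n{next_instruction}\n### RESPONSE:\n")
    return "".join(parts)
```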

Run the model with the transformers library:

from transformers import AutoModelForCausalLM, AutoTokenizer
import torch
model_id = "tktung/MultiSV_Mixtral-8x7B-v0.1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id,
                                             device_map="auto",
                                             torch_dtype=torch.float16  # optional: load in 16-bit precision to reduce memory usage
                                            )
model.eval()

def make_prompt(instruction):
    return f"""### USER:
{instruction}
### RESPONSE:
"""

user_input = "Känner du till WARA M&L?"
input_prompt = make_prompt(user_input)
# Move inputs to the same device the model was placed on by device_map="auto"
input_ids = tokenizer(input_prompt, return_tensors="pt")["input_ids"].to(model.device)
generated_token_ids = model.generate(
    inputs=input_ids,
    max_new_tokens=100,
    do_sample=True,
    temperature=0.6,
    top_p=1,
)[0]
generated_text = tokenizer.decode(generated_token_ids)
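Note that `generate` returns the prompt tokens followed by the new tokens, so the decoded text echoes the prompt. A small post-processing helper (a sketch, not part of the model repo) can strip the prompt and truncate at the next turn marker:

```python
def extract_response(full_text, prompt):
    """Return only the model's answer from decoded generation output.

    Strips the echoed prompt prefix, then cuts at the next '### USER:'
    marker in case the model starts hallucinating a new turn.
    """
    answer = full_text[len(prompt):] if full_text.startswith(prompt) else full_text
    return answer.split("### USER:")[0].strip()
```

Usage: `answer = extract_response(generated_text, input_prompt)`.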

Retrieval Augmented Generation

The model was trained with the following prompt format for RAG:

Vietnamese (the instruction translates to "Use the following context to answer the question at the end:"; "Câu hỏi" means "Question"):

### USER:
Sử dụng ngữ cảnh sau để trả lời câu hỏi ở cuối:
{context}
Câu hỏi: {human_prompt}
### RESPONSE:

Swedish (the instruction translates to "Use the following context to answer the question:"; "Fråga" means "Question"):

### USER:
Använd följande sammanhang för att svara på frågan:
{context}
Fråga: {human_prompt}
### RESPONSE:
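The two RAG formats above differ only in the instruction and question label, so they can be filled in with one helper. The function below is a hypothetical sketch that reuses the templates verbatim:

```python
# Instruction line and question label for each supported language,
# copied from the RAG prompt formats above.
RAG_TEMPLATES = {
    "vi": ("Sử dụng ngữ cảnh sau để trả lời câu hỏi ở cuối:", "Câu hỏi"),
    "sv": ("Använd följande sammanhang för att svara på frågan:", "Fråga"),
}

def make_rag_prompt(context, question, lang="sv"):
    """Build a RAG prompt in the format the model was trained with."""
    intro, q_label = RAG_TEMPLATES[lang]
    return (
        f"### USER:\n{intro}\n{context}\n{q_label}: {question}\n### RESPONSE:\n"
    )
```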
Model size: 46.7B parameters, stored as Safetensors in BF16.