
Model Card for vpgits/Mistral-7B-v0.1-qagen-v2.1-AWQ

vpgits/Mistral-7B-v0.1-qagen-v2.1-AWQ is an experimental, AWQ-quantized model that uses an LLM to generate multiple-choice questions (MCQs) in JSON format from source text alone.


Uses

Generation of MCQ questions as JSON from a given source text.
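The exact JSON schema is determined by the model's fine-tuning data and is not documented in this card; purely as a hypothetical illustration (all field names are assumptions, not the model's confirmed schema), a generated item might look like:

# Hypothetical example only -- the model's real schema may differ
mcq_example = {
    "question": "What is the boiling point of water at sea level?",
    "options": ["90 °C", "100 °C", "110 °C", "120 °C"],
    "answer": "100 °C",
}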

How to Get Started with the Model

Use the code below to get started with the model.

from awq import AutoAWQForCausalLM
from transformers import AutoTokenizer

model_id = "vpgits/Mistral-7B-v0.1-qagen-v2.1-AWQ"

# Load the AWQ-quantized weights and the matching tokenizer
model = AutoAWQForCausalLM.from_quantized(model_id, fuse_layers=True)
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)

# Source text to turn into an MCQ; replace with your own prompt
eval_prompt = "Your source text here"

# Preprocess the input text and move it to the GPU
tokens = tokenizer(eval_prompt, return_tensors="pt").input_ids.cuda()

# Generate the JSON-formatted output
generation_output = model.generate(tokens, max_new_tokens=512)
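
Since the model is intended to emit JSON, the completion can be decoded and parsed. A minimal sketch, assuming the model returns a single well-formed JSON object after the prompt:

import json

# Decode only the newly generated tokens, skipping the prompt
completion = tokenizer.decode(
    generation_output[0][tokens.shape[1]:],
    skip_special_tokens=True,
)

try:
    mcq = json.loads(completion)  # the MCQ as a Python dict, if parsing succeeds
    print(mcq)
except json.JSONDecodeError:
    # The model may emit surrounding text; inspect the raw completion instead
    print(completion)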