# Model Card for vpgits/Mistral-7B-v0.1-qagen-v2.1-AWQ
vpgits/Mistral-7B-v0.1-qagen-v2.1-AWQ is an experimental model that uses an LLM to generate multiple-choice questions (MCQs) in JSON format from source text alone.
## Model Description
- Developed by: vpgits
- Language(s) (NLP): English
- License: MIT
- Finetuned from model: mistralai/Mistral-7B-Instruct-v0.1
## Model Sources
- Repository: click here
- Demo: click here
## Uses
Generates MCQ questions as JSON from a provided source text.
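The exact instruction wording used during fine-tuning is not documented here, so the prompt template below is an assumption: the base model is Mistral-7B-Instruct-v0.1, which wraps the user turn in `[INST] ... [/INST]` tags. A minimal sketch of building a prompt from source text:

```python
# Hypothetical prompt construction; the instruction wording is an assumption,
# since this card does not document the fine-tuning prompt format.
source_text = (
    "The mitochondrion is an organelle that produces most of the cell's "
    "supply of ATP, used as a source of chemical energy."
)

# Mistral-7B-Instruct models expect the user turn wrapped in [INST] ... [/INST].
eval_prompt = (
    "<s>[INST] Generate a multiple-choice question in JSON format "
    f"from the following text:\n{source_text} [/INST]"
)
```

The resulting `eval_prompt` string is what gets tokenized in the example below.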
## How to Get Started with the Model
Use the code below to get started with the model.
```python
from awq import AutoAWQForCausalLM
from transformers import AutoTokenizer

# Load the AWQ-quantized checkpoint; from_quantized is the AutoAWQ entry point
# for pre-quantized models (from_pretrained expects an unquantized model).
model = AutoAWQForCausalLM.from_quantized(
    "vpgits/Mistral-7B-v0.1-qagen-v2.1-AWQ", fuse_layers=True
)
tokenizer = AutoTokenizer.from_pretrained(
    "vpgits/Mistral-7B-v0.1-qagen-v2.1-AWQ", trust_remote_code=True
)

# Tokenize the input prompt and move the token IDs to the GPU
tokens = tokenizer(text=eval_prompt, return_tensors="pt").input_ids.cuda()

# Generate output
generation_output = model.generate(tokens, max_new_tokens=512)

# Decode the generated tokens back to text
output_text = tokenizer.decode(generation_output[0], skip_special_tokens=True)
```
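The generated text contains the MCQ as a JSON object. This card does not specify the output schema, so the field names below (`question`, `options`, `answer`) are assumptions used only to illustrate extracting and parsing the JSON from the decoded output:

```python
import json

# Stand-in for the decoded generation; the actual schema produced by the
# model is not documented on this card, so these field names are assumed.
output_text = (
    '{"question": "What does the mitochondrion produce?", '
    '"options": ["ATP", "DNA", "Glucose", "Proteins"], '
    '"answer": "ATP"}'
)

# Extract the first JSON object embedded in the generated text and parse it.
start = output_text.find("{")
end = output_text.rfind("}") + 1
mcq = json.loads(output_text[start:end])

print(mcq["question"])  # question stem
print(mcq["options"])   # candidate answers
```

Slicing between the first `{` and last `}` is a simple way to tolerate any extra text the model emits around the JSON object.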