
Model Card for BaoHuynh2002/finetuned_Qwen2_7b_mt_history_mcqas_v1

The model "BaoHuynh2002/finetuned_Qwen2_7b_mt_history_mcqas_v1" is designed to answer and explain history multiple-choice questions (MCQs). It leverages a fine-tuned version of the Qwen-2-7b model, optimized specifically for history-related MCQs covering grades 10, 11, and 12. The model is capable of not only selecting the correct answer but also providing detailed explanations derived from reliable sources, such as high school history textbooks.

Model Details

  • Base Model: Qwen2-7B
  • Fine-tuning Dataset: BaoHuynh2002/11k_History_MCQAs_gen_Explain
  • Parameters: 7.62 billion (FP16 safetensors)

Model Description

This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.

  • Finetuned by: Bao Huynh, Duc Tho, and Viet Sang
  • Model type: Causal (decoder-only) language model
  • Language(s) (NLP): Vietnamese
  • Finetuned from model: Qwen2-7B

Direct Use

The model is intended to be used directly for answering history multiple-choice questions and providing detailed explanations; a minimal loading sketch follows the list below. It can be used in a variety of educational contexts, such as:

  • Students: High school students can use the model to assist with homework, study for exams, and deepen their understanding of historical events.
  • Teachers: Educators can use the model to generate practice questions, explanations, and additional study materials for their students.
  • Educational Platforms: Developers of educational apps and websites can integrate the model to provide interactive learning experiences and instant feedback for users.
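A minimal loading sketch (untested; assumes the standard 🤗 transformers causal-LM APIs and enough GPU memory for a 7B FP16 model):

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "BaoHuynh2002/finetuned_Qwen2_7b_mt_history_mcqas_v1"

# Load the tokenizer and the fine-tuned weights from the Hub.
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # the checkpoint is stored in FP16
    device_map="auto",          # requires accelerate; place on CPU/GPU manually otherwise
)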

Prompt format:

def formatting_prompt(example):
    # Question, then the answer options (one per line), then an "### ANSWER:" cue for the model to complete.
    text = "### QUESTION: {}\n{}\n### ANSWER:".format(example['question'], '\n'.join(example['answers']))
    return text
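
Continuing the loading sketch above, a hypothetical end-to-end call (the example question and the generation settings are illustrative, not taken from the training data):

# Hypothetical MCQ in the dataset's format: a question plus lettered options.
example = {
    "question": "Ai là vị vua sáng lập triều Nguyễn?",  # "Which king founded the Nguyen dynasty?"
    "answers": ["A. Gia Long", "B. Minh Mạng", "C. Tự Đức", "D. Bảo Đại"],
}

# Format the prompt, generate, and decode only the newly generated tokens.
inputs = tokenizer(formatting_prompt(example), return_tensors="pt").to(model.device)
output_ids = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(output_ids[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))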