---
license: mit
datasets:
- prabinpanta0/Rep00Zon
language:
- en
metrics:
- accuracy
pipeline_tag: question-answering
tags:
- general_knowledge
- Question_Answers
---
# ZenGQ - BERT for Question Answering

This is a BERT model fine-tuned for extractive question answering, trained on the Rep00Zon dataset.
## Model Details
- Model: BERT-base-uncased
- Task: Question Answering
- Dataset: Rep00Zon
## Usage

### Load the model
```python
from transformers import AutoTokenizer, AutoModelForQuestionAnswering, pipeline

# Load the tokenizer and model from Hugging Face
tokenizer = AutoTokenizer.from_pretrained("prabinpanta0/ZenGQ")
model = AutoModelForQuestionAnswering.from_pretrained("prabinpanta0/ZenGQ")

# Create a pipeline for question answering
qa_pipeline = pipeline("question-answering", model=model, tokenizer=tokenizer)

# Define your contexts and questions
contexts = [
    "Berlin is the capital of Germany.",
    "Paris is the capital of France.",
    "Madrid is the capital of Spain.",
]
questions = [
    "What is the capital of Germany?",
    "Which city is the capital of France?",
    "What is the capital of Spain?",
]

# Get answers
for context, question in zip(contexts, questions):
    result = qa_pipeline(question=question, context=context)
    print(f"Question: {question}")
    print(f"Answer: {result['answer']}\n")
```
## Training Details
- Epochs: 3
- Training loss per epoch: 2.050335, 1.345047, 1.204442
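For reference, a fine-tuning setup along these lines could look like the sketch below. Apart from the 3 epochs reported above, the hyperparameters, output directory, and preprocessed `train_dataset` are assumptions, not the exact script used for this model.

```python
# A minimal sketch of a fine-tuning loop with transformers' Trainer.
# Assumption: `train_dataset` already contains tokenized QA examples with
# `input_ids`, `attention_mask`, `start_positions`, and `end_positions`.
from transformers import (
    AutoModelForQuestionAnswering,
    AutoTokenizer,
    Trainer,
    TrainingArguments,
    default_data_collator,
)

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForQuestionAnswering.from_pretrained("bert-base-uncased")

training_args = TrainingArguments(
    output_dir="zengq-qa",           # hypothetical output directory
    num_train_epochs=3,              # matches the 3 epochs reported above
    per_device_train_batch_size=16,  # assumed, not from this model card
    learning_rate=3e-5,              # assumed, not from this model card
)

trainer = Trainer(
    model=model,
    args=training_args,
    train_dataset=train_dataset,     # assumed preprocessed QA dataset
    data_collator=default_data_collator,
    tokenizer=tokenizer,
)
trainer.train()
```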
## Tokenization
```python
text = "Berlin is the capital of Germany. Paris is the capital of France. Madrid is the capital of Spain."
tokens = tokenizer.tokenize(text)
print(tokens)
```
Output:

```
['berlin', 'is', 'the', 'capital', 'of', 'germany', '.', 'paris', 'is', 'the', 'capital', 'of', 'france', '.', 'madrid', 'is', 'the', 'capital', 'of', 'spain', '.']
```
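To run the model without the pipeline, the same tokenizer can encode a question/context pair into tensors and the answer span can be decoded from the start/end logits (a minimal sketch, reusing the `tokenizer` and `model` loaded in the Usage section):

```python
import torch

# Encode a question/context pair into model inputs
inputs = tokenizer(
    "What is the capital of Germany?",
    "Berlin is the capital of Germany. Paris is the capital of France.",
    return_tensors="pt",
)

with torch.no_grad():
    outputs = model(**inputs)

# Pick the most likely start/end token positions and decode the answer span
start = outputs.start_logits.argmax()
end = outputs.end_logits.argmax()
answer = tokenizer.decode(inputs["input_ids"][0][start : end + 1])
print(answer)
```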
## Dataset
The model was trained on the Rep00Zon dataset.
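To inspect the training data, the dataset should be loadable from the Hub with the `datasets` library (a minimal sketch; the split and column names are assumptions):

```python
from datasets import load_dataset

# Load the Rep00Zon dataset from the Hugging Face Hub
dataset = load_dataset("prabinpanta0/Rep00Zon")
print(dataset)              # shows the available splits and columns
print(dataset["train"][0])  # first example, assuming a "train" split exists
```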
## License
This model is licensed under the MIT License.