---
license: mit
datasets:
- prabinpanta0/Rep00Zon
language:
- en
metrics:
- accuracy
pipeline_tag: question-answering
tags:
- general_knowledge
- Question_Answers
---

# ZenGQ - BERT for Question Answering

This is a BERT model fine-tuned for extractive question answering, trained on the custom [Rep00Zon](https://huggingface.co/datasets/prabinpanta0/Rep00Zon) dataset.

## Model Details

- **Base model:** bert-base-uncased
- **Task:** Question Answering
- **Dataset:** [Rep00Zon](https://huggingface.co/datasets/prabinpanta0/Rep00Zon)

## Usage

### Load the model

```python
from transformers import AutoTokenizer, AutoModelForQuestionAnswering, pipeline

# Load the tokenizer and model from Hugging Face
tokenizer = AutoTokenizer.from_pretrained("prabinpanta0/ZenGQ")
model = AutoModelForQuestionAnswering.from_pretrained("prabinpanta0/ZenGQ")

# Create a pipeline for question answering
qa_pipeline = pipeline("question-answering", model=model, tokenizer=tokenizer)

# Define your context and questions
contexts = [
    "Berlin is the capital of Germany.",
    "Paris is the capital of France.",
    "Madrid is the capital of Spain.",
]
questions = [
    "What is the capital of Germany?",
    "Which city is the capital of France?",
    "What is the capital of Spain?"
]

# Get answers
for context, question in zip(contexts, questions):
    result = qa_pipeline(question=question, context=context)
    print(f"Question: {question}")
    print(f"Answer: {result['answer']}\n")
```
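
The pipeline can also be created directly from the model id, which downloads the model and tokenizer in one step:

```python
from transformers import pipeline

qa_pipeline = pipeline("question-answering", model="prabinpanta0/ZenGQ")

result = qa_pipeline(
    question="What is the capital of Germany?",
    context="Berlin is the capital of Germany.",
)
print(result["answer"], result["score"])  # answer text and confidence score
```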

### Training Details
- Epochs: 3
- Training loss per epoch: 2.050335, 1.345047, 1.204442
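
The exact training script is not part of this card; the sketch below shows one compatible way to fine-tune bert-base-uncased for extractive QA with the `Trainer` API. The toy dataset, batch size, and learning rate are illustrative placeholders; only the epoch count matches the numbers above.

```python
import torch
from torch.utils.data import Dataset
from transformers import (AutoTokenizer, AutoModelForQuestionAnswering,
                          Trainer, TrainingArguments)

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForQuestionAnswering.from_pretrained("bert-base-uncased")

class ToyQADataset(Dataset):
    """Stand-in for the preprocessed Rep00Zon examples (illustrative)."""
    def __init__(self):
        self.examples = [("What is the capital of Germany?",
                          "Berlin is the capital of Germany.", "Berlin")]

    def __len__(self):
        return len(self.examples)

    def __getitem__(self, idx):
        question, context, answer = self.examples[idx]
        enc = tokenizer(question, context, truncation=True,
                        padding="max_length", max_length=64)
        start_char = context.index(answer)
        end_char = start_char + len(answer) - 1
        item = {k: torch.tensor(v) for k, v in enc.items()}
        # Map character offsets in the context to token positions;
        # sequence_index=1 selects the context segment, not the question.
        item["start_positions"] = torch.tensor(
            enc.char_to_token(start_char, sequence_index=1))
        item["end_positions"] = torch.tensor(
            enc.char_to_token(end_char, sequence_index=1))
        return item

args = TrainingArguments(
    output_dir="zengq-qa",
    num_train_epochs=3,              # matches the epoch count above
    per_device_train_batch_size=8,   # assumed value
    learning_rate=3e-5,              # assumed value
)
Trainer(model=model, args=args, train_dataset=ToyQADataset()).train()
```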

### Tokenization
```python
# Reuses the tokenizer loaded in the Usage section above
text = "Berlin is the capital of Germany. Paris is the capital of France. Madrid is the capital of Spain."
tokens = tokenizer.tokenize(text)
print(tokens)
```
*Output:*
```text
['berlin', 'is', 'the', 'capital', 'of', 'germany', '.', 'paris', 'is', 'the', 'capital', 'of', 'france', '.', 'madrid', 'is', 'the', 'capital', 'of', 'spain', '.']
```
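
For question answering, the tokenizer is typically called on a (question, context) pair rather than a single string. The illustrative snippet below shows how the two segments are joined with `[SEP]` and distinguished by `token_type_ids`:

```python
enc = tokenizer("What is the capital of Germany?",
                "Berlin is the capital of Germany.")
print(tokenizer.convert_ids_to_tokens(enc["input_ids"]))
# ['[CLS]', 'what', ..., '?', '[SEP]', 'berlin', ..., '.', '[SEP]']
print(enc["token_type_ids"])  # 0 for the question segment, 1 for the context
```
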
### Dataset
The model was trained on the [Rep00Zon](https://huggingface.co/datasets/prabinpanta0/Rep00Zon) dataset.

### License
This model is licensed under the MIT License.