prabinpanta0 committed
Commit
92986d8
verified
1 Parent(s): a6cde4d

Update README.md

Files changed (1)
  1. README.md +24 -8
README.md CHANGED
@@ -29,18 +29,34 @@ This is a fine-tuned BERT model for question answering tasks, trained on a custo
 ```python
 from transformers import AutoTokenizer, AutoModelForQuestionAnswering, pipeline
 
-tokenizer = AutoTokenizer.from_pretrained("prabinpanta0/ZenGQ")
-model = AutoModelForQuestionAnswering.from_pretrained("prabinpanta0/ZenGQ")
+# Load a pretrained tokenizer and model from Hugging Face
+tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased-distilled-squad")
+model = AutoModelForQuestionAnswering.from_pretrained("distilbert-base-uncased-distilled-squad")
 
+# Create a pipeline for question answering
 qa_pipeline = pipeline("question-answering", model=model, tokenizer=tokenizer)
 
-context = "Berlin is the capital of Germany."
-question = "What is the capital of Germany?"
-
-result = qa_pipeline(question=question, context=context)
-print(f"Answer: {result['answer']}")
+# Define your context and questions
+context = "Berlin is the capital of Germany. Paris is the capital of France. Madrid is the capital of Spain."
+questions = [
+    "What is the capital of Germany?",
+    "Which city is the capital of France?",
+    "What is the capital of Spain?"
+]
+
+# Get answers
+for question in questions:
+    result = qa_pipeline(question=question, context=context)
+    print(f"Question: {question}")
+    print(f"Answer: {result['answer']}\n")
 ```
 
 ### Training Details
 - Epochs: 3
-- Training Loss: 2.050335, 1.345047, 1.204442
+- Training Loss: 2.050335, 1.345047, 1.204442
+
+### Dataset
+The model was trained on the [Rep00Zon](https://huggingface.co/datasets/prabinpanta0/Rep00Zon) dataset.
+
+### License
+This model is licensed under the MIT License.
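The question loop added in this commit can be smoke-tested without downloading model weights by swapping a trivial stand-in for the transformers pipeline. The `toy_qa` function below is a hypothetical word-overlap stub, not part of transformers; it only mimics the pipeline's `{"answer": ...}` return shape so the loop structure runs as written:

```python
def toy_qa(question, context):
    """Hypothetical stand-in for the question-answering pipeline:
    returns the sentence of `context` sharing the most words with the question."""
    sentences = [s.strip() for s in context.split(".") if s.strip()]
    q_words = set(question.lower().rstrip("?").split())
    best = max(sentences, key=lambda s: len(q_words & set(s.lower().split())))
    return {"answer": best}

# Same context and questions as the updated README snippet
context = "Berlin is the capital of Germany. Paris is the capital of France. Madrid is the capital of Spain."
questions = [
    "What is the capital of Germany?",
    "Which city is the capital of France?",
    "What is the capital of Spain?",
]

for question in questions:
    result = toy_qa(question=question, context=context)
    print(f"Question: {question}")
    print(f"Answer: {result['answer']}\n")
```

Substituting the real `qa_pipeline` for `toy_qa` yields extractive span answers (e.g. "Berlin") rather than whole sentences.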