hosseinhimself committed
Commit 912ab00 • Parent(s): d4f82bf
Update README.md

README.md CHANGED
@@ -47,17 +47,29 @@ The model achieved the following results:
 To use this model for question-answering tasks, load it with the `transformers` library:
 
 ```python
-from transformers import
+from transformers import AutoTokenizer, AutoModelForQuestionAnswering, pipeline
 
-
-
-
+model = "hosseinhimself/tara-roberta-base-fa-qa"
+
+# Load the tokenizer and model
+tokenizer = AutoTokenizer.from_pretrained(model)
+model = AutoModelForQuestionAnswering.from_pretrained(model)
+
+# Create a QA pipeline
+qa_pipeline = pipeline("question-answering", model=model, tokenizer=tokenizer)
 
 # Example usage
-
-
-
-
+context = "CONTEXT"
+
+question = "QUESTION"
+
+# Modify the pipeline to return more answers
+results = qa_pipeline(question=question, context=context, top_k=5) # top_k specifies the number of answers
+
+# Display the answers
+for idx, result in enumerate(results):
+    print(f"Answer {idx+1}: {result['answer']}")
+
 ```
 
 ## Datasets
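A note on what the snippet added in this commit returns: with `top_k=5`, the `question-answering` pipeline yields a list of answer dicts, each carrying `answer`, `score`, `start`, and `end`. The sketch below is a lightly reworked version of the committed code, not part of the commit itself: the `model_name` variable and the score printout are additions here, and the context/question strings remain the README's placeholders.

```python
# Sketch: same pipeline as the README snippet, also surfacing confidence scores.
from transformers import AutoTokenizer, AutoModelForQuestionAnswering, pipeline

model_name = "hosseinhimself/tara-roberta-base-fa-qa"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForQuestionAnswering.from_pretrained(model_name)
qa_pipeline = pipeline("question-answering", model=model, tokenizer=tokenizer)

# With top_k=5 the pipeline returns a list of up to five answer dicts.
results = qa_pipeline(question="QUESTION", context="CONTEXT", top_k=5)

for idx, result in enumerate(results):
    # "score" is the model's confidence; "start"/"end" (not printed here)
    # are character offsets of the answer span within the context.
    print(f"Answer {idx+1}: {result['answer']} (score={result['score']:.4f})")
```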