Update README.md
README.md CHANGED

````diff
@@ -29,7 +29,7 @@ tokenizer = transformers.AutoTokenizer.from_pretrained("kleinay/qanom-seq2seq-mo
 ```
 
 However, the model fine-tuning procedure involves input preprocessing (marking the predicate in the sentence, T5's "task prefix", incorporating the predicate type and/or the verbal form of the nominalization) and output postprocessing (parsing the sequence into a list of QASRL-formatted QAs).
 
-In order to use the model for QANom parsing easily, we suggest downloading the `pipeline.py` file from this repository, and then use the `QASRL_Pipeline` class:
+In order to use the model for QANom parsing easily, we suggest downloading the [`pipeline.py`](https://huggingface.co/kleinay/qanom-seq2seq-model-baseline/blob/main/pipeline.py) file from this repository, and then use the `QASRL_Pipeline` class:
 ```python
 from pipeline import QASRL_Pipeline
````
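For illustration, the input preprocessing ("marking the predicate in the sentence") and output postprocessing ("parsing the sequence into a list of QASRL-formatted QAs") that `pipeline.py` encapsulates could be sketched as below. Note that the `<predicate>` marker and the separator tokens here are assumptions for this sketch, not the model's documented output format:

```python
# Hypothetical sketch of the pre/post-processing steps described above.
# The marker and separator strings are illustrative assumptions, not the
# exact format used by kleinay/qanom-seq2seq-model-baseline.

def mark_predicate(tokens, predicate_index, marker="<predicate>"):
    """Insert a marker token before the predicate word in the sentence."""
    return " ".join(tokens[:predicate_index] + [marker] + tokens[predicate_index:])

def parse_qas(output_seq, qa_sep=" | ", q_end="?"):
    """Split a raw seq2seq output string into (question, answer) pairs."""
    qas = []
    for chunk in output_seq.split(qa_sep):
        if q_end in chunk:
            question, answer = chunk.split(q_end, 1)
            qas.append((question.strip() + q_end, answer.strip()))
    return qas

print(mark_predicate("The team announced a delay".split(), 2))
# → The team <predicate> announced a delay
print(parse_qas("who announced something? The team | what was announced? a delay"))
# → [('who announced something?', 'The team'), ('what was announced?', 'a delay')]
```

The `QASRL_Pipeline` class bundles steps like these around the seq2seq model call, which is why the README recommends it over invoking the tokenizer and model directly.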