Yu (Hope) Hou committed
Commit 3759572 • 1 parent: 3595547
add log info and clean up text

src/about.py (+5 −2) CHANGED
@@ -75,13 +75,16 @@ In our Adversarial Calibration QA task, we evaluate the QA model's reliability o
 
 ## FAQ
 What if my system type is not specified here or not supported yet?
-- Please
+- Please send us an email so we can check how to adapt the leaderboard for your purpose. Thanks!
 
 I don't understand where I could start to build a QA system for submission.
 - Please check our submission tutorials. From there, you could fine-tune or do anything on top of the base models.
 
 I want to use API-based QA systems for submission, like GPT-4. What should I do?
 - We don't support API-based models yet, but you could train your model with the GPT cache we provide: https://github.com/Pinafore/nlp-hw/tree/master/models.
+
+I have no idea why my model is not working. Could you help me?
+- Yes! After your model submission is evaluated, you can check the first few example details, with how scores are calculated, [here](https://huggingface.co/datasets/umdclip/qanta_leaderboard_logs)!
 """
 
 EVALUATION_QUEUE_TEXT = """
@@ -110,7 +113,7 @@ qa_pipe(question="Where is UMD?", context="UMD is in Maryland.")
 
 (4) `Precision` by default is `float16`. You can update it as needed.
 
-(5)
+(5) You can leave the `Retrieved dataset name` and `Retriever model` fields empty, as we provide context for your extractive QA model. Let us know by email if you want to use your own context or retriever!
 
 Here is a tutorial on how you could make pipe wrappers for submissions: [Colab](https://colab.research.google.com/drive/1bCt2870SdY6tI4uE3JPG8_3nLmNJXX6_?usp=sharing)
 """
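The `qa_pipe(question=..., context=...)` call shape referenced in the second hunk can be sketched with a minimal stand-in wrapper. The `DummyQAPipe` class and its last-word heuristic below are illustrative assumptions, not the leaderboard's actual model or the real transformers pipeline:

```python
# Minimal sketch of a "pipe wrapper" exposing the qa_pipe(question=..., context=...)
# call signature used in the submission tutorial. DummyQAPipe and its last-word
# heuristic are illustrative assumptions, not the real transformers pipeline.
class DummyQAPipe:
    def __call__(self, question, context):
        # Toy extractive "answer": the final word of the context,
        # returned in the dict shape transformers' QA pipeline uses.
        answer = context.rstrip(".").split()[-1]
        return {"answer": answer, "score": 1.0}

qa_pipe = DummyQAPipe()
result = qa_pipe(question="Where is UMD?", context="UMD is in Maryland.")
print(result["answer"])  # prints "Maryland"
```

A real submission would replace `DummyQAPipe` with a wrapper around a fine-tuned extractive QA model, keeping the same `question`/`context` keyword interface.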