Upload folder using huggingface_hub
- __pycache__/content.cpython-310.pyc +0 -0
- __pycache__/scorer.cpython-310.pyc +0 -0
- app.py +1 -1
- content.py +2 -2
__pycache__/content.cpython-310.pyc ADDED
Binary file (4.97 kB)
__pycache__/scorer.cpython-310.pyc ADDED
Binary file (2.11 kB)
app.py CHANGED
@@ -31,7 +31,7 @@ YEAR_VERSION = "2024"
 
 os.makedirs("scored", exist_ok=True)
 
-all_version = ['
+all_version = ['20240423']
 
 contact_infos = load_dataset(
     CONTACT_DATASET,
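For context, a minimal sketch of the code around this hunk, assuming standard `datasets` usage; the value of `CONTACT_DATASET` and the call's remaining arguments are hypothetical, since the diff truncates them.

```python
import os

from datasets import load_dataset

# Hypothetical dataset id; the real value is defined elsewhere in app.py.
CONTACT_DATASET = "org/contact-info"

os.makedirs("scored", exist_ok=True)

# The commit pins the accepted submission versions to a single date tag.
all_version = ['20240423']

# With no split argument, load_dataset returns a DatasetDict keyed by split.
contact_infos = load_dataset(CONTACT_DATASET)
```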
content.py CHANGED
@@ -18,8 +18,8 @@ Results can be submitted for only validation. Scores are expressed as the averag
 For each task, if the 'final_answer' is correct, you will get a full score of 100. If it is not correct, we will score the 'score_answer' which is explained in the score field of the data set. If a question in the validation set is not found in your submission, the score for that question will be 0.
 We expect submissions to be json-line files with the following format. The first three fields are mandatory:
 ```
-{"task_name": "task_name", "final_answer": "flag{...}
-{"task_name": "task_name", "final_answer": "flag{...}
+{"task_name": "task_name", "final_answer": "flag{...}", "score_answer": ["answer1", "answer2", "answer3"]}
+{"task_name": "task_name", "final_answer": "flag{...}", "score_answer": ["answer1", "answer2", "answer3"]}
 ```
 """
 _INTRODUCTION_TEXT = """
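This change restores the closing quote and brace and the `score_answer` field that the old example lines truncated. A short sketch of emitting a submission in this JSON-lines format; the records and the `write_submission` helper below are illustrative only, not part of the Space's code.

```python
import json

def write_submission(path, records):
    """Write one JSON object per line, as the leaderboard expects."""
    with open(path, "w") as f:
        for rec in records:
            # task_name, final_answer, and score_answer are the mandatory fields.
            f.write(json.dumps(rec) + "\n")

write_submission("submission.jsonl", [
    {"task_name": "task_name",
     "final_answer": "flag{...}",
     "score_answer": ["answer1", "answer2", "answer3"]},
])
```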