Commit 4b64f4d by lewtun (HF staff)
Parent: ffda6b4

Add baseline example, fix Git commands
README.md CHANGED
@@ -10,16 +10,25 @@ This repository can be used to generate a template so you can submit your predic

## Quickstart

- ### 1. Create an account and repository on the Hugging Face Hub
+ ### 1. Create an account on the Hugging Face Hub

First create an account on the Hugging Face Hub and you can sign up [here](https://huggingface.co/join) if you haven't already!

### 2. Create a template repository on your machine

- The next step is to create a template repository on your local machine that contains various files and a CLI to help you validate and submit your predictions. You can run the following commands to create the repository:
+ The next step is to create a template repository on your local machine that contains various files and a CLI to help you validate and submit your predictions. The Hugging Face Hub uses [Git Large File Storage (LFS)](https://git-lfs.github.com) to manage large files, so first install it if you don't have it already. For example, on macOS you can run:

```bash
- # Install the following libraries if you don't have them
+ brew install git-lfs
+ git lfs install
+ ```
+
+ Next, run the following commands to create the repository. We recommend creating a Python virtual environment for the project, e.g. with Anaconda:
+
+ ```bash
+ # Create and activate a virtual environment
+ conda create -n raft python=3.8 && conda activate raft
+ # Install the following libraries
pip install cookiecutter huggingface-hub==0.0.16
# Create the template repository
cookiecutter git+https://huggingface.co/datasets/ought/raft-submission
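
If you would rather script this step than run the CLI, cookiecutter also exposes a Python entry point; a minimal sketch, assuming the same template URL as above:

```python
# A minimal sketch using cookiecutter's Python API rather than the CLI;
# the template URL is the same one passed on the command line above.
from cookiecutter.main import cookiecutter

cookiecutter("git+https://huggingface.co/datasets/ought/raft-submission")
```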
@@ -86,13 +95,11 @@ my-raft-submission

### 3. Install the dependencies

- The final step is to install the project's dependencies. We recommend creating a Python virtual environment for the project, e.g. with Anaconda:
+ The final step is to install the project's dependencies:

```bash
# Navigate to the template repository
cd my-raft-submissions
- # Create and activate a virtual environment
- conda create -n raft python=3.8 && conda activate raft
# Install dependencies
python -m pip install -r requirements.txt
```
@@ -124,7 +131,31 @@ For each task in RAFT, you should create a CSV file called `predictions.csv` wit
* ID (int)
* Label (string)

- See the dummy predictions in the `data` folder for examples with the expected format. Each `predictions.csv` file should be stored in the task's subfolder in `data` and at the end you should have something like the following:
+ See the dummy predictions in the `data` folder for examples with the expected format. Here is a simple example that creates a majority-class baseline:
+
+ ```python
+ from pathlib import Path
+ import pandas as pd
+ from collections import Counter
+ from datasets import load_dataset, get_dataset_config_names
+
+ tasks = get_dataset_config_names("ought/raft")
+
+ for task in tasks:
+     # Load dataset
+     raft_subset = load_dataset("ought/raft", task)
+     # Compute majority class over training set
+     counter = Counter(raft_subset["train"]["Label"])
+     majority_class = counter.most_common(1)[0][0]
+     # Load predictions file
+     preds = pd.read_csv(f"data/{task}/predictions.csv")
+     # Convert the majority-class label ID to its name and assign it to every row
+     preds["Label"] = raft_subset["train"].features["Label"].int2str(majority_class)
+     # Save predictions
+     preds.to_csv(f"data/{task}/predictions.csv", index=False)
+ ```
+
+ As you can see in the example, each `predictions.csv` file should be stored in the task's subfolder in `data` and at the end you should have something like the following:

```
data
 
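With the baseline written, it can be worth sanity-checking the layout before submitting. A small sketch, not part of the official CLI; the column and dtype checks are assumptions based on the format described above:

```python
from pathlib import Path

import pandas as pd

# Walk every task subfolder in data/ and verify that its predictions.csv
# has the expected columns: an integer ID and a string Label.
for csv_path in sorted(Path("data").glob("*/predictions.csv")):
    preds = pd.read_csv(csv_path)
    assert list(preds.columns) == ["ID", "Label"], f"unexpected columns in {csv_path}"
    assert preds["ID"].dtype.kind == "i", f"non-integer IDs in {csv_path}"
    print(f"{csv_path}: {len(preds)} rows look OK")
```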
{{cookiecutter.repo_name}}/README.md CHANGED
@@ -29,7 +29,31 @@ For each task in RAFT, you should create a CSV file called `predictions.csv` wit
* ID (int)
* Label (string)

- See the dummy predictions in the `data` folder for examples with the expected format. Each `predictions.csv` file should be stored in the task's subfolder in `data` and at the end you should have something like the following:
+ See the dummy predictions in the `data` folder for examples with the expected format. Here is a simple example that creates a majority-class baseline:
+
+ ```python
+ from pathlib import Path
+ import pandas as pd
+ from collections import Counter
+ from datasets import load_dataset, get_dataset_config_names
+
+ tasks = get_dataset_config_names("ought/raft")
+
+ for task in tasks:
+     # Load dataset
+     raft_subset = load_dataset("ought/raft", task)
+     # Compute majority class over training set
+     counter = Counter(raft_subset["train"]["Label"])
+     majority_class = counter.most_common(1)[0][0]
+     # Load predictions file
+     preds = pd.read_csv(f"data/{task}/predictions.csv")
+     # Convert the majority-class label ID to its name and assign it to every row
+     preds["Label"] = raft_subset["train"].features["Label"].int2str(majority_class)
+     # Save predictions
+     preds.to_csv(f"data/{task}/predictions.csv", index=False)
+ ```
+
+ As you can see in the example, each `predictions.csv` file should be stored in the task's subfolder in `data` and at the end you should have something like the following:

```
data
{{cookiecutter.repo_name}}/cli.py CHANGED
@@ -70,9 +70,8 @@ def validate():

@app.command()
def submit():
-     # TODO(lewtun): Replace with subprocess.run and only add the exact files we need for evaluation
    subprocess.call("git pull origin main".split())
-     subprocess.call(["git", "add", "data"])
+     subprocess.call(["git", "add", "*predictions.csv"])
    subprocess.call(["git", "commit", "-m", "Submission"])
    subprocess.call(["git", "push"])
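
The removed TODO pointed at a further cleanup: use `subprocess.run` and stage only the evaluation files. A possible sketch of that follow-up, assuming the same pathspec; it is not part of this commit:

```python
import subprocess

def submit():
    # check=True raises CalledProcessError if any Git step fails, so a
    # failed pull is not silently followed by a commit and push.
    subprocess.run(["git", "pull", "origin", "main"], check=True)
    # With a list argv there is no shell, so the pathspec needs no quoting;
    # Git expands *predictions.csv against the repository itself.
    subprocess.run(["git", "add", "*predictions.csv"], check=True)
    subprocess.run(["git", "commit", "-m", "Submission"], check=True)
    subprocess.run(["git", "push"], check=True)
```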