DagimB and ManjinderUNCC committed
Commit 124a776
1 Parent(s): 0882deb

Update README.md (#7)


- Update README.md (2534d9699eb714828e12e40c88dbc6404435ff28)


Co-authored-by: Manjinder <ManjinderUNCC@users.noreply.huggingface.co>

Files changed (1)
  1. README.md +267 -167
README.md CHANGED
@@ -6,171 +6,271 @@ tags:
  - huggingface
  ---
 
- # prodigy-ecfr-textcat
-
- ## About the Project
-
- Our goal is to organize financial institution rules and regulations so that institutions can route newly issued rules and regulations to the right departments and retrieve them easily when needed. Text mining and information retrieval allow a large part of this process to be automated, reducing the time and effort required of financial institution employees and freeing them to work on other projects.
-
- ## Table of Contents
-
- - [About the Project](#about-the-project)
- - [Getting Started](#getting-started)
-   - [Prerequisites](#prerequisites)
-   - [Installation](#installation)
- - [Usage](#usage)
- - [File Structure](#file-structure)
- - [License](#license)
- - [Acknowledgements](#acknowledgements)
-
- ## Getting Started
-
- Instructions for setting up the project on a local machine.
-
- ### Prerequisites
-
- Before running the project, ensure you have the following software dependencies installed:
-
- - [Python 3.x](https://www.python.org/downloads/)
- - [spaCy](https://spacy.io/usage)
- - [Prodigy](https://prodi.gy/docs/) (optional)
-
- ### Installation
-
- Follow these steps to install and configure the project:
-
- 1. **Clone this repository to your local machine.**
-    ```bash
-    git clone https://github.com/ManjinderUNCC/prodigy-ecfr-textcat.git
-    ```
- 2. **Install the required dependencies:**
-    ```bash
-    pip install -r requirements.txt
-    ```
-
- ## Usage
-
- To use the project, follow these steps:
-
- 1. **Prepare your data:**
-    - Place your dataset files in the `/data` directory.
-    - Optionally, annotate your data using Prodigy and save the annotations in the `/data` directory.
-
- 2. **Train the text classification model:**
-    - Run the training script located in the `/python_Code` directory.
-
- 3. **Evaluate the model:**
-    - Use the evaluation script to assess the model's performance on labeled data.
-
- 4. **Make predictions:**
-    - Apply the trained model to new, unlabeled data to classify it into relevant categories.
-
- ## File Structure
-
- The project's files and directories are organized as follows:
-
- - `/corpus`
-   - `/labels`
-     - `ner.json`
-     - `parser.json`
-     - `tagger.json`
-     - `textcat_multilabel.json`
- - `/data`
-   - `eval.jsonl`
-   - `firstStep_file.jsonl`
-   - `five_examples_annotated5.jsonl`
-   - `goldenEval.jsonl`
-   - `thirdStep_file.jsonl`
-   - `train.jsonl`
-   - `train200.jsonl`
-   - `train4465.jsonl`
- - `/my_trained_model`
-   - `/textcat_multilabel`
-     - `cfg`
-     - `model`
-   - `/vocab`
-     - `key2row`
-     - `lookups.bin`
-     - `strings.json`
-     - `vectors`
-     - `vectors.cfg`
-   - `config.cfg`
-   - `meta.json`
-   - `tokenizer`
- - `/output`
-   - `/experiment1`
-     - `/model-best`
-       - `/textcat_multilabel`
-         - `cfg`
-         - `model`
-       - `/vocab`
-         - `key2row`
-         - `lookups.bin`
-         - `strings.json`
-         - `vectors`
-         - `vectors.cfg`
-       - `config.cfg`
-       - `meta.json`
-       - `tokenizer`
-     - `/model-last`
-       - `/textcat_multilabel`
-         - `cfg`
-         - `model`
-       - `/vocab`
-         - `key2row`
-         - `lookups.bin`
-         - `strings.json`
-         - `vectors`
-         - `vectors.cfg`
-       - `config.cfg`
-       - `meta.json`
-       - `tokenizer`
-   - `/experiment3`
-     - `/model-best`
-       - `/textcat_multilabel`
-         - `cfg`
-         - `model`
-       - `/vocab`
-         - `key2row`
-         - `lookups.bin`
-         - `strings.json`
-         - `vectors`
-         - `vectors.cfg`
-       - `config.cfg`
-       - `meta.json`
-       - `tokenizer`
-     - `/model-last`
-       - `/textcat_multilabel`
-         - `cfg`
-         - `model`
-       - `/vocab`
-         - `key2row`
-         - `lookups.bin`
-         - `strings.json`
-         - `vectors`
-         - `vectors.cfg`
-       - `config.cfg`
-       - `meta.json`
-       - `tokenizer`
- - `/python_Code`
-   - `finalStep-formatLabel.py`
-   - `firstStep-format.py`
-   - `five_examples_annotated.ipynb`
-   - `secondStep-score.py`
-   - `thirdStep-label.py`
-   - `train_eval_split.ipynb`
- - `TerminalCode.txt`
- - `requirements.txt`
- - `Terminal Commands vs Project.yml`
- - `Project.yml`
- - `README.md`
- - `prodigy.json`
-
- ## License
-
- - Package A: MIT License
- - Package B: Apache License 2.0
-
- ## Acknowledgements
-
- Manjinder Sandhu, Dagim Bantikassegn, Alex Brooks, Tyler Dabbs
+ <!-- WEASEL: AUTO-GENERATED DOCS START (do not remove) -->
+
+ # 🪐 Weasel Project: Citations of ECFR Banking Regulation in a spaCy pipeline
+
+ Custom text classification project for spaCy v3.
+
+ ## 📋 project.yml
+
+ The [`project.yml`](project.yml) defines the data assets required by the
+ project, as well as the available commands and workflows. For details, see the
+ [Weasel documentation](https://github.com/explosion/weasel).
+
+ ### ⏯ Commands
+
+ The following commands are defined by the project. They
+ can be executed using [`weasel run [name]`](https://github.com/explosion/weasel/tree/main/docs/cli.md#rocket-run).
+ Commands are only re-run if their inputs have changed.
+
+ #### `format-script`
+
+ Runs the Python script `firstStep-format.py`, which performs the initial formatting for the first step of the project: it extracts text and labels from a dataset file in JSONL format and writes them to a new JSONL file in a specific format.
+
+ Usage:
+ ```
+ spacy project run format-script
+ ```
+
+ Explanation:
+ - The script `firstStep-format.py` reads data from the file specified in the `dataset_file` variable (`data/train200.jsonl` by default).
+ - It extracts text and labels from each JSON object in the dataset file.
+ - If both text and at least one label are available, it writes a new JSON object with the extracted text and label to the output file specified in the `output_file` variable (`data/firstStep_file.jsonl` by default).
+ - If either the text or the label is missing from a JSON object, a warning message is printed.
+ - Upon completion, the script prints a confirmation message and the path to the output file. A sketch of this step appears below.
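+
+ As an illustration, the reformatting boils down to something like the following minimal sketch. The field names `text` and `accept` and the output shape are assumptions for illustration; `python_Code/firstStep-format.py` holds the actual logic.
+
+ ```python
+ import json
+
+ dataset_file = "data/train200.jsonl"        # input, matching the project default
+ output_file = "data/firstStep_file.jsonl"   # output, matching the project default
+
+ with open(dataset_file, encoding="utf-8") as src, \
+      open(output_file, "w", encoding="utf-8") as dst:
+     for line in src:
+         record = json.loads(line)
+         text = record.get("text")
+         labels = record.get("accept", [])  # assumed label field name
+         if text and labels:
+             dst.write(json.dumps({"text": text, "label": labels[0]}) + "\n")
+         else:
+             print(f"Warning: record missing text or label: {record}")
+
+ print(f"Done. Output written to {output_file}")
+ ```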
+
+ #### `train-text-classification-model`
+
+ Trains the text classification model for the second step of the project using the `secondStep-score.py` script. The script loads a blank English spaCy model, adds a text classification pipeline to it, and trains the model on the processed data from the first step.
+
+ Usage:
+ ```
+ spacy project run train-text-classification-model
+ ```
+
+ Explanation:
+ - The script `secondStep-score.py` loads a blank English spaCy model and adds a text classification pipeline to it.
+ - It reads processed data from the file specified in the `processed_data_file` variable (`data/firstStep_file.jsonl` by default).
+ - The processed data is converted to spaCy format for training the model.
+ - The model is trained on the converted data for a specified number of iterations (`n_iter`).
+ - Losses are printed for each iteration during training.
+ - Upon completion, the trained model is saved to the specified output directory (`./my_trained_model` by default).
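+
+ For reference, the core of such a training script looks roughly like this minimal sketch. The label names and the inline training example are placeholders, not the project's actual values; the real script loads its data from `data/firstStep_file.jsonl`.
+
+ ```python
+ import random
+ import spacy
+ from spacy.training import Example
+
+ nlp = spacy.blank("en")
+ textcat = nlp.add_pipe("textcat_multilabel")
+ for label in ["LABEL_A", "LABEL_B"]:  # placeholder label names
+     textcat.add_label(label)
+
+ # Placeholder data; the real script builds this list from the JSONL file.
+ train_data = [("sample text", {"cats": {"LABEL_A": 1.0, "LABEL_B": 0.0}})]
+ examples = [Example.from_dict(nlp.make_doc(text), ann) for text, ann in train_data]
+
+ optimizer = nlp.initialize(lambda: examples)
+ n_iter = 10
+ for i in range(n_iter):
+     random.shuffle(examples)
+     losses = {}
+     nlp.update(examples, sgd=optimizer, losses=losses)
+     print(i, losses)
+
+ nlp.to_disk("./my_trained_model")
+ ```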
+
+ #### `classify-unlabeled-data`
+
+ Classifies the unlabeled data for the third step of the project using the `thirdStep-label.py` script. The script loads the trained spaCy model from the previous step and classifies each record in the unlabeled dataset.
+
+ Usage:
+ ```
+ spacy project run classify-unlabeled-data
+ ```
+
+ Explanation:
+ - The script `thirdStep-label.py` loads the trained spaCy model from the specified model directory (`./my_trained_model` by default).
+ - It reads the unlabeled data from the file specified in the `unlabeled_data_file` variable (`data/train.jsonl` by default).
+ - Each record in the unlabeled data is classified using the loaded model.
+ - The predicted labels for each record are extracted and stored along with the text.
+ - The classified data is optionally saved to the file specified in the `output_file` variable (`data/thirdStep_file.jsonl` by default).
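+
+ Classification with a trained pipeline reduces to spaCy's standard API; a minimal sketch, assuming the file names above and a `text` field per record:
+
+ ```python
+ import json
+ import spacy
+
+ nlp = spacy.load("./my_trained_model")
+
+ with open("data/train.jsonl", encoding="utf-8") as src, \
+      open("data/thirdStep_file.jsonl", "w", encoding="utf-8") as dst:
+     for line in src:
+         record = json.loads(line)
+         doc = nlp(record["text"])
+         # doc.cats maps each label to a score between 0 and 1
+         dst.write(json.dumps({"text": record["text"], "cats": doc.cats}) + "\n")
+ ```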
+
+ #### `format-labeled-data`
+
+ Formats the labeled data for the final step of the project using the `finalStep-formatLabel.py` script. The script processes the classified data from the third step and transforms it into a specific format, applying a threshold for label acceptance.
+
+ Usage:
+ ```
+ spacy project run format-labeled-data
+ ```
+
+ Explanation:
+ - The script `finalStep-formatLabel.py` reads classified data from the file specified in the `input_file` variable (`data/thirdStep_file.jsonl` by default).
+ - For each record, it determines the accepted categories based on a specified threshold.
+ - It constructs an output record containing the text, predicted labels, accepted categories, answer (accept/reject), and options with meta information.
+ - The transformed data is written to the file specified in the `output_file` variable (`data/train4465.jsonl` by default).
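+
+ The thresholding can be pictured as follows. The 0.5 cutoff and the exact output fields are assumptions for illustration; see `python_Code/finalStep-formatLabel.py` for the real format.
+
+ ```python
+ import json
+
+ THRESHOLD = 0.5  # assumed acceptance threshold
+
+ with open("data/thirdStep_file.jsonl", encoding="utf-8") as src, \
+      open("data/train4465.jsonl", "w", encoding="utf-8") as dst:
+     for line in src:
+         record = json.loads(line)
+         accepted = [label for label, score in record["cats"].items()
+                     if score >= THRESHOLD]
+         dst.write(json.dumps({
+             "text": record["text"],
+             "cats": record["cats"],
+             "accept": accepted,
+             "answer": "accept" if accepted else "reject",
+         }) + "\n")
+ ```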
+
+ #### `setup-environment`
+
+ Sets up the Python virtual environment.
+
+ #### `review-evaluation-data`
+
+ Reviews the evaluation data in Prodigy and automatically accepts annotations.
+
+ Usage:
+ ```
+ spacy project run review-evaluation-data
+ ```
+
+ Explanation:
+ - The command reviews the evaluation data in Prodigy.
+ - It automatically accepts annotations made during the review process.
+ - Only sessions listed in the environment variable `PRODIGY_ALLOWED_SESSIONS` are permitted to review data; in this case, the session 'reviwer' is allowed.
+
+ #### `export-reviewed-evaluation-data`
+
+ Exports the reviewed evaluation data from Prodigy to a JSONL file named `goldenEval.jsonl`.
+
+ Usage:
+ ```
+ spacy project run export-reviewed-evaluation-data
+ ```
+
+ Explanation:
+ - The command exports the reviewed evaluation data from Prodigy to a JSONL file.
+ - The data is exported from the Prodigy dataset named 'project3eval-review'.
+ - The exported data is saved to the file `goldenEval.jsonl`.
+ - This preserves the reviewed annotations for further analysis or processing.
+
+ #### `import-training-data`
+
+ Imports the training data into Prodigy from a JSONL file named `train200.jsonl`.
+
+ Usage:
+ ```
+ spacy project run import-training-data
+ ```
+
+ Explanation:
+ - The command imports the training data into Prodigy from the specified JSONL file.
+ - The data is imported into the Prodigy dataset named 'prodigy3train'.
+ - This prepares the training data for annotation and model training in Prodigy.
+
+ #### `import-golden-evaluation-data`
+
+ Imports the golden evaluation data into Prodigy from a JSONL file named `goldenEval.jsonl`.
+
+ Usage:
+ ```
+ spacy project run import-golden-evaluation-data
+ ```
+
+ Explanation:
+ - The command imports the golden evaluation data into Prodigy from the specified JSONL file.
+ - The data is imported into the Prodigy dataset named 'golden3'.
+ - This prepares the golden evaluation data for further analysis and model evaluation in Prodigy.
+
+ #### `train-model-experiment1`
+
+ Trains a text classification model with Prodigy, using the 'prodigy3train' dataset for training and the 'golden3' dataset for evaluation.
+
+ Usage:
+ ```
+ spacy project run train-model-experiment1
+ ```
+
+ Explanation:
+ - The command trains a text classification model using Prodigy.
+ - It uses the 'prodigy3train' dataset for training and evaluates the model on the 'golden3' dataset.
+ - The trained model is saved to the `./output/experiment1` directory.
+
+ #### `download-model`
+
+ Downloads the English language model `en_core_web_lg` from spaCy.
+
+ Usage:
+ ```
+ spacy project run download-model
+ ```
+
+ Explanation:
+ - The command downloads the English language model `en_core_web_lg` from spaCy.
+ - This model serves as the base model for further data processing and training in the project.
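+
+ The same download can also be triggered from Python, which may be convenient in setup scripts:
+
+ ```python
+ # Equivalent to running `python -m spacy download en_core_web_lg`
+ import spacy.cli
+
+ spacy.cli.download("en_core_web_lg")
+ nlp = spacy.load("en_core_web_lg")
+ ```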
+
+ #### `convert-data-to-spacy-format`
+
+ Converts the annotated data from Prodigy to spaCy format using the 'prodigy3train' and 'golden3' datasets.
+
+ Usage:
+ ```
+ spacy project run convert-data-to-spacy-format
+ ```
+
+ Explanation:
+ - The command converts the annotated data from Prodigy to spaCy format.
+ - It uses the 'prodigy3train' and 'golden3' datasets for conversion.
+ - The converted data is saved to the `./corpus` directory, using `en_core_web_lg` as the base model.
+
+ #### `train-custom-model`
+
+ Trains a custom text classification model with spaCy, using the converted data in spaCy format.
+
+ Usage:
+ ```
+ spacy project run train-custom-model
+ ```
+
+ Explanation:
+ - The command trains a custom text classification model using spaCy.
+ - It uses the converted data in spaCy format located in the `./corpus` directory.
+ - The model is trained using the configuration defined in `corpus/config.cfg`.
+
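+ Once trained, the resulting pipeline can be loaded and applied like any other spaCy model. A minimal sketch; the example sentence and the choice of `model-best` checkpoint are illustrative:
+
+ ```python
+ import spacy
+
+ # Load the best checkpoint produced by training
+ nlp = spacy.load("output/experiment1/model-best")
+
+ doc = nlp("Banks must file suspicious activity reports within 30 days.")
+ # doc.cats maps each label to a confidence score
+ print(sorted(doc.cats.items(), key=lambda kv: kv[1], reverse=True))
+ ```
+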
+ ### ⏭ Workflows
+
+ The following workflows are defined by the project. They
+ can be executed using [`weasel run [name]`](https://github.com/explosion/weasel/tree/main/docs/cli.md#rocket-run)
+ and will run the specified commands in order. Commands are only re-run if their
+ inputs have changed.
+
+ | Workflow | Steps |
+ | --- | --- |
+ | `all` | `format-script` &rarr; `train-text-classification-model` &rarr; `classify-unlabeled-data` &rarr; `format-labeled-data` &rarr; `setup-environment` &rarr; `review-evaluation-data` &rarr; `export-reviewed-evaluation-data` &rarr; `import-training-data` &rarr; `import-golden-evaluation-data` &rarr; `train-model-experiment1` &rarr; `download-model` &rarr; `convert-data-to-spacy-format` &rarr; `train-custom-model` |
+
+ ### 🗂 Assets
+
+ The following assets are defined by the project. They can
+ be fetched by running [`weasel assets`](https://github.com/explosion/weasel/tree/main/docs/cli.md#open_file_folder-assets)
+ in the project directory.
+
+ | File | Source | Description |
+ | --- | --- | --- |
+ | [`corpus/labels/ner.json`](corpus/labels/ner.json) | Local | JSON file containing NER labels |
+ | [`corpus/labels/parser.json`](corpus/labels/parser.json) | Local | JSON file containing parser labels |
+ | [`corpus/labels/tagger.json`](corpus/labels/tagger.json) | Local | JSON file containing tagger labels |
+ | [`corpus/labels/textcat_multilabel.json`](corpus/labels/textcat_multilabel.json) | Local | JSON file containing multilabel text classification labels |
+ | [`data/eval.jsonl`](data/eval.jsonl) | Local | JSONL file containing evaluation data |
+ | [`data/firstStep_file.jsonl`](data/firstStep_file.jsonl) | Local | JSONL file containing formatted data from the first step |
+ | `data/five_examples_annotated5.jsonl` | Local | JSONL file containing five annotated examples |
+ | [`data/goldenEval.jsonl`](data/goldenEval.jsonl) | Local | JSONL file containing golden evaluation data |
+ | [`data/thirdStep_file.jsonl`](data/thirdStep_file.jsonl) | Local | JSONL file containing classified data from the third step |
+ | [`data/train.jsonl`](data/train.jsonl) | Local | JSONL file containing training data |
+ | [`data/train200.jsonl`](data/train200.jsonl) | Local | JSONL file containing initial training data |
+ | [`data/train4465.jsonl`](data/train4465.jsonl) | Local | JSONL file containing formatted and labeled training data |
+ | [`my_trained_model/textcat_multilabel/cfg`](my_trained_model/textcat_multilabel/cfg) | Local | Configuration files for the text classification model |
+ | [`my_trained_model/textcat_multilabel/model`](my_trained_model/textcat_multilabel/model) | Local | Trained model files for the text classification model |
+ | [`my_trained_model/vocab/key2row`](my_trained_model/vocab/key2row) | Local | Mapping from keys to row indices in the vocabulary |
+ | [`my_trained_model/vocab/lookups.bin`](my_trained_model/vocab/lookups.bin) | Local | Binary lookups file for the vocabulary |
+ | [`my_trained_model/vocab/strings.json`](my_trained_model/vocab/strings.json) | Local | JSON file containing string representations of the vocabulary |
+ | [`my_trained_model/vocab/vectors`](my_trained_model/vocab/vectors) | Local | Directory containing vector files for the vocabulary |
+ | [`my_trained_model/vocab/vectors.cfg`](my_trained_model/vocab/vectors.cfg) | Local | Configuration file for vectors in the vocabulary |
+ | [`my_trained_model/config.cfg`](my_trained_model/config.cfg) | Local | Configuration file for the trained model |
+ | [`my_trained_model/meta.json`](my_trained_model/meta.json) | Local | JSON file containing metadata for the trained model |
+ | [`my_trained_model/tokenizer`](my_trained_model/tokenizer) | Local | Tokenizer files for the trained model |
+ | [`output/experiment1/model-best/textcat_multilabel/cfg`](output/experiment1/model-best/textcat_multilabel/cfg) | Local | Configuration files for the best model in experiment 1 |
+ | [`output/experiment1/model-best/textcat_multilabel/model`](output/experiment1/model-best/textcat_multilabel/model) | Local | Trained model files for the best model in experiment 1 |
+ | [`output/experiment1/model-best/vocab/key2row`](output/experiment1/model-best/vocab/key2row) | Local | Mapping from keys to row indices in the vocabulary for the best model in experiment 1 |
+ | [`output/experiment1/model-best/vocab/lookups.bin`](output/experiment1/model-best/vocab/lookups.bin) | Local | Binary lookups file for the vocabulary for the best model in experiment 1 |
+ | [`output/experiment1/model-best/vocab/strings.json`](output/experiment1/model-best/vocab/strings.json) | Local | JSON file containing string representations of the vocabulary for the best model in experiment 1 |
+ | [`output/experiment1/model-best/vocab/vectors`](output/experiment1/model-best/vocab/vectors) | Local | Directory containing vector files for the vocabulary for the best model in experiment 1 |
+ | [`output/experiment1/model-best/vocab/vectors.cfg`](output/experiment1/model-best/vocab/vectors.cfg) | Local | Configuration file for vectors in the vocabulary for the best model in experiment 1 |
+ | [`output/experiment1/model-best/config.cfg`](output/experiment1/model-best/config.cfg) | Local | Configuration file for the best model in experiment 1 |
+ | [`output/experiment1/model-best/meta.json`](output/experiment1/model-best/meta.json) | Local | JSON file containing metadata for the best model in experiment 1 |
+ | [`output/experiment1/model-best/tokenizer`](output/experiment1/model-best/tokenizer) | Local | Tokenizer files for the best model in experiment 1 |
+ | [`output/experiment1/model-last/textcat_multilabel/cfg`](output/experiment1/model-last/textcat_multilabel/cfg) | Local | Configuration files for the last model in experiment 1 |
+ | [`output/experiment1/model-last/textcat_multilabel/model`](output/experiment1/model-last/textcat_multilabel/model) | Local | Trained model files for the last model in experiment 1 |
+ | [`output/experiment1/model-last/vocab/key2row`](output/experiment1/model-last/vocab/key2row) | Local | Mapping from keys to row indices in the vocabulary for the last model in experiment 1 |
+ | [`output/experiment1/model-last/vocab/lookups.bin`](output/experiment1/model-last/vocab/lookups.bin) | Local | Binary lookups file for the vocabulary for the last model in experiment 1 |
+ | [`output/experiment1/model-last/vocab/strings.json`](output/experiment1/model-last/vocab/strings.json) | Local | JSON file containing string representations of the vocabulary for the last model in experiment 1 |
+ | [`output/experiment1/model-last/vocab/vectors`](output/experiment1/model-last/vocab/vectors) | Local | Directory containing vector files for the vocabulary for the last model in experiment 1 |
+ | [`output/experiment1/model-last/vocab/vectors.cfg`](output/experiment1/model-last/vocab/vectors.cfg) | Local | Configuration file for vectors in the vocabulary for the last model in experiment 1 |
+ | [`output/experiment1/model-last/config.cfg`](output/experiment1/model-last/config.cfg) | Local | Configuration file for the last model in experiment 1 |
+ | [`output/experiment1/model-last/meta.json`](output/experiment1/model-last/meta.json) | Local | JSON file containing metadata for the last model in experiment 1 |
+ | [`output/experiment1/model-last/tokenizer`](output/experiment1/model-last/tokenizer) | Local | Tokenizer files for the last model in experiment 1 |
+ | [`output/experiment3/model-best/textcat_multilabel/cfg`](output/experiment3/model-best/textcat_multilabel/cfg) | Local | Configuration files for the best model in experiment 3 |
+ | [`output/experiment3/model-best/textcat_multilabel/model`](output/experiment3/model-best/textcat_multilabel/model) | Local | Trained model files for the best model in experiment 3 |
+ | [`output/experiment3/model-best/vocab/key2row`](output/experiment3/model-best/vocab/key2row) | Local | Mapping from keys to row indices in the vocabulary for the best model in experiment 3 |
+ | [`output/experiment3/model-best/vocab/lookups.bin`](output/experiment3/model-best/vocab/lookups.bin) | Local | Binary lookups file for the vocabulary for the best model in experiment 3 |
+ | [`output/experiment3/model-best/vocab/strings.json`](output/experiment3/model-best/vocab/strings.json) | Local | JSON file containing string representations of the vocabulary for the best model in experiment 3 |
+ | [`output/experiment3/model-best/vocab/vectors`](output/experiment3/model-best/vocab/vectors) | Local | Directory containing vector files for the vocabulary for the best model in experiment 3 |
+ | [`output/experiment3/model-best/vocab/vectors.cfg`](output/experiment3/model-best/vocab/vectors.cfg) | Local | Configuration file for vectors in the vocabulary for the best model in experiment 3 |
+ | [`output/experiment3/model-best/config.cfg`](output/experiment3/model-best/config.cfg) | Local | Configuration file for the best model in experiment 3 |
+ | [`output/experiment3/model-best/meta.json`](output/experiment3/model-best/meta.json) | Local | JSON file containing metadata for the best model in experiment 3 |
+ | [`output/experiment3/model-best/tokenizer`](output/experiment3/model-best/tokenizer) | Local | Tokenizer files for the best model in experiment 3 |
+ | [`output/experiment3/model-last/textcat_multilabel/cfg`](output/experiment3/model-last/textcat_multilabel/cfg) | Local | Configuration files for the last model in experiment 3 |
+ | [`output/experiment3/model-last/textcat_multilabel/model`](output/experiment3/model-last/textcat_multilabel/model) | Local | Trained model files for the last model in experiment 3 |
+ | [`output/experiment3/model-last/vocab/key2row`](output/experiment3/model-last/vocab/key2row) | Local | Mapping from keys to row indices in the vocabulary for the last model in experiment 3 |
+ | [`output/experiment3/model-last/vocab/lookups.bin`](output/experiment3/model-last/vocab/lookups.bin) | Local | Binary lookups file for the vocabulary for the last model in experiment 3 |
+ | [`output/experiment3/model-last/vocab/strings.json`](output/experiment3/model-last/vocab/strings.json) | Local | JSON file containing string representations of the vocabulary for the last model in experiment 3 |
+ | [`output/experiment3/model-last/vocab/vectors`](output/experiment3/model-last/vocab/vectors) | Local | Directory containing vector files for the vocabulary for the last model in experiment 3 |
+ | [`output/experiment3/model-last/vocab/vectors.cfg`](output/experiment3/model-last/vocab/vectors.cfg) | Local | Configuration file for vectors in the vocabulary for the last model in experiment 3 |
+ | [`output/experiment3/model-last/config.cfg`](output/experiment3/model-last/config.cfg) | Local | Configuration file for the last model in experiment 3 |
+ | [`output/experiment3/model-last/meta.json`](output/experiment3/model-last/meta.json) | Local | JSON file containing metadata for the last model in experiment 3 |
+ | [`output/experiment3/model-last/tokenizer`](output/experiment3/model-last/tokenizer) | Local | Tokenizer files for the last model in experiment 3 |
+ | [`python_Code/finalStep-formatLabel.py`](python_Code/finalStep-formatLabel.py) | Local | Python script for formatting labeled data in the final step |
+ | [`python_Code/firstStep-format.py`](python_Code/firstStep-format.py) | Local | Python script for formatting data in the first step |
+ | [`python_Code/five_examples_annotated.ipynb`](python_Code/five_examples_annotated.ipynb) | Local | Jupyter notebook containing five annotated examples |
+ | [`python_Code/secondStep-score.py`](python_Code/secondStep-score.py) | Local | Python script for scoring data in the second step |
+ | [`python_Code/thirdStep-label.py`](python_Code/thirdStep-label.py) | Local | Python script for labeling data in the third step |
+ | [`python_Code/train_eval_split.ipynb`](python_Code/train_eval_split.ipynb) | Local | Jupyter notebook for training and evaluation data splitting |
+ | [`TerminalCode.txt`](TerminalCode.txt) | Local | Text file containing terminal code |
+ | [`README.md`](README.md) | Local | Markdown file containing project documentation |
+ | [`prodigy.json`](prodigy.json) | Local | JSON file containing Prodigy configuration |
+
+ <!-- WEASEL: AUTO-GENERATED DOCS END (do not remove) -->