DagimB commited on
Commit
8e04b77
1 Parent(s): 037ab80

Delete README.md

Files changed (1):
  1. README.md +0 -167
README.md DELETED
@@ -1,167 +0,0 @@
# prodigy-ecfr-textcat

## About the Project

Our goal is to organize financial institution rules and regulations so that institutions can review newly issued rules, route them to the relevant departments, and retrieve the regulations easily when needed. Text mining and information retrieval allow a large part of this process to be automated, reducing the time and effort required of financial institution employees and freeing them to work on other projects.

## Table of Contents

- [About the Project](#about-the-project)
- [Getting Started](#getting-started)
  - [Prerequisites](#prerequisites)
  - [Installation](#installation)
- [Usage](#usage)
- [File Structure](#file-structure)
- [License](#license)
- [Acknowledgements](#acknowledgements)

## Getting Started

Instructions for setting up the project on a local machine.

### Prerequisites

Before running the project, ensure you have the following dependencies installed:

- [Python 3.x](https://www.python.org/downloads/)
- [spaCy](https://spacy.io/usage)
- [Prodigy](https://prodi.gy/docs/) (optional, for annotation)

### Installation

Follow these steps to install and configure the project:

1. **Clone this repository to your local machine:**
   ```bash
   git clone https://github.com/ManjinderUNCC/prodigy-ecfr-textcat.git
   cd prodigy-ecfr-textcat
   ```
2. **Install the required dependencies:**
   ```bash
   pip install -r requirements.txt
   ```
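To confirm the installed dependencies are importable before moving on, a minimal check using only the standard library (a sketch; `prodigy` is optional, so it may legitimately show as missing):

```python
from importlib import util

def check_deps(packages):
    """Return {package_name: True/False} for importability of each package."""
    return {p: util.find_spec(p) is not None for p in packages}

# spaCy is required; Prodigy is optional.
for pkg, ok in check_deps(["spacy", "prodigy"]).items():
    print(f"{pkg}: {'installed' if ok else 'missing'}")
```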

## Usage

To use the project, follow these steps:

1. **Prepare your data:**
   - Place your dataset files in the `/data` directory.
   - Optionally, annotate your data using Prodigy and save the annotations in the `/data` directory.
2. **Train the text classification model:**
   - Run the training script located in the `/python_Code` directory.
3. **Evaluate the model:**
   - Use the evaluation script to assess the model's performance on labeled data.
4. **Make predictions:**
   - Apply the trained model to new, unlabeled data to classify it into the relevant categories.

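The dataset files referenced above are JSON Lines (`.jsonl`): one JSON object per line. A minimal reading sketch using only the standard library; the `text` and `label` field names here are assumptions, since the real schema is produced by the annotation step:

```python
import json
from pathlib import Path

def read_jsonl(path):
    """Yield one JSON object per non-empty line of a .jsonl file."""
    with open(path, encoding="utf-8") as f:
        for line in f:
            line = line.strip()
            if line:
                yield json.loads(line)

# Hypothetical example record; real records come from the /data files.
sample = {"text": "Institutions must report suspicious activity.", "label": "compliance"}
Path("example.jsonl").write_text(json.dumps(sample) + "\n", encoding="utf-8")

for record in read_jsonl("example.jsonl"):
    print(record["text"])  # prints: Institutions must report suspicious activity.
```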
## File Structure

The project is organized as follows:

- `/corpus`
  - `/labels`
    - `ner.json`
    - `parser.json`
    - `tagger.json`
    - `textcat_multilabel.json`
- `/data`
  - `eval.jsonl`
  - `firstStep_file.jsonl`
  - `five_examples_annotated5.jsonl`
  - `goldenEval.jsonl`
  - `thirdStep_file.jsonl`
  - `train.jsonl`
  - `train200.jsonl`
  - `train4465.jsonl`
- `/my_trained_model`
  - `/textcat_multilabel`
    - `cfg`
    - `model`
  - `/vocab`
    - `key2row`
    - `lookups.bin`
    - `strings.json`
    - `vectors`
    - `vectors.cfg`
  - `config.cfg`
  - `meta.json`
  - `tokenizer`
- `/output`
  - `/experiment1`
    - `/model-best`
      - `/textcat_multilabel`
        - `cfg`
        - `model`
      - `/vocab`
        - `key2row`
        - `lookups.bin`
        - `strings.json`
        - `vectors`
        - `vectors.cfg`
      - `config.cfg`
      - `meta.json`
      - `tokenizer`
    - `/model-last`
      - `/textcat_multilabel`
        - `cfg`
        - `model`
      - `/vocab`
        - `key2row`
        - `lookups.bin`
        - `strings.json`
        - `vectors`
        - `vectors.cfg`
      - `config.cfg`
      - `meta.json`
      - `tokenizer`
  - `/experiment3`
    - `/model-best`
      - `/textcat_multilabel`
        - `cfg`
        - `model`
      - `/vocab`
        - `key2row`
        - `lookups.bin`
        - `strings.json`
        - `vectors`
        - `vectors.cfg`
      - `config.cfg`
      - `meta.json`
      - `tokenizer`
    - `/model-last`
      - `/textcat_multilabel`
        - `cfg`
        - `model`
      - `/vocab`
        - `key2row`
        - `lookups.bin`
        - `strings.json`
        - `vectors`
        - `vectors.cfg`
      - `config.cfg`
      - `meta.json`
      - `tokenizer`
- `/python_Code`
  - `finalStep-formatLabel.py`
  - `firstStep-format.py`
  - `five_examples_annotated.ipynb`
  - `secondStep-score.py`
  - `thirdStep-label.py`
  - `train_eval_split.ipynb`
- `TerminalCode.txt`
- `requirements.txt`
- `Terminal Commands vs Project.yml`
- `Project.yml`
- `README.md`
- `prodigy.json`

## License

- Package A: MIT License
- Package B: Apache License 2.0

## Acknowledgements

Manjinder Sandhu, Dagim Bantikassegn, Alex Brooks, Tyler Dabbs