---
language:
- en
- es
- fr
- de
- it
- pl
- pt
library_name: tokenizers
license: cc-by-4.0
tags:
- kl3m
- kl3m-004
- alea
- legal
- financial
date: '2024-11-07T00:00:00.000Z'
---

# kl3m-004-128k-uncased tokenizer

The `kl3m-004-128k-uncased` **case-insensitive** tokenizer is a domain-specific tokenizer trained on a stratified sample of nearly 4M documents across the general, legal, and financial domains from the `kl3m-data` project, including American English, British English, Spanish, German, French, Italian, and other common EU languages.

This tokenizer is being used for the next generation of KL3M embedding and generative models.

Please see `kl3m-001-32k` and `kl3m-003-64k` for the first iteration of our research on domain-specific tokenization.

Note that, unlike our prior tokenizers, we are providing both uncased and cased versions of the 128K tokenizer, as this was required to achieve state-of-the-art in-domain performance for embedding models on legal and financial text.

## Model Details

### Summary

- **Vocabulary:** 131,072
- **Tokenizer type:** BPE
- **Special token support:** Both causal and masked language modeling
- **Language(s) (NLP):** Primarily English, Spanish, German, and French, with a small percentage of other EU languages
- **Data sources:** See the [`kl3m-data`](https://github.com/alea-institute/kl3m-data) repository
- **Developed by:** [ALEA Institute](https://aleainstitute.ai)
- **License:** [CC-BY 4.0](https://creativecommons.org/licenses/by/4.0/)
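
As a quick sanity check, the vocabulary size above can be verified directly with the `tokenizers` library (a minimal sketch; it assumes network access to the Hugging Face Hub):

```python
from tokenizers import Tokenizer

# Load the published tokenizer from the Hugging Face Hub.
tokenizer = Tokenizer.from_pretrained("alea-institute/kl3m-004-128k-uncased")

# Should print 131072, matching the vocabulary size listed above.
print(tokenizer.get_vocab_size())
```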

### Model Description

The `kl3m-004-128k-uncased` tokenizer is a domain-specific tokenizer trained on ~1.5T tokens of financial and legal text from primarily-English sources.

This tokenizer is notable for a number of reasons:

#### Domain Specific

As part of our research on more efficient SLM training for the legal and financial domain, we trained a domain-specific tokenizer on a large corpus of financial and legal text. This tokenizer has not, for example, seen any common general pretraining sources like Wikipedia or Common Crawl.

#### Large Added Token Set

As part of our research on efficient and reliable extraction and generation, we inserted a large number of deterministic "whole" tokens into the tokenizer, such as HTML tags like `<span`, common Markdown elements like `#` and `##`, and legal enumerations like `(a)`.

**Note that the `kl3m-004-128k-uncased` tokenizer has added a number of citation formats that were not included in the `kl3m-001-32k` tokenizer.** These were primarily sourced from empirical data and the [Free Law Project's reporters-db](https://raw.githubusercontent.com/freelawproject/reporters-db/main/reporters_db/data/), and were added to the tokenizer to improve model behavior related to legal citations.

See the `get_custom_tokens` method in `kl3m_embeddings/training/kl3m_004/train_tokenizer.py` for more details:

```python
def get_custom_tokens(
    include_whitespace: bool = True,
    include_markdown: bool = True,
    include_html: bool = True,
    include_json: bool = True,
    include_xml: bool = True,
    include_years: bool = True,
    include_citations: bool = True,
    lowercase: bool = False,
) -> list[str]:
```
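
To see the effect of these added tokens, you can encode a few fragments and inspect the resulting pieces. The sketch below is illustrative; the exact splits you observe are determined by the trained vocabulary rather than guaranteed by this card:

```python
from tokenizers import Tokenizer

tokenizer = Tokenizer.from_pretrained("alea-institute/kl3m-004-128k-uncased")

# Markdown elements, HTML tag openings, and legal enumerations such as
# "(a)" were added as deterministic whole tokens, so they should surface
# as single pieces rather than being split by ordinary BPE merges.
for fragment in ["(a)", "##", "<span"]:
    print(fragment, "->", tokenizer.encode(fragment).tokens)
```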

#### Space Preservation

Unlike `kl3m-001-32k`, we *do not* retain the space character as a token. This was done after adding additional legal citation tokens to the vocabulary, which reduced the number of issues related to space tokenization in legal text. As a result, the `kl3m-004-128k-uncased` tokenizer uses substantially fewer tokens than `kl3m-001-32k` for most text.

#### Special Tokens for Both Embedding and Generative Models

For both training and inference efficiency, we intended this tokenizer vocabulary to be usable for both embedding and generative models. As such, we included special tokens suitable for both causal and masked language modeling tasks:

* `<|start|>`: `0`
* `<|end|>`: `1`
* `<|pad|>`: `2`
* `<|unk|>`: `3`
* `<|sep|>`: `4`
* `<|cls|>`: `5`
* `<|mask|>`: `6`

We also added a number of chat and instruction tokens that were not included in `kl3m-001-32k`, including:

* `<|system|>`: `7`
* `</|system|>`: `8`
* `<|user|>`: `9`
* `</|user|>`: `10`
* `<|instruction|>`: `11`
* `</|instruction|>`: `12`

These tokens are identical to those used in the `kl3m-003-64k` tokenizer.
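
The ID assignments above can be confirmed with `token_to_id`, and the chat tokens can be combined into a simple prompt string. In the sketch below, the prompt layout (open/close pairs wrapping each turn, no separating newlines) is an assumption for illustration, not a documented format:

```python
from tokenizers import Tokenizer

tokenizer = Tokenizer.from_pretrained("alea-institute/kl3m-004-128k-uncased")

# Confirm a few of the special token IDs listed above.
for token in ["<|start|>", "<|mask|>", "<|system|>", "</|user|>"]:
    print(token, "->", tokenizer.token_to_id(token))

# Assemble a chat-style prompt from the instruction tokens; the exact
# turn layout here is hypothetical.
prompt = (
    "<|system|>You are a legal research assistant.</|system|>"
    "<|user|>What does 12 U.S.C. 5411 establish?</|user|>"
)
print(tokenizer.encode(prompt).ids)
```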

### Replication

The entire data collection and preprocessing pipeline is being made available, along with the training data, as part of the [ALEA Institute](https://aleainstitute.ai) [KL3M project](https://aleainstitute.ai/work/kl3m/).

The source code used to train the tokenizer is available on GitHub at:
[https://github.com/alea-institute/kl3m-embedding-research](https://github.com/alea-institute/kl3m-embedding-research)

The data pipeline will be available on GitHub and S3 in the near future.

## Uses

This tokenizer is intended to be used for English, Spanish, German, or French text in professional contexts such as legal and financial documents.

### Recommendations

In general, the `kl3m-004-128k-uncased` tokenizer is recommended over the original `kl3m-001-32k` tokenizer. For example, here is the same statutory passage tokenized by `kl3m-003-64k` and `kl3m-004-128k-uncased`; note that the uncased tokenizer produces fewer tokens and keeps terms like "soundness" whole:

```text
Original text: The Comptroller of the Currency shall have the same authority with respect to functions transferred to the Comptroller of the Currency under the Enhancing Financial Institution Safety and Soundness Act of 2010 as was vested in the Director of the Office of Thrift Supervision on the transfer date, as defined in section 311 of that Act [12 U.S.C. 5411].

kl3m-003-64k
------------
Size: 67
Tokens: ['The', ' Comptroller', ' of', ' the', ' Currency', ' shall', ' have', ' the', ' same', ' authority', ' with', ' respect', ' to', ' functions', ' transferred', ' to', ' the', ' Comptroller', ' of', ' the', ' Currency', ' under', ' the', ' Enh', 'ancing', ' Financial', ' Institution', ' Safety', ' and', ' ', 'Sound', 'ness', ' Act', ' of', ' 2010', ' as', ' was', ' vested', ' in', ' the', ' Director', ' of', ' the', ' Office', ' of', ' Thrift', ' Supervision', ' on', ' the', ' transfer', ' date', ',', ' as', ' defined', ' in', ' section', ' 311', ' of', ' that', ' Act', ' [', '12', ' ', 'U.S.C.', ' 54', '11', '].']
IDs: [671, 13273, 295, 281, 25922, 735, 704, 281, 1913, 2451, 440, 1894, 312, 5860, 7264, 312, 281, 13273, 295, 281, 25922, 621, 281, 18926, 4406, 3195, 24448, 5617, 310, 233, 63589, 2130, 854, 295, 1611, 398, 725, 11978, 300, 281, 2827, 295, 281, 1767, 295, 44029, 37141, 395, 281, 3696, 1548, 24, 398, 3011, 300, 782, 6590, 295, 407, 854, 1327, 524, 233, 63761, 3789, 547, 8578]

kl3m-004-128k-uncased
---------------------
Size: 64
Tokens: ['the', ' comptroller', ' of', ' the', ' currency', ' shall', ' have', ' the', ' same', ' authority', ' with', ' respect', ' to', ' functions', ' transferred', ' to', ' the', ' comptroller', ' of', ' the', ' currency', ' under', ' the', ' enhancing', ' financial', ' institution', ' safety', ' and', ' soundness', ' act', ' of', ' 2010', ' as', ' was', ' vested', ' in', ' the', ' director', ' of', ' the', ' office', ' of', ' thrift', ' supervision', ' on', ' the', ' transfer', ' date', ',', ' as', ' defined', ' in', ' section', ' 311', ' of', ' that', ' act', ' [', '12', ' ', 'u.s.c.', ' 54', '11', '].']
IDs: [536, 16356, 292, 281, 4272, 460, 628, 281, 1552, 1545, 397, 882, 309, 4378, 4032, 309, 281, 16356, 292, 281, 4272, 539, 281, 21164, 1271, 3843, 2737, 313, 35934, 638, 292, 2371, 363, 611, 5286, 298, 281, 2456, 292, 281, 1652, 292, 25900, 7290, 390, 281, 1397, 643, 24, 363, 1921, 298, 590, 12646, 292, 384, 638, 745, 629, 233, 128952, 3834, 571, 4442]
```
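
The comparison above can be reproduced in a few lines; this sketch assumes both tokenizers are published under the `alea-institute` organization on the Hugging Face Hub:

```python
from tokenizers import Tokenizer

TEXT = (
    "The Comptroller of the Currency shall have the same authority with "
    "respect to functions transferred to the Comptroller of the Currency "
    "under the Enhancing Financial Institution Safety and Soundness Act "
    "of 2010 as was vested in the Director of the Office of Thrift "
    "Supervision on the transfer date, as defined in section 311 of that "
    "Act [12 U.S.C. 5411]."
)

# Encode the same passage with both tokenizers and compare token counts.
for name in ("alea-institute/kl3m-003-64k", "alea-institute/kl3m-004-128k-uncased"):
    encoding = Tokenizer.from_pretrained(name).encode(TEXT)
    print(f"{name}: {len(encoding.ids)} tokens")
```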

## How to Get Started with the Model

Use the code below to get started with the model:

```python
from tokenizers import Tokenizer

tokenizer = Tokenizer.from_pretrained('alea-institute/kl3m-004-128k-uncased')
```
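
From there, encoding and decoding follow the standard `tokenizers` API; as the comparison above shows, the uncased variant lowercases input during normalization:

```python
from tokenizers import Tokenizer

tokenizer = Tokenizer.from_pretrained("alea-institute/kl3m-004-128k-uncased")

# Round-trip a short passage; expect lowercased pieces from this variant.
encoding = tokenizer.encode("The Comptroller of the Currency")
print(encoding.tokens)
print(encoding.ids)
print(tokenizer.decode(encoding.ids))
```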

## Citation

Tokenizer and dataset publications are pending.

## Contact

For any questions, please contact the [ALEA Institute](https://aleainstitute.ai) at [hello@aleainstitute.ai](mailto:hello@aleainstitute.ai), or create an issue on this repository or on [GitHub](https://github.com/alea-institute/kl3m-embedding-research).

![logo](https://aleainstitute.ai/images/alea-logo-ascii-1x1.png)