thebajajra committed · Commit 3f6aafd · verified · 1 Parent(s): d73d99d

Update README.md

Files changed (1): README.md (+208, -0)

  - foundation-model
---

# RexBERT-micro

> **TL;DR**: An encoder-only transformer (ModernBERT-style) for **e-commerce** applications, trained in three phases—**Pre-training**, **Context Extension**, and **Decay**—to power product search, attribute extraction, classification, and embedding use cases. The model has been trained on 2.3T+ tokens, along with 350B+ e-commerce-specific tokens.

---

## Table of Contents
- [Quick Start](#quick-start)
- [Intended Uses & Limitations](#intended-uses--limitations)
- [Model Description](#model-description)
- [Training Recipe](#training-recipe)
- [Data Overview](#data-overview)
- [Evaluation](#evaluation)
- [Usage Examples](#usage-examples)
  - [Masked language modeling](#1-masked-language-modeling)
  - [Embeddings / feature extraction](#2-embeddings--feature-extraction)
  - [Text classification fine-tune](#3-text-classification-fine-tune)
- [Model Architecture & Compatibility](#model-architecture--compatibility)
- [Responsible & Safe Use](#responsible--safe-use)
- [License](#license)
- [Maintainers & Contact](#maintainers--contact)

---

## Quick Start

```python
import torch
from transformers import AutoTokenizer, AutoModel, pipeline

MODEL_ID = "thebajajra/RexBERT-micro"

# Tokenizer
tok = AutoTokenizer.from_pretrained(MODEL_ID, use_fast=True)

# 1) Fill-mask (if the MLM head is present)
mlm = pipeline("fill-mask", model=MODEL_ID, tokenizer=tok)
print(mlm("These running shoes are great for [MASK] training."))

# 2) Feature extraction (mean-pooled embeddings)
enc = AutoModel.from_pretrained(MODEL_ID)
inputs = tok(["wireless mouse", "ergonomic mouse pad"], padding=True, truncation=True, return_tensors="pt")
with torch.no_grad():
    out = enc(**inputs)

# Mean-pool the last hidden state for sentence embeddings
mask = inputs["attention_mask"].unsqueeze(-1)
emb = (out.last_hidden_state * mask).sum(dim=1) / mask.sum(dim=1)
```

---

## Intended Uses & Limitations

**Use cases**
- Product & query **retrieval/semantic search** (titles, descriptions, attributes)
- **Attribute extraction** / slot filling (brand, color, size, material)
- **Classification** (category assignment, unsafe/regulated item filtering, review sentiment)
- **Reranking** and **query understanding** (spelling/ASR normalization, acronym expansion); a minimal reranking sketch follows below
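
As a rough illustration of the reranking use case, the sketch below scores candidate product titles against a query by cosine similarity of mean-pooled, L2-normalized embeddings (the same pooling recipe as in the usage examples further down). The `embed` helper, the query, and the candidate strings are illustrative assumptions, not part of the released model.

```python
import torch
from transformers import AutoTokenizer, AutoModel

MODEL_ID = "thebajajra/RexBERT-micro"
tok = AutoTokenizer.from_pretrained(MODEL_ID)
enc = AutoModel.from_pretrained(MODEL_ID)

def embed(texts):
    # Mean-pool the last hidden state, then L2-normalize so dot products are cosine similarities
    batch = tok(texts, padding=True, truncation=True, return_tensors="pt")
    with torch.no_grad():
        out = enc(**batch)
    mask = batch["attention_mask"].unsqueeze(-1)
    emb = (out.last_hidden_state * mask).sum(dim=1) / mask.sum(dim=1)
    return torch.nn.functional.normalize(emb, p=2, dim=1)

query = "wireless ergonomic mouse"
candidates = ["2.4GHz wireless vertical mouse", "wired gaming mouse", "ergonomic mouse pad"]

scores = (embed([query]) @ embed(candidates).T).squeeze(0)  # cosine similarity per candidate
order = scores.argsort(descending=True)
print([(candidates[int(i)], round(float(scores[i]), 3)) for i in order])
```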

**Out of scope**
- Long-form **generation** (use a decoder/seq-to-seq LM instead)
- High-stakes decisions without human review (pricing, compliance, safety flags)

**Target users**
- Search/recs engineers, e-commerce data teams, ML researchers working on domain-specific encoders

---

## Model Description

RexBERT-micro is an **encoder-only**, 150M-parameter transformer trained with a masked-language-modeling objective and optimized for **e-commerce-related text**. The three-phase training curriculum builds general language understanding, extends context handling, and then **specializes** on a very large corpus of commerce data to capture domain-specific terminology and entity distributions.

---

## Training Recipe

RexBERT-micro was trained in **three phases**:

1) **Pre-training**
   General-purpose MLM pre-training on diverse English text for robust linguistic representations.

2) **Context Extension**
   Continued training with **increased max sequence length** to better handle long product pages, concatenated attribute blocks, multi-turn queries, and facet strings. This preserves prior capabilities while expanding context handling.

3) **Decay on 350B+ e-commerce tokens**
   Final specialization stage on **350B+ domain-specific tokens** (product catalogs, queries, reviews, taxonomy/attributes). Learning rate and sampling weights are annealed (decayed) to consolidate domain knowledge and stabilize performance on commerce tasks.

**Training details (fill in):**
- Optimizer / LR schedule: TODO
- Effective batch size / steps per phase: TODO
- Context lengths per phase (e.g., 512 → 1k/2k): TODO
- Tokenizer/vocab: TODO
- Hardware & wall-clock: TODO
- Checkpoint tags: TODO (e.g., `pretrain`, `ext`, `decay`)

---

## Data Overview

- **Domain mix:**
- **Data quality:**

---

## Evaluation

### Performance Highlights

---

## Usage Examples

### 1) Masked language modeling
```python
from transformers import AutoModelForMaskedLM, AutoTokenizer, pipeline

m = AutoModelForMaskedLM.from_pretrained("thebajajra/RexBERT-micro")
t = AutoTokenizer.from_pretrained("thebajajra/RexBERT-micro")
fill = pipeline("fill-mask", model=m, tokenizer=t)

print(fill("Best [MASK] headphones under $100."))
```

### 2) Embeddings / feature extraction
```python
import torch
from transformers import AutoTokenizer, AutoModel

tok = AutoTokenizer.from_pretrained("thebajajra/RexBERT-micro")
enc = AutoModel.from_pretrained("thebajajra/RexBERT-micro")

texts = ["nike air zoom pegasus 40", "running shoes pegasus zoom nike"]
batch = tok(texts, padding=True, truncation=True, return_tensors="pt")

with torch.no_grad():
    out = enc(**batch)

# Mean-pool the last hidden state
attn = batch["attention_mask"].unsqueeze(-1)
emb = (out.last_hidden_state * attn).sum(1) / attn.sum(1)

# Normalize for cosine similarity (recommended for retrieval)
emb = torch.nn.functional.normalize(emb, p=2, dim=1)
```
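
Because the embeddings above are L2-normalized, cosine similarity is just a dot product; a short continuation of the snippet (the `emb` variable carries over from the block above):

```python
# Cosine similarity between the two example texts
sim = emb @ emb.T
print(sim[0, 1].item())
```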

### 3) Text classification fine-tune
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification, TrainingArguments, Trainer

NUM_LABELS = 3  # set to the number of classes in your dataset

tok = AutoTokenizer.from_pretrained("thebajajra/RexBERT-micro")
model = AutoModelForSequenceClassification.from_pretrained("thebajajra/RexBERT-micro", num_labels=NUM_LABELS)

# Prepare your Dataset objects: train_ds, val_ds (text → label)
args = TrainingArguments(
    output_dir="rexbert-micro-classifier",
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    learning_rate=3e-5,
    num_train_epochs=3,
    evaluation_strategy="steps",
    fp16=True,
    report_to="none",
    load_best_model_at_end=True,
)

trainer = Trainer(model=model, args=args, train_dataset=train_ds, eval_dataset=val_ds, tokenizer=tok)
trainer.train()
```
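
Once training finishes, the fine-tuned head can be served through the standard `text-classification` pipeline; a minimal sketch continuing from the block above (the example sentence and label names are whatever your dataset defines):

```python
from transformers import pipeline

clf = pipeline("text-classification", model=trainer.model, tokenizer=tok)
print(clf("Arrived broken and the seller never responded."))
```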
210
+
211
+ ---
212
+
213
+ ## Model Architecture & Compatibility
214
+
- **Architecture:** Encoder-only, ModernBERT-style **micro** model.
- **Libraries:** Works with **🤗 Transformers**; supports **fill-mask** and **feature-extraction** pipelines.
- **Context length:** Increased during the **Context Extension** phase—ensure `max_position_embeddings` in `config.json` matches your desired max length (see the snippet below).
- **Files:** `config.json`, tokenizer files, and (optionally) heads for MLM or classification.
- **Export:** Standard PyTorch weights; you can export ONNX / TorchScript for production if needed.
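
A quick way to confirm the usable context window before batching long product pages is to read it from the config and tokenizer; a minimal check (only the Hub ID comes from this card, the printed values depend on the released `config.json` and tokenizer files):

```python
from transformers import AutoConfig, AutoTokenizer

MODEL_ID = "thebajajra/RexBERT-micro"
cfg = AutoConfig.from_pretrained(MODEL_ID)
tok = AutoTokenizer.from_pretrained(MODEL_ID)

# Positions the encoder supports vs. the length the tokenizer will pad/truncate to
print("max_position_embeddings:", cfg.max_position_embeddings)
print("tokenizer model_max_length:", tok.model_max_length)
```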

---

## Responsible & Safe Use

- **Biases:** Commerce data can encode brand, price, and region biases; audit downstream classifiers/retrievers for disparate error rates across categories/regions.
- **Sensitive content:** Add filters for adult/regulated items; document moderation thresholds if you release classifiers.
- **Privacy:** Do not expose PII; ensure training data complies with terms and applicable laws.
- **Misuse:** This model is **not** a substitute for legal/compliance review of listings.

---

## License

- **License:** `apache-2.0`.

---

## Maintainers & Contact

- **Author/maintainer:** [Rahul Bajaj](https://huggingface.co/thebajajra)

---