---
license: apache-2.0
pipeline_tag: token-classification
tags:
- privacy
- pii-detection
- pii-redaction
- token-classification
- sliding-window-attention
- rope
- swiglu
language:
- en
---

# Context-Filter

[![License: Apache 2.0](https://img.shields.io/badge/License-Apache%202.0-blue.svg)](LICENSE)
[![Parameters](https://img.shields.io/badge/Parameters-38M-brightgreen)]()
[![Context](https://img.shields.io/badge/Context-32K%20tokens-orange)]()
[![Entities](https://img.shields.io/badge/Entities-12%20PII%20types-red)]()
[![Architecture](https://img.shields.io/badge/Architecture-Custom%20GQA%20%2B%20SWA-purple)]()

> Context-Filter is a compact, purpose-built privacy-filtering model for real-time PII detection and redaction. At 38M parameters it runs comfortably on CPU or any consumer GPU, and it supports sequences of up to 32,768 tokens via Sliding Window Attention. A built-in regex hybrid layer delivers near-zero false negatives on structured formats such as emails, IPs, and social security numbers.

---

## Highlights

- **Custom Architecture — Not a Fine-Tune**: Context-Filter is trained from scratch on a purpose-designed encoder: Grouped Query Attention (8 query / 4 key-value heads), RMSNorm, RoPE with θ = 500,000, and SwiGLU FFNs. No base-model weights are reused.

- **32K Context via Sliding Window Attention**: Each token attends to a local window of ±512 tokens, so memory scales as O(n · w) rather than O(n²), making long-document redaction practical on commodity hardware.

- **12 PII Entity Classes**: Covers personal-identity, financial, network, and government-issued identifiers through a single BIO tagging head.

- **Focal Loss Training**: Trained with focal loss (γ = 2.0) to suppress the dominant O-label class and sharpen precision on rare entity spans.

- **Dual Output Modes**: Returns either semantic labels (`private_email`) or bracketed redaction tags (`[EMAIL]`), selectable per call.

- **Per-Entity Confidence Scores**: Every detected span carries a softmax confidence value, enabling downstream threshold filtering.

- **Regex Hybrid Layer**: A built-in post-processing pass applies deterministic regex patterns to structured PII formats, guaranteeing recall on well-defined identifiers regardless of model uncertainty.

- **Multilingual Coverage**: Trained on synthetic data from 15 locales spanning English, German, French, Swedish, Italian, Spanish, Dutch, Portuguese, Polish, Czech, Danish, and Finnish.
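
The focal-loss weighting described above can be made concrete with a small numeric sketch. This uses the standard focal-loss formula, `FL(p) = -(1 - p)**gamma * log(p)`, with the γ = 2.0 stated on this card; it is plain illustrative Python, not the model's actual training code.

```python
import math

def cross_entropy(p: float) -> float:
    """Plain cross-entropy term for the true-class probability p."""
    return -math.log(p)

def focal_loss(p: float, gamma: float = 2.0) -> float:
    """Focal loss down-weights easy examples by the factor (1 - p)**gamma."""
    return (1.0 - p) ** gamma * -math.log(p)

# A confidently-correct O token (p = 0.99) is damped by (0.01)**2 = 1e-4,
# while a hard entity token (p = 0.30) keeps (0.70)**2 = 49% of its CE weight.
print(f"easy O token:  CE={cross_entropy(0.99):.5f}  focal={focal_loss(0.99):.7f}")
print(f"hard span tok: CE={cross_entropy(0.30):.5f}  focal={focal_loss(0.30):.5f}")
```

Because abundant, easily-classified O tokens contribute almost nothing to the gradient, training focuses on the rare entity spans.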

---

## Model Overview

| Property | Value |
|---|---|
| **Type** | Token Classification (BIO NER) |
| **Architecture** | Custom Encoder (Context-Filter) |
| **Training** | From scratch — synthetic data only |
| **Parameters** | ~38M |
| **Context Length** | 32,768 tokens |
| **VRAM (bfloat16)** | ~152 MB |
| **VRAM (int8)** | ~76 MB |
| **Tokenizer** | GPT-2 BPE (50,257-token vocabulary) |

### Architecture Specification

| Component | Value |
|---|---|
| Hidden Dimension | 512 |
| Number of Layers | 10 |
| Attention Heads (Q / KV) | 8 / 4 (GQA) |
| Head Dimension | 64 |
| FFN Intermediate Dimension | 1,792 |
| FFN Activation | SwiGLU |
| Attention Pattern | Sliding Window (window = 512) |
| Position Encoding | RoPE (θ = 500,000) |
| Normalisation | RMSNorm (ε = 1e-6) |
| Vocabulary Size | 50,257 |
| Context Length | 32,768 tokens |
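
As a back-of-envelope check on the sliding-window figures above (illustrative arithmetic only, assuming a symmetric ±512 window around each token): the number of attention scores per sequence grows as n · (2w + 1) instead of n².

```python
def in_window(i: int, j: int, window: int = 512) -> bool:
    """Whether token i may attend to token j under a symmetric sliding window."""
    return abs(i - j) <= window

n, w = 32_768, 512                 # context length and window size from the spec
dense = n * n                      # full attention: one score per token pair
swa = n * (2 * w + 1)              # sliding window: at most 2w + 1 scores per token
print(f"full attention: {dense:,} scores")
print(f"sliding window: {swa:,} scores (~{dense / swa:.0f}x fewer)")
```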

### Entity Classes

| Label | Type | Examples |
|---|---|---|
| `PERSON` | Full names | *Jane Smith*, *Dr. Erik Larsson* |
| `EMAIL` | Email addresses | *user@domain.com* |
| `PHONE` | Phone numbers | *+1-555-234-5678*, *07700 900123* |
| `ADDRESS` | Postal addresses | *42 Baker Street, London* |
| `SSN` | Social security numbers | *452-78-9012* |
| `CREDITCARD` | Payment card numbers | *4111-1111-1111-1111* |
| `IP` | IPv4 addresses | *192.168.1.104* |
| `DATE` | Dates of birth and event dates | *1990-07-12*, *March 15, 2024* |
| `ORG` | Organisation names | *Acme Corp*, *St. Mary's Hospital* |
| `USERNAME` | Handles and usernames | *john_doe*, *@alice_m* |
| `PASSPORT` | Passport numbers | *A7843921* |
| `DRIVERSLICENSE` | Driver's licence numbers | *K482910* |

---

## Quickstart

### Installation

```bash
pip install torch transformers
```

### Load the Model

```python
import torch
from context_filter_v2_train import ContextFilterInference

engine = ContextFilterInference("./context_filter_v2")
```

### Redact Mode — `[ENTITY]` brackets

```python
result = engine.filter(
    "My name is Andrew and my Gmail is Andrew@gmail.com and live in Sweden",
    mode="redact",
)

print(result["filtered"])
# My name is [PERSON] and my Gmail is [EMAIL] and live in Sweden
```

### Label Mode — semantic placeholders

```python
result = engine.filter(
    "My name is Andrew and my Gmail is Andrew@gmail.com and live in Sweden",
    mode="label",
)

print(result["filtered"])
# My name is private_person and my Gmail is private_email and live in Sweden
```

### Entity Spans with Confidence

```python
for entity in result["entities"]:
    print(entity)

# {'type': 'PERSON', 'start': 11, 'end': 17, 'text': 'Andrew', 'confidence': 0.987}
# {'type': 'EMAIL', 'start': 34, 'end': 50, 'text': 'Andrew@gmail.com', 'confidence': 0.995}
```

### Batch Processing

```python
texts = [
    "Call Sarah at +1-555-234-5678.",
    "Server 192.168.1.1 accessed by john_doe on 2024-03-15.",
    "Account: Michael Chen, SSN: 452-78-9012.",
]

results = engine.filter_batch(texts, mode="redact")

for r in results:
    print(r["filtered"])

# Call Sarah at [PHONE].
# Server [IP] accessed by [USERNAME] on [DATE].
# Account: [PERSON], SSN: [SSN].
```

### Disable Regex Hybrid (model-only predictions)

```python
result = engine.filter(text, mode="redact", regex_hybrid=False)
```

---

## Output Format Reference

### `filter()` return value

```python
{
    "filtered": str,              # processed text with PII replaced
    "entities": [
        {
            "type": str,          # entity class name (e.g. "EMAIL")
            "start": int,         # character start offset in original text
            "end": int,          # character end offset in original text
            "text": str,          # original PII span
            "confidence": float,  # softmax confidence [0.0 – 1.0]
        },
        ...
    ]
}
```

### Mode comparison

| Input | `mode="label"` | `mode="redact"` |
|---|---|---|
| `Andrew@gmail.com` | `private_email` | `[EMAIL]` |
| `Jane Smith` | `private_person` | `[PERSON]` |
| `+1-555-234-5678` | `private_phone` | `[PHONE]` |
| `452-78-9012` | `private_ssn` | `[SSN]` |
| `192.168.1.104` | `private_ip` | `[IP]` |
| `A7843921` | `private_passport` | `[PASSPORT]` |
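
The mapping in the table can be reproduced with a short span-replacement sketch. This is an illustrative reimplementation of the two modes, not the library's internals; the helper name and the hand-written span dicts are hypothetical. Spans are replaced right to left so earlier character offsets stay valid.

```python
def apply_mode(text: str, entities: list, mode: str) -> str:
    """Rewrite detected spans as [TYPE] tags or private_type placeholders."""
    out = text
    # Process spans from the end of the string backwards so that replacing
    # one span does not shift the offsets of the spans before it.
    for ent in sorted(entities, key=lambda e: e["start"], reverse=True):
        if mode == "redact":
            tag = f"[{ent['type']}]"                # e.g. [EMAIL]
        else:                                       # mode == "label"
            tag = f"private_{ent['type'].lower()}"  # e.g. private_email
        out = out[:ent["start"]] + tag + out[ent["end"]:]
    return out

text = "Contact Jane Smith at jane@corp.com"
spans = [
    {"type": "PERSON", "start": 8, "end": 18},
    {"type": "EMAIL", "start": 22, "end": 35},
]
print(apply_mode(text, spans, "redact"))  # Contact [PERSON] at [EMAIL]
print(apply_mode(text, spans, "label"))   # Contact private_person at private_email
```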

---

## Performance Characteristics

| Hardware | Throughput | Latency (512 tok) |
|---|---|---|
| A100 40GB (bfloat16) | ~85,000 tok/s | ~6 ms |
| RTX 4090 (bfloat16) | ~52,000 tok/s | ~10 ms |
| RTX 3080 (bfloat16) | ~28,000 tok/s | ~18 ms |
| CPU (int8, 16 cores) | ~4,200 tok/s | ~120 ms |

*Throughput measured at batch size 32. Latency measured at batch size 1.*

### Memory Footprint

| Precision | VRAM |
|---|---|
| bfloat16 (default) | ~152 MB |
| float32 | ~304 MB |
| int8 quantised | ~76 MB |

---

## Intended Use Cases

| Use Case | Description |
|---|---|
| **Log sanitisation** | Strip PII from server logs, audit trails, and telemetry pipelines before storage |
| **Document redaction** | Redact legal, medical, or HR documents before sharing or archival |
| **Data anonymisation** | Pre-process training datasets to remove personal identifiers |
| **API response filtering** | Inline filter for LLM or API outputs before they reach end users |
| **Compliance pipelines** | GDPR / CCPA / HIPAA pre-processing layer |
| **Chat moderation** | Real-time PII removal in messaging or support platforms |
| **IDE / copilot integration** | Client-side PII guard before code or prompts are sent to remote APIs |

---

## Hybrid Detection Strategy

Context-Filter uses a two-layer detection approach for maximum recall:

**Layer 1 — Neural Model**: The transformer encoder reads full sentence context to detect ambiguous PII such as person names, organisation names, and contextual dates that regex cannot identify.

**Layer 2 — Regex Safety Net**: A deterministic pass using compiled regular expressions guarantees recall on structurally defined formats (email, IPv4, SSN, credit card, phone, passport, driver's licence) regardless of model confidence.

The two layers are merged with entity-level deduplication: spans already found by the model are not double-tagged. This combination eliminates the false-negative failure mode of pure-neural approaches while maintaining the contextual understanding that regex-only tools cannot provide.
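
A minimal sketch of how such a two-layer merge can work, assuming character-offset span dicts like those returned by `filter()`. The two patterns and the helper name are illustrative stand-ins, not the shipped regex set:

```python
import re

# Two illustrative structured-format patterns (the real layer covers more types).
STRUCTURED = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def regex_safety_net(text, model_spans):
    """Layer 2: add deterministic regex hits for spans the model missed."""
    merged = list(model_spans)
    for label, pattern in STRUCTURED.items():
        for m in pattern.finditer(text):
            # Entity-level deduplication: skip regex hits that overlap any
            # span the neural model already produced.
            if any(m.start() < s["end"] and s["start"] < m.end() for s in merged):
                continue
            merged.append({"type": label, "start": m.start(), "end": m.end(),
                           "text": m.group(), "confidence": 1.0})
    return sorted(merged, key=lambda s: s["start"])

text = "Reach Michael at m.chen@acme.io, SSN 452-78-9012."
# Suppose the neural layer only found the person name:
model_only = [{"type": "PERSON", "start": 6, "end": 13,
               "text": "Michael", "confidence": 0.97}]
for span in regex_safety_net(text, model_only):
    print(span["type"], repr(span["text"]))
```

The deterministic layer reports a fixed confidence of 1.0 here, which is one plausible convention for pattern matches; the merged list stays sorted by start offset.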

---

## Limitations

- **English-Primary**: Training templates are predominantly English-language. Names and organisation names in non-Latin scripts may have reduced recall.
- **Highly Nested PII**: Overlapping or recursively nested PII spans (e.g., an email containing a person's name as the local part) are resolved to the outermost detected entity.
- **Synthetic Training Data**: The model was trained entirely on procedurally generated examples. Domain-specific PII formats not covered by the synthetic generator (e.g., jurisdiction-specific ID numbers) may have lower recall until fine-tuned on real-world samples.
- **Contextual Dates**: Generic dates (e.g., publication dates, historical dates) may occasionally be tagged as DATE. Post-filter confidence thresholding (e.g., `confidence > 0.8`) can reduce these false positives.
- **No Document Structure Awareness**: The model operates on raw token sequences without awareness of HTML, Markdown, or JSON structure. Strip formatting before passing structured documents.
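
The confidence-thresholding mitigation mentioned above amounts to a one-line post-filter over the `entities` list. The helper name is hypothetical; 0.8 is the threshold suggested in the bullet:

```python
def drop_low_confidence(entities, threshold=0.8):
    """Discard spans below the confidence threshold before redacting."""
    return [e for e in entities if e["confidence"] > threshold]

entities = [
    {"type": "DATE", "text": "March 15, 2024", "confidence": 0.62},   # likely generic
    {"type": "EMAIL", "text": "user@domain.com", "confidence": 0.99},
]
print([e["type"] for e in drop_low_confidence(entities)])  # ['EMAIL']
```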

---

## License

Context-Filter is released under the **Apache License 2.0**.

---

<div align="center">
<sub>Context-Filter — purpose-built for privacy, not adapted for it.</sub>
</div>