Tags: Text Classification, Transformers, Safetensors, English, bert, fill-mask, BERT, NeuroBERT, transformer, pre-training, nlp, tiny-bert, edge-ai, low-resource, micro-nlp, quantized, iot, wearable-ai, offline-assistant, intent-detection, real-time, smart-home, embedded-systems, command-classification, toy-robotics, voice-ai, eco-ai, english, lightweight, mobile-nlp, ner
Update README.md
README.md CHANGED
@@ -56,7 +56,7 @@ library_name: transformers
 [](#)
 [](#)
 
-Say hello to `NeuroBERT-Mini`, the **game-changing NLP model** that brings **world-class performance** to **low-resource devices**! Fine-tuned from the robust `google-bert/bert-base-uncased`, this **ultra-compact** model weighs in at just **~35MB** with **~
+Say hello to `NeuroBERT-Mini`, the **game-changing NLP model** that brings **world-class performance** to **low-resource devices**! Fine-tuned from the robust `google-bert/bert-base-uncased`, this **ultra-compact** model weighs in at just **~35MB** with **~10M parameters**, delivering an **outstanding ~95% accuracy** on tasks like masked language modeling, NER, and text classification. Perfect for **IoT devices**, **mobile apps**, **wearables**, and **edge AI systems**, NeuroBERT-Mini is your ticket to **fast, offline, and context-aware** NLP in 2025! 🌟
 
 ---
 
@@ -179,7 +179,7 @@ Input: The capital of France is [MASK].
 - **MNLI (MultiNLI)**: Built for natural language inference.
 - **All-NLI**: Enhanced with extra NLI data for smarter understanding.
 
-*Fine-Tuning Brilliance*: Starting from `google-bert/bert-base-uncased` (12 layers, 768 hidden, 110M parameters), NeuroBERT-Mini was fine-tuned to a streamlined 4 layers, 256 hidden, and ~
+*Fine-Tuning Brilliance*: Starting from `google-bert/bert-base-uncased` (12 layers, 768 hidden, 110M parameters), NeuroBERT-Mini was fine-tuned to a streamlined 4 layers, 256 hidden, and ~10M parameters, creating a compact yet powerful NLP solution for edge AI! 🪄
 
 ---
 
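The second hunk's context line shows the README's masked-language-modeling example (`The capital of France is [MASK].`). A minimal sketch of that use case with the Transformers `fill-mask` pipeline; the repo id `boltuix/NeuroBERT-Mini` is an assumption based on the model name on this page, not something the diff confirms:

```python
# Hypothetical usage sketch: run the README's fill-mask example.
# Repo id "boltuix/NeuroBERT-Mini" is assumed, not confirmed by the diff.
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="boltuix/NeuroBERT-Mini")

for pred in fill_mask("The capital of France is [MASK]."):
    # Each prediction carries the filled-in token string and its score.
    print(f"{pred['token_str']:>10}  score={pred['score']:.4f}")
```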
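The headline figures the commit fills in (4 layers, 256 hidden, ~10M parameters) are easy to sanity-check after loading the checkpoint. A sketch, again assuming the `boltuix/NeuroBERT-Mini` repo id:

```python
# Sanity-check sketch for the architecture figures quoted in the diff.
from transformers import AutoConfig, AutoModelForMaskedLM

repo_id = "boltuix/NeuroBERT-Mini"  # assumed repo id

# Layer count and hidden size come straight from the model config.
config = AutoConfig.from_pretrained(repo_id)
print(config.num_hidden_layers, config.hidden_size)  # expected: 4, 256 per the README

# Total parameter count should land near the README's ~10M claim.
model = AutoModelForMaskedLM.from_pretrained(repo_id)
n_params = sum(p.numel() for p in model.parameters())
print(f"~{n_params / 1e6:.0f}M parameters")
```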