Tags: Text Classification · Transformers · Safetensors · English · emcoder · feature-extraction · emotion-recognition · bayesian-deep-learning · mc-dropout · uncertainty-quantification · multi-label-classification · custom_code · Eval Results (legacy)
Instructions to use yezdata/EmCoder with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- Transformers
How to use yezdata/EmCoder with Transformers (a short usage sketch follows the Notebooks list below):

```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-classification", model="yezdata/EmCoder", trust_remote_code=True)

# Load model directly
from transformers import AutoModel

model = AutoModel.from_pretrained("yezdata/EmCoder", trust_remote_code=True, dtype="auto")
```

- Notebooks
- Google Colab
- Kaggle
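As a quick sanity check, the pipeline loaded above can be called directly on raw text. The snippet below is a minimal sketch; the label name and score shown in the comment are hypothetical and depend on EmCoder's own config and custom code:

```python
# Hypothetical call to the pipeline created in the snippet above.
# Actual labels come from EmCoder's id2label mapping, so treat the
# output shown in the comment as illustrative only.
result = pipe("I can't believe how well this turned out!")
print(result)
# e.g. [{'label': 'joy', 'score': 0.93}]
```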
Update README.md
README.md CHANGED
````diff
@@ -18,7 +18,7 @@ metrics:
 - recall
 - f1
 model-index:
-- name: EmCoder
+- name: EmCoder
   results:
   - task:
       type: text-classification
@@ -56,14 +56,14 @@ EmCoder is optimized for **MC Dropout inference**.
 EmCoder achieves competitive F1-scores while being ~35% smaller than RoBERTa-base and ~45% smaller than ModernBERT, offering a superior efficiency-to-uncertainty ratio.
 | Model | Precision | Recall | F1-Score | Params |
 | :--- | :--- | :--- | :--- | :--- |
-| **EmCoder
+| **EmCoder** | **0.408** | **0.495** | **0.440** | **82.1M** |
 | Google BERT (Original) | 0.400 | 0.630 | 0.460 | 110M |
 | RoBERTa-base | 0.575 | 0.396 | 0.450 | 125M |
 | ModernBERT-base | 0.652 | 0.443 | 0.500 | 149M |
 
 
 ## How to use
-EmCoder
+EmCoder uses the `roberta-base` tokenizer for correct token-to-embedding mapping.
 ### 1. Setup & Tokenization
 ```python
 import torch
````
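For context on the MC Dropout inference mentioned in the diff, the sketch below illustrates the general technique: keep dropout layers active at inference and average several stochastic forward passes to obtain a predictive mean plus an uncertainty estimate. It is an illustration only, not EmCoder's actual custom code; the pooling step and the pairing with the `roberta-base` tokenizer are assumptions based on the README text above.

```python
import torch
from transformers import AutoModel, AutoTokenizer

# Assumption: EmCoder pairs with the roberta-base tokenizer, as the
# updated README states; the model itself loads via trust_remote_code.
tokenizer = AutoTokenizer.from_pretrained("roberta-base")
model = AutoModel.from_pretrained("yezdata/EmCoder", trust_remote_code=True)

inputs = tokenizer("I can't stop smiling today!", return_tensors="pt")

# MC Dropout: train() keeps dropout sampling active at inference time.
# (Illustrative shortcut: a careful implementation would re-enable only
# the dropout modules and keep everything else in eval mode.)
model.train()

T = 20  # number of stochastic forward passes
with torch.no_grad():
    # Assumption: the model returns a last_hidden_state like a standard
    # encoder; we pool the first token as a sentence representation.
    samples = torch.stack([
        model(**inputs).last_hidden_state[:, 0, :] for _ in range(T)
    ])

mean = samples.mean(dim=0)  # predictive mean over the T passes
std = samples.std(dim=0)    # per-dimension uncertainty estimate
print(mean.shape, std.mean().item())
```

High variance across the T passes flags inputs where the model is uncertain, which is the property the mc-dropout and uncertainty-quantification tags refer to.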