Text Classification
Transformers
Safetensors
English
emcoder
feature-extraction
emotion-recognition
bayesian-deep-learning
mc-dropout
uncertainty-quantification
multi-label-classification
custom_code
Eval Results (legacy)
Instructions for using yezdata/EmCoder with libraries, inference providers, notebooks, and local apps. Follow the links below to get started.
- Libraries
- Transformers
How to use yezdata/EmCoder with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-classification", model="yezdata/EmCoder", trust_remote_code=True)

# Load model directly
from transformers import AutoModel

model = AutoModel.from_pretrained("yezdata/EmCoder", trust_remote_code=True, dtype="auto")
```
- Notebooks
- Google Colab
- Kaggle
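As a rough usage sketch for the Transformers snippet above (assuming EmCoder's custom pipeline follows the standard `text-classification` output format and returns a label/score pair per emotion when `top_k=None` is passed; the example sentence and label handling are illustrative, not documented behaviour):

```python
# Hypothetical usage sketch: per-label scoring behaviour is an assumption
# about EmCoder's custom pipeline, not documented output.
from transformers import pipeline

pipe = pipeline("text-classification", model="yezdata/EmCoder", trust_remote_code=True)

# top_k=None asks the pipeline to return a score for every emotion label
# instead of only the single highest-scoring one (useful for multi-label output).
results = pipe("Thanks so much, this genuinely made my day!", top_k=None)
for r in results:
    print(f"{r['label']}: {r['score']:.3f}")
```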
Update README.md
README.md (changed):

```diff
@@ -173,9 +173,14 @@
-
+**Model uncertainty quantification on GoEmotions test set**
+
+The distribution demonstrates strong calibration, as the highest error density correlates with increased epistemic uncertainty. While most high-probability predictions are correct, a small fraction of overconfident incorrect predictions remains, likely due to dataset bias or linguistic nuances such as sarcasm. These outliers identify a clear opportunity for further refinement using **temperature scaling**.
+
 [uncertainty quantification plot]
 
+
 **Confusion matrix**
 [confusion matrix plot]
```
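The README section added in this commit discusses epistemic uncertainty of EmCoder's predictions on the GoEmotions test set, and the model's tags mention MC dropout. Below is a minimal sketch of how such MC-dropout uncertainty estimates are typically obtained; the output attribute (`out.logits`), sigmoid multi-label scoring, and the number of stochastic passes are assumptions made for illustration, not EmCoder's documented implementation.

```python
# Hypothetical sketch of MC-dropout uncertainty estimation.
# Loading "yezdata/EmCoder" via AutoModel comes from the model card; the head
# layout and multi-label scoring below are assumptions, not the model's API.
import torch
from transformers import AutoModel, AutoTokenizer

model_id = "yezdata/EmCoder"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModel.from_pretrained(model_id, trust_remote_code=True)

def mc_dropout_predict(text: str, n_samples: int = 30):
    """Run several stochastic forward passes with dropout active and return
    the per-label predictive mean and its standard deviation (a common proxy
    for epistemic uncertainty)."""
    inputs = tokenizer(text, return_tensors="pt")
    model.train()  # keep dropout layers active at inference time (the core of MC dropout)
    probs = []
    with torch.no_grad():
        for _ in range(n_samples):
            out = model(**inputs)
            # Assumption: the custom head exposes per-label logits as `out.logits`
            # and the task is multi-label, so sigmoid (not softmax) is applied.
            probs.append(torch.sigmoid(out.logits))
    probs = torch.stack(probs)   # (n_samples, 1, n_labels)
    mean = probs.mean(dim=0)     # predictive mean per label
    std = probs.std(dim=0)       # spread across passes ~ epistemic uncertainty
    return mean.squeeze(0), std.squeeze(0)

mean_probs, uncertainty = mc_dropout_predict("I can't believe this actually worked!")
print(mean_probs, uncertainty)
```

Labels whose standard deviation across passes is large relative to their mean probability would be the overconfident or ambiguous cases the added README paragraph describes, and are the natural targets for post-hoc calibration such as temperature scaling.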