Text Classification
Transformers
Safetensors
English
emcoder
feature-extraction
emotion-recognition
bayesian-deep-learning
mc-dropout
uncertainty-quantification
multi-label-classification
custom_code
Eval Results (legacy)
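The `mc-dropout` and `uncertainty-quantification` tags refer to Monte Carlo dropout, where dropout stays active at inference time and multiple stochastic forward passes yield a predictive mean plus an uncertainty estimate. A minimal illustrative sketch with a toy multi-label classifier — this is not EmCoder's actual architecture, just the general technique the tags name:

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Toy multi-label classifier standing in for EmCoder (assumption: the real
# model applies dropout before its classification head).
model = nn.Sequential(
    nn.Linear(8, 16),
    nn.ReLU(),
    nn.Dropout(p=0.3),
    nn.Linear(16, 4),  # 4 independent emotion labels
)

def mc_dropout_predict(model, x, n_samples=50):
    """Run n_samples stochastic forward passes with dropout enabled."""
    model.train()  # keep dropout active at inference time
    with torch.no_grad():
        # Sigmoid per label: multi-label, not softmax over classes
        probs = torch.stack([torch.sigmoid(model(x)) for _ in range(n_samples)])
    return probs.mean(dim=0), probs.std(dim=0)  # predictive mean, uncertainty

x = torch.randn(1, 8)
mean, std = mc_dropout_predict(model, x)
```

High `std` on a label signals that the model's prediction for it is unreliable, which is the practical payoff of the Bayesian-deep-learning framing.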
Instructions to use yezdata/EmCoder with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- Transformers
How to use yezdata/EmCoder with Transformers:

```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-classification", model="yezdata/EmCoder", trust_remote_code=True)
```

```python
# Load the model directly
from transformers import AutoModel

model = AutoModel.from_pretrained("yezdata/EmCoder", trust_remote_code=True, dtype="auto")
```

- Notebooks
- Google Colab
- Kaggle
Update README.md

README.md (changed, @@ -63,8 +63,8 @@):

EmCoder achieves competitive F1-score with its compact size (~35% smaller than R

## How to use

### 1. Setup & Tokenization

EmCoder uses the `roberta-base` tokenizer for correct token-to-embedding mapping.

```python
import torch
from transformers import AutoModel, AutoTokenizer
```
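The README snippet above is cut off after the imports. A hedged sketch of how the setup might continue, assuming the standard `from_pretrained` API; `trust_remote_code=True` mirrors the pipeline snippet earlier on this page, and the CLS-style pooling at the end is an assumption, not documented behavior:

```python
import torch
from transformers import AutoModel, AutoTokenizer

MODEL_ID = "yezdata/EmCoder"  # model id from this page

def load(model_id: str = MODEL_ID):
    # EmCoder reuses the roberta-base tokenizer (per the README);
    # trust_remote_code=True is needed because the repo ships custom model code.
    tokenizer = AutoTokenizer.from_pretrained("roberta-base")
    model = AutoModel.from_pretrained(model_id, trust_remote_code=True)
    model.eval()
    return tokenizer, model

def embed(texts, tokenizer, model):
    # Tokenize a batch and run one forward pass.
    batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
    with torch.no_grad():
        out = model(**batch)
    # CLS-style pooling over the first token (assumption)
    return out.last_hidden_state[:, 0]

if __name__ == "__main__":
    tokenizer, model = load()
    print(embed(["I am thrilled about this!"], tokenizer, model).shape)
```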