Tags: Text Classification, Transformers, Safetensors, English, emcoder, feature-extraction, emotion-recognition, bayesian-deep-learning, mc-dropout, uncertainty-quantification, multi-label-classification, custom_code
Below are instructions for using yezdata/EmCoder with libraries and notebooks.
- Libraries
- Transformers
How to use yezdata/EmCoder with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-classification", model="yezdata/EmCoder", trust_remote_code=True)

# Load model directly
from transformers import AutoModel

model = AutoModel.from_pretrained("yezdata/EmCoder", trust_remote_code=True, dtype="auto")
```

- Notebooks
- Google Colab
- Kaggle
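The model's tags advertise MC dropout and uncertainty quantification. As a rough illustration of that technique, the sketch below runs repeated stochastic forward passes with dropout left active and reports the per-label mean and standard deviation. It uses a tiny stand-in classifier rather than EmCoder itself; the class name, layer sizes, and the sigmoid multi-label readout are all assumptions for the sake of a self-contained example.

```python
import torch
import torch.nn as nn

# Tiny stand-in for a classifier with a dropout layer. EmCoder's actual
# architecture is not shown here; this model exists only to demonstrate
# the MC-dropout procedure end to end.
class TinyClassifier(nn.Module):
    def __init__(self, in_dim=16, num_labels=4):
        super().__init__()
        self.backbone = nn.Linear(in_dim, 32)
        self.dropout = nn.Dropout(p=0.3)
        self.head = nn.Linear(32, num_labels)

    def forward(self, x):
        return self.head(self.dropout(torch.relu(self.backbone(x))))

def mc_dropout_predict(model, x, n_samples=20):
    """Run n_samples stochastic forward passes with dropout active."""
    model.train()  # train mode keeps dropout stochastic at inference time
    with torch.no_grad():
        # Sigmoid per label, matching the multi-label tag on the model card.
        probs = torch.stack([torch.sigmoid(model(x)) for _ in range(n_samples)])
    # Predictive mean, plus std as a simple per-label uncertainty estimate.
    return probs.mean(dim=0), probs.std(dim=0)

torch.manual_seed(0)
model = TinyClassifier()
x = torch.randn(2, 16)
mean, std = mc_dropout_predict(model, x)
print(mean.shape, std.shape)  # torch.Size([2, 4]) torch.Size([2, 4])
```

Labels with a high standard deviation across the sampled passes are the ones the model is least certain about, which is the usual motivation for MC dropout in emotion recognition.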
```json
{
  "bayesian_train": true,
  "loss_weights": "log",
  "tokenized_ds_dir": "data/goemotions_v1_seq512",
  "encoder_lr": 0.00001,
  "head_lr": 0.0005,
  "lr_warmup": 0.05,
  "weight_decay": 0.01,
  "batch_size": 32,
  "gradient_accumulation_steps": 8,
  "num_epochs": 10
}
```
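The config's separate `encoder_lr` and `head_lr` suggest discriminative learning rates, and `batch_size` with `gradient_accumulation_steps` implies an effective batch of 32 × 8 = 256 per optimizer step. A minimal sketch of how those values could map onto an optimizer follows; the two-part model and parameter-group layout are assumptions, not EmCoder's actual training code.

```python
import torch
import torch.nn as nn

# Stand-in two-part model (encoder + classification head); the names and
# shapes are illustrative, not taken from EmCoder.
model = nn.ModuleDict({
    "encoder": nn.Linear(16, 32),
    "head": nn.Linear(32, 4),
})

# Discriminative learning rates matching the config: a small rate for the
# pretrained encoder, a larger one for the freshly initialized head.
optimizer = torch.optim.AdamW(
    [
        {"params": model["encoder"].parameters(), "lr": 1e-5},   # encoder_lr
        {"params": model["head"].parameters(), "lr": 5e-4},      # head_lr
    ],
    weight_decay=0.01,  # weight_decay from the config
)

# batch_size=32 with gradient_accumulation_steps=8 gives the effective
# batch size per optimizer step.
effective_batch = 32 * 8
print(effective_batch)  # 256
```

Gradient accumulation trades memory for batch size: gradients from 8 micro-batches of 32 are summed before a single `optimizer.step()`, so the update statistics behave as if the batch were 256.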