Token Classification · Transformers · TensorBoard · Safetensors · xlm-roberta · Generated from Trainer · language-identification · codeswitching
Instructions to use DerivedFunction/polyglot-tagger-v2.2 with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- Transformers
How to use DerivedFunction/polyglot-tagger-v2.2 with Transformers:

```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("token-classification", model="DerivedFunction/polyglot-tagger-v2.2")
```

```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForTokenClassification

tokenizer = AutoTokenizer.from_pretrained("DerivedFunction/polyglot-tagger-v2.2")
model = AutoModelForTokenClassification.from_pretrained("DerivedFunction/polyglot-tagger-v2.2")
```

- Notebooks
- Google Colab
- Kaggle
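For code-switched text, a token-classification pipeline returns one prediction per token, so contiguous tokens sharing a language label are usually merged into spans for readability. The sketch below shows that post-processing step on hand-written sample predictions shaped like the pipeline's output dicts; the label names (`en`, `es`) are hypothetical — check the model's `config.json` for its actual label set.

```python
# Sketch: merge consecutive token-level language predictions into spans.
# `preds` mimics the list of dicts a token-classification pipeline returns;
# the "en"/"es" labels are placeholders, not the model's confirmed label set.

def group_spans(preds):
    """Merge consecutive tokens that share a label into one character span."""
    spans = []
    for p in preds:
        if spans and spans[-1]["label"] == p["entity"]:
            # Same language as the previous token: extend the current span.
            spans[-1]["end"] = p["end"]
        else:
            # Language changed: start a new span.
            spans.append({"label": p["entity"], "start": p["start"], "end": p["end"]})
    return spans

# Example: an English-to-Spanish switch across three tokens.
preds = [
    {"entity": "en", "start": 0, "end": 1},
    {"entity": "en", "start": 2, "end": 6},
    {"entity": "es", "start": 7, "end": 11},
]
print(group_spans(preds))
```

The same idea is what `aggregation_strategy="simple"` does for named-entity pipelines; rolling it by hand keeps full control over how boundary tokens are assigned.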
Update README.md

README.md (changed):

```diff
@@ -5,6 +5,7 @@ base_model: xlm-roberta-base
 tags:
 - generated_from_trainer
 - language-identification
+- codeswitching
 metrics:
 - precision
 - recall
```