Token Classification
Transformers
TensorBoard
Safetensors
xlm-roberta
Generated from Trainer
language-identification
codeswitching
Instructions for using DerivedFunction/polyglot-tagger-v2.2 with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- Transformers
How to use DerivedFunction/polyglot-tagger-v2.2 with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("token-classification", model="DerivedFunction/polyglot-tagger-v2.2")

# Load model directly
from transformers import AutoTokenizer, AutoModelForTokenClassification

tokenizer = AutoTokenizer.from_pretrained("DerivedFunction/polyglot-tagger-v2.2")
model = AutoModelForTokenClassification.from_pretrained("DerivedFunction/polyglot-tagger-v2.2")
```
- Notebooks
- Google Colab
- Kaggle
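Building on the Transformers snippet above, here is a minimal sketch of running the pipeline on a code-switched sentence. The example text, the `aggregation_strategy` setting, and the assumption that the model emits per-token language labels are illustrative guesses based on the model's tags, not documented behavior of this checkpoint:

```python
# Minimal usage sketch; assumes the model returns per-token language labels,
# as suggested by the language-identification / codeswitching tags.
from transformers import pipeline

pipe = pipeline(
    "token-classification",
    model="DerivedFunction/polyglot-tagger-v2.2",
    aggregation_strategy="simple",  # merge consecutive tokens that share a label
)

# Hypothetical code-switched input (English + Spanish), for illustration only.
text = "I went to the mercado to buy some pan dulce."
for span in pipe(text):
    # Each aggregated entry carries the predicted label, a confidence score,
    # and character offsets back into the original text.
    print(span["entity_group"], round(span["score"], 3), text[span["start"]:span["end"]])
```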
Update README.md
README.md (changed):
```diff
@@ -4,6 +4,7 @@ license: mit
 base_model: xlm-roberta-base
 tags:
 - generated_from_trainer
+- language-identification
 metrics:
 - precision
 - recall
@@ -341,4 +342,4 @@ The following hyperparameters were used during training:
 - Transformers 5.0.0
 - Pytorch 2.10.0+cu128
 - Datasets 4.0.0
-- Tokenizers 0.22.2
+- Tokenizers 0.22.2
```