---
language: en
tags:
- autotrain
datasets:
- lewtun/autotrain-data-acronym-identification
- acronym_identification
widget:
- text: I love AutoTrain 🤗
co2_eq_emissions: 10.435358044493652
model-index:
- name: autotrain-demo
  results:
  - task:
      type: token-classification
      name: Token Classification
    dataset:
      name: acronym_identification
      type: acronym_identification
      args: default
    metrics:
    - type: accuracy
      value: 0.9708090976211485
      name: Accuracy
  - task:
      type: token-classification
      name: Token Classification
    dataset:
      name: acronym_identification
      type: acronym_identification
      config: default
      split: train
    metrics:
    - type: accuracy
      value: 0.9790777669399117
      name: Accuracy
      verified: true
    - type: precision
      value: 0.9197835301644851
      name: Precision
      verified: true
    - type: recall
      value: 0.946479027789208
      name: Recall
      verified: true
    - type: f1
      value: 0.9329403493591477
      name: F1
      verified: true
    - type: loss
      value: 0.06360606849193573
      name: loss
      verified: true
  - task:
      type: token-classification
      name: Token Classification
    dataset:
      name: acronym_identification
      type: acronym_identification
      config: default
      split: validation
    metrics:
    - type: accuracy
      value: 0.9758059763069488
      name: Accuracy
      verified: true
      verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiOTE2M2JlMDBhNDVlMDNmMmUxYTllYTM3MTQyYWIyYmI4OWM4MWM2MDYzNWI5ZTY4YmE4ZTYxODYzNGY4ZTU4NSIsInZlcnNpb24iOjF9.lplj1GdLZOa7EQq_-eDkWMOndxM7I3JMInAPS8Lh1ym-6NZFe8HAVtPtw6uYE1kw3bYSFLiJeDhu17qr4W0LCQ
    - type: precision
      value: 0.9339528708927979
      name: Precision
      verified: true
      verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiYThjZDI5MTJjZDhmMzgxYmRlYmMxNjRhZTgyM2JkNmVjYWJiZmEyOTIxZDIzNTA0NWFiMmI0YjA0OWUwMmIzNyIsInZlcnNpb24iOjF9.ZGHY1xycw3oPLCswGga9vkSb-EIXjyf-euFLqPYh96B1N6LGRymo-r85cEwTp1Kr-uR6qVeUMrX2mmwL7O2wAw
    - type: recall
      value: 0.9157175398633257
      name: Recall
      verified: true
      verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiZjk5Mzk3MzRlYjg2ZWNmOTZlN2QwZjQ2Y2ZlNzU2NzljODU1NmRmNWVjMmIxODdhZTE4NmQwYTNkY2E2MGZlZSIsInZlcnNpb24iOjF9.JShPPAdznEKVso15qYmJ9OcdiV_W4pE1wrHVZ_A_-bkhboIUATN-rXK2KZmfq1nS3Jxip3qn9GYeFxP9tGU0Dg
    - type: f1
      value: 0.924745317121262
      name: F1
      verified: true
      verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiMzlmMDM2ODVmYmEwMDJhZGI5MWI3NjRlNzEwMWYyY2NiMmQ0NjdkZjc2NzFiZmI3Y2VmYWM0MjlmYzFlN2E5NCIsInZlcnNpb24iOjF9.rpqmuMdlxFevTn374B5_eGMlAF79VVGhRIx3ksd2p-2_kHyhRwnjxqs5elQpH_tIHUGSIM49hLqAFi14EKGiAA
    - type: loss
      value: 0.07582829147577286
      name: loss
      verified: true
      verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiZjYzMDlmNzNiMmRlYmY0ODZhMWM0ZmI3ODJhNDRiM2E1YjczMzBlNDNjM2I2ODBiMzc5NDEzZGIxMjAzMWM2YiIsInZlcnNpb24iOjF9.G7UXqxGnqnfgdze04NsyRsO-SAqNXV81kK7Jp_SYjVP_4gbL0ovV9JDd3hk_h0pWSMVNjgFhI6cwZGXZLsTJBw
    - type: auc
      value: NaN
      name: AUC
      verified: true
      verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiYWQyNGFiMDc1ZDgwNGQ5NjFmNzhmZDM4NTYwNzEyYzBhNDQxMzc2ZDczOGJhNWFhMGMwOWE4MTRhNTY3NGQxYiIsInZlcnNpb24iOjF9.WcXce6HBjzSEqOOJoVJU33kF-MjwVZj2a4OWuUd_GIv32NUJVueRWzzHdvUDI_KlPvvvamQJgqN-cCILFaTRAw
---
# Model Trained Using AutoTrain
- Problem type: Entity Extraction
- Model ID: 7324788
- CO2 Emissions (in grams): 10.435358044493652
## Validation Metrics
- Loss: 0.08991389721632004
- Accuracy: 0.9708090976211485
- Precision: 0.8998421675654347
- Recall: 0.9309429854401959
- F1: 0.9151284109149278
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/lewtun/autotrain-acronym-identification-7324788
```
Or use the Python API:
```
from transformers import AutoModelForTokenClassification, AutoTokenizer

# Load the fine-tuned model and its tokenizer (use_auth_token is only required for private repos)
model = AutoModelForTokenClassification.from_pretrained("lewtun/autotrain-acronym-identification-7324788", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("lewtun/autotrain-acronym-identification-7324788", use_auth_token=True)

# Tokenize an example sentence and run it through the model to get per-token logits
inputs = tokenizer("I love AutoTrain", return_tensors="pt")
outputs = model(**inputs)
```
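The `outputs` above hold raw per-token logits; to turn them into tags you would argmax over the label dimension and map the ids through `model.config.id2label`. A simpler route is the `transformers` token-classification pipeline, which also regroups sub-word tokens into spans. A minimal sketch (the example sentence is only illustrative, not from the original card):
```
from transformers import pipeline

# Token-classification pipeline; aggregation_strategy="simple" merges sub-word pieces into entity spans
ner = pipeline(
    "token-classification",
    model="lewtun/autotrain-acronym-identification-7324788",
    aggregation_strategy="simple",
)

# Each prediction carries the label (as defined in model.config.id2label), the matched text, and a score
for entity in ner("The World Health Organization (WHO) was founded in 1948."):
    print(entity["entity_group"], entity["word"], round(entity["score"], 3))
```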