---
tags: autotrain
language: en
widget:
- text: I am still waiting on my card?
datasets:
- banking77
model-index:
- name: BERT-Banking77
  results:
  - task:
      name: Text Classification
      type: text-classification
    dataset:
      name: BANKING77
      type: banking77
    metrics:
    - name: Accuracy
      type: accuracy
      value: 92.64
    - name: Macro F1
      type: macro-f1
      value: 92.64
    - name: Weighted F1
      type: weighted-f1
      value: 92.6
  - task:
      type: text-classification
      name: Text Classification
    dataset:
      name: banking77
      type: banking77
      config: default
      split: test
    metrics:
    - name: Accuracy
      type: accuracy
      value: 0.9275974025974026
      verified: true
    - name: Precision Macro
      type: precision
      value: 0.9305185253845069
      verified: true
    - name: Precision Micro
      type: precision
      value: 0.9275974025974026
      verified: true
    - name: Precision Weighted
      type: precision
      value: 0.9305185253845071
      verified: true
    - name: Recall Macro
      type: recall
      value: 0.9275974025974028
      verified: true
    - name: Recall Micro
      type: recall
      value: 0.9275974025974026
      verified: true
    - name: Recall Weighted
      type: recall
      value: 0.9275974025974026
      verified: true
    - name: F1 Macro
      type: f1
      value: 0.927623314966026
      verified: true
    - name: F1 Micro
      type: f1
      value: 0.9275974025974026
      verified: true
    - name: F1 Weighted
      type: f1
      value: 0.927623314966026
      verified: true
    - name: loss
      type: loss
      value: 0.3199225962162018
      verified: true
co2_eq_emissions: 0.03330651014155927
---
# BERT-Banking77

## Model Trained Using AutoTrain

- Problem type: Multi-class Classification
- Model ID: 940131041
- CO2 Emissions (in grams): 0.03330651014155927

## Validation Metrics

- Loss: 0.3505457043647766
- Accuracy: 0.9263261296660118
- Macro F1: 0.9268371013605569
- Micro F1: 0.9263261296660118
- Weighted F1: 0.9259954221865809
- Macro Precision: 0.9305746406646502
- Micro Precision: 0.9263261296660118
- Weighted Precision: 0.929031563971418
- Macro Recall: 0.9263724620088746
- Micro Recall: 0.9263261296660118
- Weighted Recall: 0.9263261296660118
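
For reference, the averaging schemes above follow the standard scikit-learn definitions; the minimal sketch below (placeholder labels, not the actual BANKING77 evaluation data) illustrates how macro, micro, and weighted F1 differ, and why micro F1 matches accuracy for single-label classification.

```python
# Minimal sketch of the averaging schemes listed above (scikit-learn assumed).
# y_true / y_pred are placeholder labels, not the actual BANKING77 evaluation.
from sklearn.metrics import accuracy_score, f1_score

y_true = [0, 1, 2, 2, 1]
y_pred = [0, 1, 2, 1, 1]

print(accuracy_score(y_true, y_pred))                # accuracy == micro F1 for single-label tasks
print(f1_score(y_true, y_pred, average='micro'))     # computed from global TP/FP/FN counts
print(f1_score(y_true, y_pred, average='macro'))     # unweighted mean of per-class F1
print(f1_score(y_true, y_pred, average='weighted'))  # per-class F1 weighted by class support
```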

## Usage

You can use cURL to access this model:
```bash
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I am still waiting on my card?"}' https://api-inference.huggingface.co/models/philschmid/autotrain-does-it-work-940131041
```
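
The same Inference API request can also be sent from Python. A minimal sketch with the `requests` library, reusing the endpoint URL and the `YOUR_API_KEY` placeholder from the cURL command above:

```python
# Minimal sketch: same Inference API call as the cURL example above.
# YOUR_API_KEY is a placeholder for a Hugging Face access token.
import requests

API_URL = "https://api-inference.huggingface.co/models/philschmid/autotrain-does-it-work-940131041"
headers = {"Authorization": "Bearer YOUR_API_KEY"}

response = requests.post(API_URL, headers=headers, json={"inputs": "I am still waiting on my card?"})
print(response.json())
```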
Or load the model locally with the Transformers Python API:
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification, pipeline

model_id = 'philschmid/BERT-Banking77'

# Load the fine-tuned BANKING77 intent classifier and its tokenizer
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

classifier = pipeline('text-classification', tokenizer=tokenizer, model=model)
classifier('What is the base of the exchange rates?')
```
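
The pipeline returns the predicted intent label and its score for each input. Continuing from the snippet above, a short sketch of batch usage (the example queries are illustrative):

```python
# Sketch: classifying several queries at once, continuing from the pipeline above.
queries = [
    'I am still waiting on my card?',
    'What is the base of the exchange rates?',
]
for query, prediction in zip(queries, classifier(queries)):
    print(f"{query} -> {prediction['label']} ({prediction['score']:.3f})")
```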