---
tags:
- autotrain
- text-classification
language:
- en
widget:
- text: I love AutoTrain
datasets:
- NicholasSynovic/autotrain-data-luc-comp429-victorian-authorship-classification
co2_eq_emissions:
  emissions: 4.1359796275464005
license: agpl-3.0
metrics:
- accuracy
- f1
- recall
- bertscore
pipeline_tag: text-classification
---

# Model Trained Using AutoTrain

- Problem type: Multi-class Classification
- Model ID: 52472123757
- CO2 Emissions (in grams): 4.1360

This model reuses and extends a BERT model trained on [NicholasSynovic/Free-AutoTrain-VEAA](https://huggingface.co/datasets/NicholasSynovic/Free-AutoTrain-VEAA).

## Validation Metrics

- Loss: 1.425
- Accuracy: 0.636
- Macro F1: 0.504
- Micro F1: 0.636
- Weighted F1: 0.624
- Macro Precision: 0.523
- Micro Precision: 0.636
- Weighted Precision: 0.630
- Macro Recall: 0.508
- Micro Recall: 0.636
- Weighted Recall: 0.636

## Usage

You can use cURL to access this model:

```bash
$ curl -X POST \
    -H "Authorization: Bearer YOUR_API_KEY" \
    -H "Content-Type: application/json" \
    -d '{"inputs": "I love AutoTrain"}' \
    https://api-inference.huggingface.co/models/NicholasSynovic/autotrain-luc-comp429-victorian-authorship-classification-52472123757
```

Or use the Python API:

```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# Load the fine-tuned classification model and its tokenizer
model = AutoModelForSequenceClassification.from_pretrained(
    "NicholasSynovic/AutoTrain-LUC-COMP429-VEAA-Classification", use_auth_token=True
)
tokenizer = AutoTokenizer.from_pretrained(
    "NicholasSynovic/autotrain-luc-comp429-victorian-authorship-classification-52472123757",
    use_auth_token=True,
)

# Tokenize the input text and run a forward pass to get raw logits
inputs = tokenizer("I love AutoTrain", return_tensors="pt")
outputs = model(**inputs)
```
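The `outputs` object above holds raw logits, one score per author class. A minimal post-processing sketch is shown below; it assumes the model's `config.id2label` mapping was populated during fine-tuning, which is the usual case for AutoTrain classification models.

```python
import torch

# Convert logits to class probabilities
probabilities = torch.softmax(outputs.logits, dim=-1)

# Pick the highest-scoring class and map it back to an author label
predicted_id = int(probabilities.argmax(dim=-1))
predicted_label = model.config.id2label[predicted_id]

print(f"Predicted author: {predicted_label} ({probabilities[0, predicted_id]:.3f})")
```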