---
tags: autotrain
language: en
widget:
- text: "I love AutoTrain 🤗"
datasets:
- justpyschitry/autotrain-data-Wikipeida_Article_Classifier_by_Chap
co2_eq_emissions: 16.816741650923202
---

# Model Trained Using AutoTrain

- Problem type: Multi-class Classification
- Model ID: 1022634735
- CO2 Emissions (in grams): 16.816741650923202

## Validation Metrics

- Loss: 0.4373569190502167
- Accuracy: 0.9027552674230146
- Macro F1: 0.8938134766263609
- Micro F1: 0.9027552674230146
- Weighted F1: 0.9023653852553881
- Macro Precision: 0.8970541297231431
- Micro Precision: 0.9027552674230146
- Weighted Precision: 0.903514305510645
- Macro Recall: 0.892665778987219
- Micro Recall: 0.9027552674230146
- Weighted Recall: 0.9027552674230146

## Usage

You can use cURL to access this model:

```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/justpyschitry/autotrain-Wikipeida_Article_Classifier_by_Chap-1022634735
```

Or Python API:

```
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model = AutoModelForSequenceClassification.from_pretrained("justpyschitry/autotrain-Wikipeida_Article_Classifier_by_Chap-1022634735", use_auth_token=True)

tokenizer = AutoTokenizer.from_pretrained("justpyschitry/autotrain-Wikipeida_Article_Classifier_by_Chap-1022634735", use_auth_token=True)

inputs = tokenizer("I love AutoTrain", return_tensors="pt")

outputs = model(**inputs)
```
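
The snippet above returns raw logits. As a minimal sketch (not part of the original card), you can turn those logits into a predicted class name with a softmax and the model's `id2label` mapping; the exact label names depend on how the classes were configured in this AutoTrain run:

```
import torch

# Continuing from the snippet above: convert logits to probabilities,
# pick the highest-scoring class, and look up its name.
with torch.no_grad():
    probs = torch.softmax(outputs.logits, dim=-1)[0]

predicted_id = int(probs.argmax())
# id2label is the standard label mapping stored in the model config;
# the actual label strings come from the training data for this model.
print(model.config.id2label[predicted_id], float(probs[predicted_id]))
```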