---
tags: autotrain
language: unk
widget:
- text: "I love AutoTrain 🤗"
datasets:
- zainalq7/autotrain-data-NLU_crypto_sentiment_analysis
co2_eq_emissions: 0.005300030853867218
---

# Model Trained Using AutoTrain

- Problem type: Multi-class Classification
- Model ID: 754123133
- CO2 Emissions (in grams): 0.005300030853867218

## Validation Metrics

- Loss: 0.387116938829422
- Accuracy: 0.8658536585365854
- Macro F1: 0.7724053724053724
- Micro F1: 0.8658536585365854
- Weighted F1: 0.8467166979362101
- Macro Precision: 0.8232219717155155
- Micro Precision: 0.8658536585365854
- Weighted Precision: 0.8516026874759421
- Macro Recall: 0.7642089093701996
- Micro Recall: 0.8658536585365854
- Weighted Recall: 0.8658536585365854

## Usage

You can use cURL to access this model:

```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/zainalq7/autotrain-NLU_crypto_sentiment_analysis-754123133
```

Or Python API:

```
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model = AutoModelForSequenceClassification.from_pretrained("zainalq7/autotrain-NLU_crypto_sentiment_analysis-754123133", use_auth_token=True)

tokenizer = AutoTokenizer.from_pretrained("zainalq7/autotrain-NLU_crypto_sentiment_analysis-754123133", use_auth_token=True)

inputs = tokenizer("I love AutoTrain", return_tensors="pt")

outputs = model(**inputs)
```
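The `outputs` object above holds raw logits. As a minimal sketch (assuming the label names are stored in the model config, as AutoTrain normally does), you can turn them into a predicted class and score like this:

```
import torch

# Convert logits to probabilities and pick the highest-scoring class.
probs = torch.softmax(outputs.logits, dim=-1)
pred_id = probs.argmax(dim=-1).item()

# id2label maps class indices to label names; if the config lacks custom
# names, it falls back to generic "LABEL_0", "LABEL_1", ... placeholders.
print(model.config.id2label[pred_id], probs[0, pred_id].item())
```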