---
tags:
- autotrain
- text-classification
language:
- en
widget:
- text: "Generative Pre-trained Transformer 3 (GPT-3) is an autoregressive language model that uses deep learning to produce human-like text. Given an initial text as prompt, it will produce text that continues the prompt.\nThe architecture is a standard transformer network (with a few engineering tweaks) with the unprecedented size of 2048-token-long context and 175 billion parameters (requiring 800 GB of storage). The training method is \"generative pretraining\", meaning that it is trained to predict what the next token is. The model demonstrated strong few-shot learning on many text-based tasks."
  example_title: "Wikipedia"
- text: "Wikipedia is a free online encyclopedia that provides information to millions of people around the world. It is an open source website, meaning that anyone can edit and contribute articles to the encyclopedia. The website is maintained by a huge network of volunteers who are dedicated to making sure that the information is accurate and up to date. Wikipedia is an invaluable resource for people looking for reliable and accurate information on any topic.\nWikipedia is one of the most popular websites in the world, and its articles are often referenced in academic papers and other sources. The website is constantly being updated, and it is estimated that more than 500 million people visit it every month. It is an excellent source for basic information and for more in-depth research. It is important to remember though, that Wikipedia is not an academic source and should be used with caution when doing more serious research."
  example_title: "GPT-3"
datasets:
- freddiezhang/autotrain-data-honor
co2_eq_emissions:
  emissions: 14.46129742532204
---

HonOR, standing for "Hyper-parameter tuned computer-generated text objectification utilizing BertForSequenceClassification", is a binary text classification model built with BertForSequenceClassification. The model was built to explore zero-shot classification of text across a wide range of domains. For more information, please see the [model card](https://huggingface.co/freddiezhang/honor/blob/main/modelcard.md).

# Model information

- Problem type: Binary Classification
- Model ID: 2514377451
- CO2 Emissions (in grams): 14.4613

## Validation metrics

- Loss: 0.055
- Accuracy: 0.989
- Precision: 0.995
- Recall: 0.983
- AUC: 0.998
- F1: 0.989

## Usage

You can use cURL to access this model:

```bash
$ curl -X POST \
    -H "Authorization: Bearer YOUR_API_KEY" \
    -H "Content-Type: application/json" \
    -d '{"inputs": "I love AutoTrain"}' \
    https://api-inference.huggingface.co/models/freddiezhang/autotrain-honor-2514377451
```

Or use the Python API:

```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# Load the fine-tuned model and its tokenizer from the Hugging Face Hub
# (use_auth_token=True is only needed if the repository is private)
model = AutoModelForSequenceClassification.from_pretrained("freddiezhang/autotrain-honor-2514377451", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("freddiezhang/autotrain-honor-2514377451", use_auth_token=True)

# Tokenize the input text and run a forward pass
inputs = tokenizer("I love AutoTrain", return_tensors="pt")
outputs = model(**inputs)
```
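The `outputs` object above holds raw logits rather than a final label. Below is a minimal sketch of how you might convert them into a predicted class and confidence score, continuing from the code above; it assumes the label names saved in the model's `id2label` config (the exact label strings depend on how the AutoTrain job named them):

```python
import torch

# Convert raw logits into class probabilities
probs = torch.softmax(outputs.logits, dim=-1)

# Pick the highest-scoring class and map it to its label name
# (label names come from model.config.id2label; this assumes the
# mapping produced by the AutoTrain job was saved with the model)
pred_id = probs.argmax(dim=-1).item()
print(model.config.id2label[pred_id], probs[0, pred_id].item())
```

Since this is a binary classifier, `id2label` should contain two entries, one per class.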