| modelId (string) | label (list) | readme (string) | readme_len (int64) |
|---|---|---|---|
responsibility-framing/predict-perception-xlmr-focus-assassin | [
"LABEL_0"
] | ---
license: mit
tags:
- generated_from_trainer
model-index:
- name: predict-perception-xlmr-focus-assassin
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# pre... | 8,116 |
responsibility-framing/predict-perception-xlmr-focus-object | [
"LABEL_0"
] | ---
license: mit
tags:
- generated_from_trainer
model-index:
- name: predict-perception-xlmr-focus-object
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# predi... | 8,013 |
responsibility-framing/predict-perception-xlmr-cause-none | [
"LABEL_0"
] | ---
license: mit
tags:
- generated_from_trainer
model-index:
- name: predict-perception-xlmr-cause-none
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# predict... | 10,685 |
Jeevesh8/goog_bert_ft_cola-40 | null | Entry not found | 15 |
valurank/distilroberta-clickbait | [
"CLICKBAIT",
"NOT_CLICKBAIT"
] | ---
license: other
tags:
- generated_from_trainer
model-index:
- name: distilroberta-clickbait
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilroberta-cl... | 1,674 |
Jeevesh8/goog_bert_ft_cola-41 | null | Entry not found | 15 |
elozano/bert-base-cased-news-category | [
"Automobile",
"Entertainment",
"Politics",
"Science",
"Sports",
"Technology",
"World"
] | Entry not found | 15 |
Jeevesh8/goog_bert_ft_cola-42 | null | Entry not found | 15 |
Jeevesh8/goog_bert_ft_cola-44 | null | Entry not found | 15 |
Jeevesh8/goog_bert_ft_cola-43 | null | Entry not found | 15 |
FourthBrain/bert_model_reddit_tsla | null | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: bert_model_reddit_tsla
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this com... | 1,182 |
oigele/Fb_improved_zeroshot | [
"contradiction",
"entailment",
"neutral"
] | ---
pipeline_tag: zero-shot-classification
datasets:
- multi_nli
widget:
- text: "natural language processing"
candidate_labels: "Location & Address, Employment, Organizational, Name, Service, Studies, Science"
hypothesis_template: "This is {}."
---
# Fb_improved_zeroshot
Zero-Shot Model designed to classify aca... | 2,638 |
Jeevesh8/goog_bert_ft_cola-45 | null | Entry not found | 15 |
textattack/distilbert-base-cased-MRPC | null | Entry not found | 15 |
ynie/roberta-large_conv_contradiction_detector_v0 | null | Entry not found | 15 |
valurank/distilroberta-current | [
"Current",
"Not_current"
] | ---
license: other
tags:
- generated_from_trainer
model-index:
- name: distilroberta-current
results: []
---
# distilroberta-current
This model classifies articles as current (covering or discussing current events) or not current (not relating to current events).
The model is a fine-tuned version of [distilroberta... | 1,747 |
IlyaGusev/rubertconv_toxic_clf | [
"neutral",
"toxic"
] | ---
language:
- ru
tags:
- text-classification
license: apache-2.0
---
# RuBERTConv Toxic Classifier
## Model description
Based on [rubert-base-cased-conversational](https://huggingface.co/DeepPavlov/rubert-base-cased-conversational) model
## Intended uses & limitations
#### How to use
Colab: [link](https://col... | 1,312 |
textattack/albert-base-v2-QQP | null | ## TextAttack Model Card
This `albert-base-v2` model was fine-tuned for sequence classification using TextAttack
and the glue dataset loaded using the `nlp` library. The model was fine-tuned
for 5 epochs with a batch size of 32, a learning
rate of 5e-05, and a maximum sequence length of 128.
Since this was a classi... | 620 |
valurank/finetuned-distilbert-news-article-categorization | [
"LABEL_0",
"LABEL_1",
"LABEL_2",
"LABEL_3",
"LABEL_4",
"LABEL_5",
"LABEL_6",
"LABEL_7"
] | ---
license: other
tags:
- generated_from_trainer
model-index:
- name: finetuned-distilbert-news-article-categorization
results: []
---
### finetuned-distilbert-news-article-categorization
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the news_arti... | 1,301 |
Jeevesh8/goog_bert_ft_cola-46 | null | Entry not found | 15 |
Jeevesh8/goog_bert_ft_cola-49 | null | Entry not found | 15 |
Jeevesh8/goog_bert_ft_cola-47 | null | Entry not found | 15 |
Jeevesh8/goog_bert_ft_cola-48 | null | Entry not found | 15 |
transformersbook/distilbert-base-uncased-distilled-clinc | [
"accept_reservations",
"account_blocked",
"alarm",
"application_status",
"apr",
"are_you_a_bot",
"balance",
"bill_balance",
"bill_due",
"book_flight",
"book_hotel",
"calculator",
"calendar",
"calendar_update",
"calories",
"cancel",
"cancel_reservation",
"car_rental",
"card_declin... | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- clinc_oos
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased-distilled-clinc
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: clinc_oos
type: clinc_oos
args: plus
... | 2,585 |
Jeevesh8/goog_bert_ft_cola-51 | null | Entry not found | 15 |
Jeevesh8/goog_bert_ft_cola-50 | null | Entry not found | 15 |
Jeevesh8/goog_bert_ft_cola-52 | null | Entry not found | 15 |
textattack/distilbert-base-uncased-imdb | null | ## TextAttack Model Card
This `distilbert-base-uncased` model was fine-tuned for sequence classification using TextAttack
and the imdb dataset loaded using the `nlp` library. The model was fine-tuned
for 5 epochs with a batch size of 16, a learning
rate of 2e-05, and a maximum sequence length of 128.
Since this was... | 615 |
Jeevesh8/goog_bert_ft_cola-53 | null | Entry not found | 15 |
Jeevesh8/goog_bert_ft_cola-55 | null | Entry not found | 15 |
coderpotter/adversarial-paraphrasing-detector | null | This model is a paraphrase detector trained on the Adversarial Paraphrasing datasets described and used in this paper: https://aclanthology.org/2021.acl-long.552/.
Github repository: https://github.com/Advancing-Machine-Human-Reasoning-Lab/apt.git
Please cite the following if you use this model:
```bib
@inproceedings{... | 2,231 |
fhamborg/roberta-targeted-sentiment-classification-newsarticles | null | ---
language:
- en
tags:
- text-classification
- sentiment-analysis
- sentiment-classification
- targeted-sentiment-classification
- target-dependent-sentiment-classification
license: "apache-2.0"
datasets: "fhamborg/news_sentiment_newsmtsc"
---
# NewsSentiment: easy-to-use, high-quality target-dependent senti... | 2,893 |
Jeevesh8/goog_bert_ft_cola-54 | null | Entry not found | 15 |
CAMeL-Lab/bert-base-arabic-camelbert-mix-did-madar-corpus26 | [
"ALE",
"ALG",
"ALX",
"AMM",
"ASW",
"BAG",
"BAS",
"BEI",
"BEN",
"CAI",
"DAM",
"DOH",
"FES",
"JED",
"JER",
"KHA",
"MOS",
"MSA",
"MUS",
"RAB",
"RIY",
"SAL",
"SAN",
"SFX",
"TRI",
"TUN"
] | ---
language:
- ar
license: apache-2.0
widget:
- text: "عامل ايه ؟"
---
# CAMeLBERT-Mix DID Madar Corpus26 Model
## Model description
**CAMeLBERT-Mix DID Madar Corpus26 Model** is a dialect identification (DID) model that was built by fine-tuning the [CAMeLBERT-Mix](https://huggingface.co/CAMeL-Lab/bert-base-arabic-c... | 2,953 |
textattack/bert-base-uncased-QNLI | null | Entry not found | 15 |
Jeevesh8/goog_bert_ft_cola-56 | null | Entry not found | 15 |
textattack/distilbert-base-uncased-MNLI | [
"LABEL_0",
"LABEL_1",
"LABEL_2"
] | Entry not found | 15 |
Jeevesh8/goog_bert_ft_cola-57 | null | Entry not found | 15 |
Wi/arxiv-topics-distilbert-base-cased | [
"Astrophysics",
"Computer Science",
"Condensed Matter",
"Economics",
"Electrical Engineering and Systems Science",
"General Relativity and Quantum Cosmology",
"High Energy Physics - Experiment",
"High Energy Physics - Lattice",
"High Energy Physics - Phenomenology",
"High Energy Physics - Theory",... | ---
language: en
license: apache-2.0
tags:
- arxiv
- topic-classification
- distilbert
widget:
- text: "Title: The Design of Radio Telescope Array Configurations using Multiobjective\n\
\ Optimization: Imaging Performance versus Cable Length\nAbstract: The next generation\
\ of radio telescope interferometric ... | 4,702 |
Tejas3/distillbert_base_uncased_80_equal | [
"NEGATIVE",
"NEUTRAL",
"POSITIVE"
] | Entry not found | 15 |
Jeevesh8/goog_bert_ft_cola-58 | null | Entry not found | 15 |
Jeevesh8/goog_bert_ft_cola-59 | null | Entry not found | 15 |
Huffon/klue-roberta-base-nli | [
"ENTAILMENT",
"NEUTRAL",
"CONTRADICTION"
] | ---
language: ko
tags:
- roberta
- nli
datasets:
- klue
---
| 60 |
Jeevesh8/goog_bert_ft_cola-60 | null | Entry not found | 15 |
Jeevesh8/goog_bert_ft_cola-61 | null | Entry not found | 15 |
google/tapas-base-finetuned-tabfact | null | ---
language: en
tags:
- tapas
- sequence-classification
license: apache-2.0
datasets:
- tab_fact
---
# TAPAS base model fine-tuned on Tabular Fact Checking (TabFact)
This model has 2 versions which can be used. The latest version, which is the default one, corresponds to the `tapas_tabfact_inter_masklm_base_reset`... | 4,867 |
Intel/roberta-base-mrpc | [
"equivalent",
"not_equivalent"
] | ---
language:
- en
license: mit
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
- f1
model-index:
- name: roberta-base-mrpc
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: GLUE MRPC
type: glue
args: mrpc
metrics:
- nam... | 1,493 |
Guscode/DKbert-hatespeech-detection | null | ---
language:
- da
tags:
- Hatespeech
- Danish
- BERT
license: mit
datasets:
- DKHate - OffensEval2020
Classes:
- Hateful
- Not Hateful
---
# DKbert-hatespeech-classification
Use this model to detect hatespeech in Danish. For details, guide and command line tool see [DK hate github](https://github.com/Guscode/DKbe... | 1,109 |
ganeshkharad/gk-hinglish-sentiment | [
"LABEL_0",
"LABEL_1",
"LABEL_2"
] | ---
language:
- hi-en
tags:
- sentiment
- multilingual
- hindi codemix
- hinglish
license: apache-2.0
datasets:
- sail
---
# Sentiment Classification for hinglish text: `gk-hinglish-sentiment`
## Model description
Trained small amount of reviews dataset
## Intended uses & limitations
I wanted something to work w... | 1,960 |
Jeevesh8/goog_bert_ft_cola-62 | null | Entry not found | 15 |
Jeevesh8/goog_bert_ft_cola-63 | null | Entry not found | 15 |
howey/electra-base-sst2 | null | Entry not found | 15 |
Jeevesh8/goog_bert_ft_cola-64 | null | Entry not found | 15 |
Jeevesh8/goog_bert_ft_cola-65 | null | Entry not found | 15 |
Qiaozhen/fake-news-detector | [
"fake",
"real"
] | Entry not found | 15 |
Jeevesh8/goog_bert_ft_cola-66 | null | Entry not found | 15 |
Jeevesh8/goog_bert_ft_cola-67 | null | Entry not found | 15 |
Newtral/xlm-r-finetuned-toxic-political-tweets-es | null | ---
language: es
license: apache-2.0
---
# xlm-r-finetuned-toxic-political-tweets-es
This model is based on the pre-trained model [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) and was fine-tuned on a dataset of tweets from members of the [Spanish Congress of the Deputies](https://www.congreso.es/) annot... | 1,970 |
nateraw/bert-base-uncased-imdb | [
"NEGATIVE",
"POSITIVE"
] | Entry not found | 15 |
Jeevesh8/goog_bert_ft_cola-68 | null | Entry not found | 15 |
pollner/dnabertregressor | [
"LABEL_0"
] | ---
tags:
- generated_from_trainer
model-index:
- name: dnabertregressor
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# dnabertregressor
This model was train... | 1,623 |
textattack/distilbert-base-uncased-rotten-tomatoes | null | ## TextAttack Model Card
This `distilbert-base-uncased` model was fine-tuned for sequence classification using TextAttack
and the rotten_tomatoes dataset loaded using the `nlp` library. The model was fine-tuned
for 3 epochs with a batch size of 128, a learning
rate of 1e-05, and a maximum sequence... | 680 |
allenai/multicite-multilabel-roberta-large | [
"LABEL_0",
"LABEL_1",
"LABEL_2",
"LABEL_3",
"LABEL_4",
"LABEL_5",
"LABEL_6"
] | ---
language: en
tags:
- Roberta
license: mit
---
# MultiCite: Multi-label Citation Intent Classification with Roberta-large (NAACL 2022)
This model has been trained on the data available here: https://github.com/allenai/multicite. | 235 |
Jeevesh8/goog_bert_ft_cola-69 | null | Entry not found | 15 |
Jeevesh8/goog_bert_ft_cola-70 | null | Entry not found | 15 |
matthewburke/korean_sentiment | null | ```python
from transformers import pipeline
classifier = pipeline("text-classification", model="matthewburke/korean_sentiment")
custom_tweet = "영화 재밌다."
preds = classifier(custom_tweet, return_all_scores=True)
is_positive = preds[0][1]['score'] > 0.5
```
| 249 |
Jeevesh8/goog_bert_ft_cola-71 | null | Entry not found | 15 |
Jeevesh8/goog_bert_ft_cola-72 | null | Entry not found | 15 |
Jeevesh8/goog_bert_ft_cola-73 | null | Entry not found | 15 |
Jeevesh8/goog_bert_ft_cola-74 | null | Entry not found | 15 |
emrecan/bert-base-turkish-cased-allnli_tr | [
"contradiction",
"entailment",
"neutral"
] | ---
language:
- tr
tags:
- zero-shot-classification
- nli
- pytorch
pipeline_tag: zero-shot-classification
license: mit
datasets:
- nli_tr
metrics:
- accuracy
widget:
- text: "Dolar yükselmeye devam ediyor."
candidate_labels: "ekonomi, siyaset, spor"
- text: "Senaryo çok saçmaydı, beğendim diyemem."
candidate_label... | 7,064 |
moussaKam/frugalscore_medium_bert-base_mover-score | [
"LABEL_0"
] | # FrugalScore
FrugalScore is an approach to learn a fixed, low cost version of any expensive NLG metric, while retaining most of its original performance
Paper: https://arxiv.org/abs/2110.08559?context=cs
Project github: https://github.com/moussaKam/FrugalScore
The pretrained checkpoints presented in the paper :
| ... | 2,592 |
clapika2010/adult_finetuned | null | Entry not found | 15 |
textattack/roberta-base-imdb | null | ## TextAttack Model Card
This `roberta-base` model was fine-tuned for sequence classification using TextAttack
and the imdb dataset loaded using the `nlp` library. The model was fine-tuned
for 5 epochs with a batch size of 64, a learning
rate of 3e-05, and a maximum sequence length of 128.
Since this was a classifi... | 607 |
sismetanin/sbert-ru-sentiment-rusentiment | [
"LABEL_0",
"LABEL_1",
"LABEL_2",
"LABEL_3",
"LABEL_4"
] | ---
language:
- ru
tags:
- sentiment analysis
- Russian
---
## SBERT-Large-Base-ru-sentiment-RuSentiment
SBERT-Large-ru-sentiment-RuSentiment is a [SBERT-Large](https://huggingface.co/sberbank-ai/sbert_large_nlu_ru) model fine-tuned on [RuSentiment dataset](https://github.com/text-machine-lab/rusentiment) of general... | 6,350 |
Jeevesh8/goog_bert_ft_cola-75 | null | Entry not found | 15 |
HooshvareLab/bert-fa-base-uncased-sentiment-digikala | [
"no_idea",
"not_recommended",
"recommended"
] | ---
language: fa
license: apache-2.0
---
# ParsBERT (v2.0)
A Transformer-based Model for Persian Language Understanding
We reconstructed the vocabulary and fine-tuned the ParsBERT v1.1 on the new Persian corpora in order to provide some functionalities for using ParsBERT in other scopes!
Please follow the [ParsBERT](... | 2,674 |
howey/electra-small-mnli | [
"LABEL_0",
"LABEL_1",
"LABEL_2"
] | Entry not found | 15 |
Rostlab/prot_bert_bfd_localization | [
"Cell.membrane",
"Cytoplasm",
"Endoplasmic.reticulum",
"Extracellular",
"Golgi.apparatus",
"Lysosome/Vacuole",
"Mitochondrion",
"Nucleus",
"Peroxisome",
"Plastid"
] | Entry not found | 15 |
Jeevesh8/goog_bert_ft_cola-76 | null | Entry not found | 15 |
edumunozsala/beto_sentiment_analysis_es | [
"Negativo",
"Positivo"
] | ---
language: es
tags:
- sagemaker
- beto
- TextClassification
- SentimentAnalysis
license: apache-2.0
datasets:
- IMDbreviews_es
metrics:
- accuracy
model-index:
- name: beto_sentiment_analysis_es
results:
- task:
name: Sentiment Analysis
type: sentiment-analysis
dataset:
name: "IMDb Re... | 3,246 |
Jeevesh8/goog_bert_ft_cola-77 | null | Entry not found | 15 |
VictorSanh/roberta-base-finetuned-yelp-polarity | null | ---
language: en
datasets:
- yelp_polarity
---
# RoBERTa-base-finetuned-yelp-polarity
This is a [RoBERTa-base](https://huggingface.co/roberta-base) checkpoint fine-tuned on binary sentiment classification from [Yelp polarity](https://huggingface.co/nlp/viewer/?dataset=yelp_polarity).
It gets **98.08%** accuracy on the... | 744 |
Jeevesh8/goog_bert_ft_cola-78 | null | Entry not found | 15 |
ivanlau/language-detection-fine-tuned-on-xlm-roberta-base | [
"Arabic",
"Basque",
"Breton",
"Catalan",
"Chinese_China",
"Chinese_Hongkong",
"Chinese_Taiwan",
"Chuvash",
"Czech",
"Dhivehi",
"Dutch",
"English",
"Esperanto",
"Estonian",
"French",
"Frisian",
"Georgian",
"German",
"Greek",
"Hakha_Chin",
"Indonesian",
"Interlingua",
"Ital... | ---
license: mit
tags:
- generated_from_trainer
datasets:
- common_language
metrics:
- accuracy
model-index:
- name: language-detection-fine-tuned-on-xlm-roberta-base
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: common_language
type: common_language... | 1,748 |
Jeevesh8/goog_bert_ft_cola-79 | null | Entry not found | 15 |
MoritzLaurer/DeBERTa-v3-base-mnli-fever-docnli-ling-2c | [
"entailment",
"not_entailment"
] | ---
language:
- en
tags:
- text-classification
- zero-shot-classification
metrics:
- accuracy
widget:
- text: "I first thought that I liked the movie, but upon second thought it was actually disappointing. [SEP] The movie was good."
---
# DeBERTa-v3-base-mnli-fever-docnli-ling-2c
## Model description
This model was t... | 4,603 |
Jeevesh8/goog_bert_ft_cola-80 | null | Entry not found | 15 |
sampathkethineedi/industry-classification-api | [
"Advertising",
"Aerospace & Defense",
"Apparel Retail",
"Apparel, Accessories & Luxury Goods",
"Application Software",
"Asset Management & Custody Banks",
"Auto Parts & Equipment",
"Biotechnology",
"Building Products",
"Casinos & Gaming",
"Commodity Chemicals",
"Communications Equipment",
"C... | ---
language: "en"
thumbnail: "https://huggingface.co/sampathkethineedi"
widget:
- text: "3rd Rock Multimedia Limited is an India-based event management company. The Company conducts film promotions, international events, corporate events and cultural events. The Company's entertainment properties include 3rd Rock Fas... | 3,294 |
Jeevesh8/goog_bert_ft_cola-81 | null | Entry not found | 15 |
dhpollack/distilbert-dummy-sentiment | [
"negative",
"positive"
] | ---
language:
- "multilingual"
- "en"
tags:
- "sentiment-analysis"
- "testing"
- "unit tests"
---
# DistilBert Dummy Sentiment Model
## Purpose
This is a dummy model that can be used for testing the transformers `pipeline` with the task `sentiment-analysis`. It should always give random results (i.e. `{"label": "neg... | 1,155 |
saattrupdan/verdict-classifier | [
"factual",
"misinformation",
"other"
] | ---
license: mit
language:
- am
- ar
- hy
- eu
- bn
- bs
- bg
- my
- hr
- ca
- cs
- da
- nl
- en
- et
- fi
- fr
- ka
- de
- el
- gu
- ht
- iw
- hi
- hu
- is
- in
- it
- ja
- kn
- km
- ko
- lo
- lv
- lt
- ml
- mr
- ne
- no
- or
- pa
- ps
- fa
- pl
- pt
- ro
- ru
- sr
- zh
- sd
- si
- sk
- sl
- es
- sv
- tl
- ta
- te
- t... | 8,809 |
Jeevesh8/goog_bert_ft_cola-82 | null | Entry not found | 15 |
clampert/multilingual-sentiment-covid19 | null | ---
pipeline_tag: text-classification
language: multilingual
license: apache-2.0
tags:
- "sentiment-analysis"
- "multilingual"
widget:
- text: "I am very happy."
example_title: "English"
- text: "Heute bin ich schlecht drauf."
example_title: "Deutsch"
- text: "Quel cauchemard!"
example_title: "Francais"
- text: "... | 1,600 |
Jeevesh8/goog_bert_ft_cola-83 | null | Entry not found | 15 |
monologg/koelectra-small-finetuned-nsmc | [
"negative",
"positive"
] | Entry not found | 15 |
roberta-base-openai-detector | null | ---
language: en
license: mit
tags:
- exbert
datasets:
- bookcorpus
- wikipedia
---
# RoBERTa Base OpenAI Detector
## Table of Contents
- [Model Details](#model-details)
- [Uses](#uses)
- [Risks, Limitations and Biases](#risks-limitations-and-biases)
- [Training](#training)
- [Evaluation](#evaluation)
- [Environment... | 9,170 |
Jeevesh8/goog_bert_ft_cola-84 | null | Entry not found | 15 |
ElKulako/cryptobert | [
"Bearish",
"Bullish",
"Neutral"
] | ---
datasets:
- ElKulako/stocktwits-crypto
language:
- en
tags:
- cryptocurrency
- crypto
- BERT
- sentiment classification
- NLP
- bitcoin
- ethereum
- shib
- social media
- sentiment analysis
- cryptocurrency sentiment analysis
---
# CryptoBERT
CryptoBERT is a pre-trained NLP model to analyse the language and sent... | 3,598 |
Jeevesh8/goog_bert_ft_cola-85 | null | Entry not found | 15 |
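Each row above follows the same four-column layout: modelId, label list (or `null`), a README excerpt, and the README length in characters. A minimal sketch of turning one such row into a record (the `parse_row` helper is illustrative, not part of any dataset tooling, and assumes a simplified single-line row; real rows in this dump span multiple lines):

```python
import json

def parse_row(row: str) -> dict:
    """Split a 'modelId | label | readme | readme_len' row into fields.

    Assumes exactly four pipe-separated fields on one line; multi-line
    README cells in the actual dump would need to be joined first.
    """
    model_id, label, readme, readme_len = (part.strip() for part in row.split("|"))
    return {
        "modelId": model_id,
        # A literal "null" marks a model with no label mapping in the dump.
        "label": None if label == "null" else json.loads(label),
        "readme": readme,
        # Lengths are rendered with thousands separators (e.g. "8,116").
        "readme_len": int(readme_len.replace(",", "")),
    }

record = parse_row('Qiaozhen/fake-news-detector | ["fake", "real"] | Entry not found | 15')
print(record["label"])  # → ['fake', 'real']
```

Note that READMEs shown as "Entry not found" with length 15 are placeholder responses, not real model cards, so filtering on `readme_len > 15` is a simple way to keep only rows with actual card content.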