Columns: modelId (string, length 6-107) · label (list) · readme (string, length 0-56.2k) · readme_len (int64, 0-56.2k)
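A minimal sketch of how one row of this dump could be modeled and checked for internal consistency (the field names mirror the column schema above; the `ModelRow` class and its `is_consistent` helper are illustrative, not part of any library):

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class ModelRow:
    model_id: str               # 6-107 characters per the schema
    label: Optional[List[str]]  # None when the card declares no labels (shown as "null")
    readme: str                 # raw model-card text, possibly empty
    readme_len: int             # stored character count of `readme`

    def is_consistent(self) -> bool:
        # The stored length should match the actual readme text.
        return self.readme_len == len(self.readme)

# A row taken verbatim from the dump: a model whose card was never filled in.
row = ModelRow(
    model_id="Jeevesh8/std_0pnt2_bert_ft_cola-53",
    label=None,
    readme="Entry not found",
    readme_len=15,
)
print(row.is_consistent())
```

Rows whose readme was truncated for display (the excerpts ending in "...") would fail this check against the displayed text but pass against the full stored card.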
rohanrajpal/bert-base-codemixed-uncased-sentiment
[ "LABEL_0", "LABEL_1", "LABEL_2" ]
--- language: - hi - en tags: - hi - en - codemix datasets: - SAIL 2017 --- # Model name ## Model description I took a bert-base-multilingual-cased model from Hugging Face and fine-tuned it on the SAIL 2017 dataset. ## Intended uses & limitations #### How to use ```python # You can include sample code which will be f...
1,326
Jeevesh8/std_0pnt2_bert_ft_cola-53
null
Entry not found
15
mujeensung/roberta-base_mnli_bc
[ "contradiction", "entailment", "neutral" ]
--- language: - en license: mit tags: - generated_from_trainer datasets: - glue metrics: - accuracy model-index: - name: roberta-base_mnli_bc results: - task: name: Text Classification type: text-classification dataset: name: GLUE MNLI type: glue args: mnli metrics: - name:...
1,749
Jeevesh8/std_0pnt2_bert_ft_cola-54
null
Entry not found
15
Jeevesh8/std_0pnt2_bert_ft_cola-55
null
Entry not found
15
razent/SciFive-base-Pubmed_PMC
null
--- language: - en tags: - token-classification - text-classification - question-answering - text2text-generation - text-generation datasets: - pubmed - pmc/open_access --- # SciFive Pubmed+PMC Base ## Introduction Paper: [SciFive: a text-to-text transformer model for biomedical literature](https://arxiv.org/abs...
1,404
has-abi/bert-finetuned-resumes-sections
[ "awards", "certificates", "contact/name/title", "education", "interests", "languages", "para", "professional_experiences", "projects", "skills", "soft_skills", "summary" ]
--- license: mit tags: - generated_from_trainer metrics: - f1 - accuracy model-index: - name: bert-finetuned-resumes-sections results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this c...
2,374
jy46604790/Fake-News-Bert-Detect
null
--- license: apache-2.0 --- # Fake News Recognition ## Overview This model was trained on over 40,000 news articles from different media outlets, based on 'roberta-base'. It returns a result for news text of fewer than 500 words (the excess is truncated automatically). LABEL_0: Fake news LABEL_1: Real...
2,136
Jeevesh8/std_0pnt2_bert_ft_cola-56
null
Entry not found
15
Jeevesh8/std_0pnt2_bert_ft_cola-57
null
Entry not found
15
sismetanin/rubert-toxic-pikabu-2ch
null
--- language: - ru tags: - toxic comments classification --- ## RuBERT-Toxic RuBERT-Toxic is a [RuBERT](https://huggingface.co/DeepPavlov/rubert-base-cased) model fine-tuned on [Kaggle Russian Language Toxic Comments Dataset](https://www.kaggle.com/blackmoon/russian-language-toxic-comments). You can find a detailed ...
2,179
NDugar/ZSD-microsoft-v2xxlmnli
[ "CONTRADICTION", "NEUTRAL", "ENTAILMENT" ]
--- language: en tags: - deberta-v1 - deberta-mnli tasks: mnli thumbnail: https://huggingface.co/front/thumbnails/microsoft.png license: mit pipeline_tag: zero-shot-classification --- ## DeBERTa: Decoding-enhanced BERT with Disentangled Attention [DeBERTa](https://arxiv.org/abs/2006.03654) improves the BERT and RoBERT...
3,876
Raychanan/bert-base-chinese-FineTuned-Binary-Best
null
Entry not found
15
Jeevesh8/std_0pnt2_bert_ft_cola-58
null
Entry not found
15
Jeevesh8/std_0pnt2_bert_ft_cola-59
null
Entry not found
15
DaNLP/da-bert-tone-subjective-objective
[ "objective", "subjective" ]
--- language: - da tags: - bert - pytorch - subjectivity - objectivity license: cc-by-sa-4.0 datasets: - Twitter Sentiment - Europarl Sentiment widget: - text: Jeg tror alligvel, det bliver godt metrics: - f1 --- # Danish BERT Tone for the detection of subjectivity/objectivity The BERT Tone model detects whether a te...
1,224
barissayil/bert-sentiment-analysis-sst
null
Entry not found
15
cardiffnlp/twitter-roberta-base-emoji
[ "LABEL_0", "LABEL_1", "LABEL_10", "LABEL_11", "LABEL_12", "LABEL_13", "LABEL_14", "LABEL_15", "LABEL_16", "LABEL_17", "LABEL_18", "LABEL_19", "LABEL_2", "LABEL_3", "LABEL_4", "LABEL_5", "LABEL_6", "LABEL_7", "LABEL_8", "LABEL_9" ]
# Twitter-roBERTa-base for Emoji prediction This is a roBERTa-base model trained on ~58M tweets and finetuned for emoji prediction with the TweetEval benchmark. - Paper: [_TweetEval_ benchmark (Findings of EMNLP 2020)](https://arxiv.org/pdf/2010.12421.pdf). - Git Repo: [Tweeteval official repository](https://github....
2,625
Jeevesh8/std_0pnt2_bert_ft_cola-60
null
Entry not found
15
DaNLP/da-bert-hatespeech-detection
[ "not offensive", "offensive" ]
--- language: - da tags: - bert - pytorch - hatespeech license: cc-by-sa-4.0 datasets: - social media metrics: - f1 widget: - text: "Senile gamle idiot" --- # Danish BERT for hate speech (offensive language) detection The BERT HateSpeech model detects whether a Danish text is offensive or not. It is based on the pre...
1,049
Jeevesh8/std_0pnt2_bert_ft_cola-61
null
Entry not found
15
Jeevesh8/std_0pnt2_bert_ft_cola-63
null
Entry not found
15
Jeevesh8/std_0pnt2_bert_ft_cola-64
null
Entry not found
15
Jeevesh8/std_0pnt2_bert_ft_cola-62
null
Entry not found
15
abhishek/autonlp-japanese-sentiment-59363
[ "negative", "positive" ]
--- tags: autonlp language: ja widget: - text: "🤗AutoNLPが大好きです" datasets: - abhishek/autonlp-data-japanese-sentiment --- # Model Trained Using AutoNLP - Problem type: Binary Classification - Model ID: 59363 ## Validation Metrics - Loss: 0.12651239335536957 - Accuracy: 0.9532079853817648 - Precision: 0.972968827882...
1,094
ShreyaR/finetuned-roberta-depression
null
--- license: mit tags: - generated_from_trainer widget: - text: "I feel so low and numb, don't feel like doing anything. Just passing my days" - text: "Sleep is my greatest and most comforting escape whenever I wake up these days. The literal very first emotion I feel is just misery and reminding myself of all my probl...
1,961
Jeevesh8/std_0pnt2_bert_ft_cola-66
null
Entry not found
15
gchhablani/bert-base-cased-finetuned-mrpc
[ "equivalent", "not_equivalent" ]
--- language: - en license: apache-2.0 tags: - generated_from_trainer - fnet-bert-base-comparison datasets: - glue metrics: - accuracy - f1 model-index: - name: bert-base-cased-finetuned-mrpc results: - task: name: Text Classification type: text-classification dataset: name: GLUE MRPC ty...
3,053
cross-encoder/ms-marco-TinyBERT-L-4
[ "LABEL_0" ]
--- license: apache-2.0 --- # Cross-Encoder for MS Marco This model was trained on the [MS Marco Passage Ranking](https://github.com/microsoft/MSMARCO-Passage-Ranking) task. The model can be used for Information Retrieval: given a query, encode the query with all possible passages (e.g. retrieved with ElasticSearch)....
3,233
lordtt13/emo-mobilebert
[ "angry", "happy", "others", "sad" ]
--- language: en datasets: - emo --- ## Emo-MobileBERT: a thin version of BERT LARGE, trained on the EmoContext Dataset from scratch ### Details of MobileBERT The **MobileBERT** model was presented in [MobileBERT: a Compact Task-Agnostic BERT for Resource-Limited Devices](https://arxiv.org/abs/2004.02984) by *Zhiqin...
2,933
textattack/roberta-base-RTE
null
## TextAttack Model Card This `roberta-base` model was fine-tuned for sequence classification using TextAttack and the glue dataset loaded using the `nlp` library. The model was fine-tuned for 5 epochs with a batch size of 16, a learning rate of 2e-05, and a maximum sequence length of 128. Since this was a classifi...
618
daveni/twitter-xlm-roberta-emotion-es
[ "anger", "disgust", "fear", "joy", "others", "sadness", "surprise" ]
--- language: - es tags: - Emotion Analysis --- **Note**: This model & model card are based on the [finetuned XLM-T for Sentiment Analysis](https://huggingface.co/cardiffnlp/twitter-xlm-roberta-base-sentiment) # twitter-XLM-roBERTa-base for Emotion Analysis This is a XLM-roBERTa-base model trained on ~198M tweets ...
3,814
manishiitg/distilbert-resume-parts-classify
[ "LABEL_0", "LABEL_1", "LABEL_10", "LABEL_11", "LABEL_2", "LABEL_3", "LABEL_4", "LABEL_5", "LABEL_6", "LABEL_7", "LABEL_8", "LABEL_9" ]
Entry not found
15
Jeevesh8/std_0pnt2_bert_ft_cola-65
null
Entry not found
15
cross-encoder/quora-roberta-base
[ "LABEL_0" ]
--- license: apache-2.0 --- # Cross-Encoder for Quora Duplicate Questions Detection This model was trained using [SentenceTransformers](https://sbert.net) [Cross-Encoder](https://www.sbert.net/examples/applications/cross-encoder/README.html) class. ## Training Data This model was trained on the [Quora Duplicate Questi...
1,070
Jeevesh8/std_0pnt2_bert_ft_cola-68
null
Entry not found
15
Jeevesh8/std_0pnt2_bert_ft_cola-67
null
Entry not found
15
symanto/xlm-roberta-base-snli-mnli-anli-xnli
[ "ENTAILMENT", "NEUTRAL", "CONTRADICTION" ]
--- language: - ar - bg - de - el - en - es - fr - ru - th - tr - ur - vn - zh datasets: - SNLI - MNLI - ANLI - XNLI tags: - zero-shot-classification --- A cross-attention NLI model trained for zero-shot and few-shot text classification. The base model is [xlm-roberta-base](https://hugging...
1,690
Jeevesh8/std_0pnt2_bert_ft_cola-70
null
Entry not found
15
bhadresh-savani/albert-base-v2-emotion
[ "anger", "fear", "joy", "love", "sadness", "surprise" ]
--- language: - en thumbnail: https://avatars3.githubusercontent.com/u/32437151?s=460&u=4ec59abc8d21d5feea3dab323d23a5860e6996a4&v=4 tags: - text-classification - emotion - pytorch license: apache-2.0 datasets: - emotion metrics: - Accuracy, F1 Score --- # Albert-base-v2-emotion ## Model description: [Albert](https:/...
2,634
jason9693/SoongsilBERT-base-beep
[ "hate", "none", "offensive" ]
--- language: ko widget: - text: "응 어쩔티비~" datasets: - kor_hate --- # Finetuning ## Result ### Base Model | | Size | **NSMC**<br/>(acc) | **Naver NER**<br/>(F1) | **PAWS**<br/>(acc) | **KorNLI**<br/>(acc) | **KorSTS**<br/>(spearman) | **Question Pair**<br/>(acc) | **KorQuaD (Dev)**<br/>(EM/F1...
3,679
Jeevesh8/std_0pnt2_bert_ft_cola-69
null
Entry not found
15
nateraw/bert-base-uncased-ag-news
[ "Business", "Sci/Tech", "Sports", "World" ]
--- language: - en thumbnail: https://avatars3.githubusercontent.com/u/32437151?s=460&u=4ec59abc8d21d5feea3dab323d23a5860e6996a4&v=4 tags: - text-classification - ag_news - pytorch license: mit datasets: - ag_news metrics: - accuracy --- # bert-base-uncased-ag-news ## Model description `bert-base-uncased` finetuned ...
787
Jeevesh8/std_0pnt2_bert_ft_cola-72
null
Entry not found
15
IlyaGusev/xlm_roberta_large_headline_cause_full
[ "bad", "same", "rel", "left_right_cause", "right_left_cause", "left_right_refute", "right_left_refute" ]
--- language: - ru - en tags: - xlm-roberta-large datasets: - IlyaGusev/headline_cause license: apache-2.0 widget: - text: "Песков опроверг свой перевод на удаленку</s>Дмитрий Песков перешел на удаленку" --- # XLM-RoBERTa HeadlineCause Full ## Model description This model was trained to predict the presence of caus...
3,246
poom-sci/WangchanBERTa-finetuned-sentiment
[ "neg", "neu", "pos" ]
--- language: - th tags: - sentiment-analysis license: apache-2.0 datasets: - wongnai_reviews - wisesight_sentiment - generated_reviews_enth widget: - text: "โอโห้ ช่องนี้เปิดโลกเรามากเลยค่ะ คือตอนช่วงหาคำตอบเรานี่อึ้งไปเลย ดูจีเนียสมากๆๆ" example_title: "Positive" - text: "เริ่มจากชายเน็ตคนหนึ่งเปิดประเด็นว่าไปพบเจ้...
551
HooshvareLab/bert-fa-base-uncased-sentiment-snappfood
[ "HAPPY", "SAD" ]
--- language: fa license: apache-2.0 --- # ParsBERT (v2.0) A Transformer-based Model for Persian Language Understanding We reconstructed the vocabulary and fine-tuned the ParsBERT v1.1 on the new Persian corpora in order to provide some functionalities for using ParsBERT in other scopes! Please follow the [ParsBERT](...
2,650
cointegrated/rubert-tiny-bilingual-nli
[ "entailment", "not_entailment" ]
--- language: ru pipeline_tag: zero-shot-classification tags: - rubert - russian - nli - rte - zero-shot-classification widget: - text: "Сервис отстойный, кормили невкусно" candidate_labels: "Мне понравилось, Мне не понравилось" hypothesis_template: "{}." --- # RuBERT-tiny for NLI (natural language inference) This...
633
razent/SciFive-large-Pubmed_PMC
null
--- language: - en tags: - token-classification - text-classification - question-answering - text2text-generation - text-generation datasets: - pubmed - pmc/open_access --- # SciFive Pubmed+PMC Large ## Introduction Paper: [SciFive: a text-to-text transformer model for biomedical literature](https://arxiv.org/a...
1,408
twigs/cwi-regressor
[ "LABEL_0" ]
Entry not found
15
aychang/bert-base-cased-trec-coarse
[ "ABBR", "DESC", "ENTY", "HUM", "LOC", "NUM" ]
--- language: - en thumbnail: tags: - text-classification license: mit datasets: - trec metrics: --- # bert-base-cased trained on TREC 6-class task ## Model description A simple base BERT model trained on the "trec" dataset. ## Intended uses & limitations #### How to use ##### Transformers ```python # Load mod...
2,246
svalabs/gbert-large-zeroshot-nli
[ "contradiction", "entailment", "neutral" ]
--- language: German tags: - text-classification - pytorch - nli - de pipeline_tag: zero-shot-classification widget: - text: "Ich habe ein Problem mit meinem Iphone das so schnell wie möglich gelöst werden muss." candidate_labels: "Computer, Handy, Tablet, dringend, nicht dringend" hypothesis_template: ...
3,670
Jeevesh8/std_0pnt2_bert_ft_cola-71
null
Entry not found
15
tomh/toxigen_roberta
null
--- language: - en tags: - text-classification --- Thomas Hartvigsen, Saadia Gabriel, Hamid Palangi, Maarten Sap, Dipankar Ray, Ece Kamar. This model comes from the paper [ToxiGen: A Large-Scale Machine-Generated Dataset for Adversarial and Implicit Hate Speech Detection](https://arxiv.org/abs/2203.09509) and can...
904
lvwerra/bert-imdb
null
# BERT-IMDB ## What is it? BERT (`bert-large-cased`) trained for sentiment classification on the [IMDB dataset](https://www.kaggle.com/lakshmi25npathi/imdb-dataset-of-50k-movie-reviews). ## Training setting The model was trained on 80% of the IMDB dataset for sentiment classification for three epochs with a learning...
619
Jeevesh8/std_0pnt2_bert_ft_cola-73
null
Entry not found
15
MoritzLaurer/DeBERTa-v3-xsmall-mnli-fever-anli-ling-binary
[ "entailment", "not_entailment" ]
--- language: - en tags: - text-classification - zero-shot-classification metrics: - accuracy datasets: - multi_nli - anli - fever - lingnli pipeline_tag: zero-shot-classification --- # DeBERTa-v3-xsmall-mnli-fever-anli-ling-binary ## Model description This model was trained on 782 357 hypothesis-premise pairs from 4...
4,328
abhishek/autonlp-bbc-news-classification-37229289
[ "business", "entertainment", "politics", "sport", "tech" ]
--- tags: autonlp language: en widget: - text: "I love AutoNLP 🤗" datasets: - abhishek/autonlp-data-bbc-news-classification co2_eq_emissions: 5.448567309047846 --- # Model Trained Using AutoNLP - Problem type: Multi-class Classification - Model ID: 37229289 - CO2 Emissions (in grams): 5.448567309047846 ## Validatio...
1,425
Jeevesh8/std_0pnt2_bert_ft_cola-74
null
Entry not found
15
IDEA-CCNL/Erlangshen-MegatronBert-1.3B-Sentiment
null
--- language: - zh license: apache-2.0 tags: - bert - NLU - Sentiment inference: true widget: - text: "今天心情不好" --- # Erlangshen-MegatronBert-1.3B-Sentiment, a Chinese model, one of the models of [Fengshenbang-LM](https://github.com/IDEA-CCNL/Fengshenbang-LM). We collected 8 sentiment datasets in the Chinese domain for fin...
1,582
boychaboy/SNLI_distilbert-base-cased
[ "contradiction", "entailment", "neutral" ]
Entry not found
15
Recognai/zeroshot_selectra_medium
[ "contradiction", "neutral", "entailment" ]
--- language: es tags: - zero-shot-classification - nli - pytorch datasets: - xnli pipeline_tag: zero-shot-classification license: apache-2.0 widget: - text: "El autor se perfila, a los 50 años de su muerte, como uno de los grandes de su siglo" candidate_labels: "cultura, sociedad, economia, salud, deportes" --- # Z...
4,337
Jeevesh8/std_0pnt2_bert_ft_cola-75
null
Entry not found
15
cross-encoder/nli-deberta-base
[ "contradiction", "entailment", "neutral" ]
--- language: en pipeline_tag: zero-shot-classification tags: - deberta-base-base datasets: - multi_nli - snli metrics: - accuracy license: apache-2.0 --- # Cross-Encoder for Natural Language Inference This model was trained using [SentenceTransformers](https://sbert.net) [Cross-Encoder](https://www.sbert.net/examples...
2,564
microsoft/DialogRPT-width
null
# Demo Please try this [➤➤➤ Colab Notebook Demo (click me!)](https://colab.research.google.com/drive/1cAtfkbhqsRsT59y3imjR1APw3MHDMkuV?usp=sharing) | Context | Response | `width` score | | :------ | :------- | :------------: | | I love NLP! | Can anyone recommend a nice review paper? | 0.701 | | I love NLP! | ...
2,607
Maklygin/mBert-relation-extraction-FT
null
Entry not found
15
mmillet/distilrubert-tiny-2ndfinetune-epru
[ "LABEL_0", "LABEL_1", "LABEL_2", "LABEL_3" ]
--- tags: - generated_from_trainer metrics: - accuracy - f1 - precision - recall model-index: - name: distilrubert-tiny-2ndfinetune-epru results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then re...
2,288
ipuneetrathore/bert-base-cased-finetuned-finBERT
[ "LABEL_0", "LABEL_1", "LABEL_2" ]
## FinBERT Code for importing and using this model is available [here](https://github.com/ipuneetrathore/BERT_models)
119
Jeevesh8/std_0pnt2_bert_ft_cola-76
null
Entry not found
15
DTAI-KULeuven/robbert-v2-dutch-sentiment
[ "Negative", "Positive" ]
--- language: nl license: mit datasets: - dbrd model-index: - name: robbert-v2-dutch-sentiment results: - task: type: text-classification name: Text Classification dataset: name: dbrd type: sentiment-analysis split: test metrics: - name: Accuracy type: accuracy ...
4,294
Jeevesh8/std_0pnt2_bert_ft_cola-77
null
Entry not found
15
cardiffnlp/bertweet-base-emotion
[ "LABEL_0", "LABEL_1", "LABEL_2", "LABEL_3" ]
0
valhalla/bart-large-sst2
[ "NEGATIVE", "POSITIVE" ]
Entry not found
15
dmis-lab/biobert-large-cased-v1.1-mnli
[ "LABEL_0", "LABEL_1", "LABEL_2" ]
Entry not found
15
textattack/roberta-base-MRPC
null
## TextAttack Model Card This `roberta-base` model was fine-tuned for sequence classification using TextAttack and the glue dataset loaded using the `nlp` library. The model was fine-tuned for 5 epochs with a batch size of 16, a learning rate of 3e-05, and a maximum sequence length of 256. Since this was a classifi...
618
mrm8488/bert-mini-finetuned-age_news-classification
[ "World", "Sports", "Business", "Sci/Tech" ]
--- language: en tags: - news - classification - mini datasets: - ag_news widget: - text: "Israel withdraws from Gaza camp Israel withdraws from Khan Younis refugee camp in the Gaza Strip, after a four-day operation that left 11 dead." --- # BERT-Mini fine-tuned on age_news dataset for news classification Test set ac...
331
Jeevesh8/std_0pnt2_bert_ft_cola-78
null
Entry not found
15
Mithil/Bert
null
--- license: afl-3.0 ---
25
IMSyPP/hate_speech_en
[ "LABEL_0", "LABEL_1", "LABEL_2", "LABEL_3" ]
--- widget: - text: "My name is Mark and I live in London. I am a postgraduate student at Queen Mary University." language: - en license: mit --- # Hate Speech Classifier for Social Media Content in English Language A monolingual model for hate speech classification of social media content in English language. Th...
802
snunlp/KR-FinBert-SC
[ "negative", "neutral", "positive" ]
--- language: - ko --- # KR-FinBert & KR-FinBert-SC Much progress has been made in the NLP (Natural Language Processing) field, with numerous studies showing that domain adaptation using a small-scale corpus and fine-tuning with labeled data is effective for overall performance improvement. We proposed KR-FinBert for...
2,669
Jeevesh8/std_0pnt2_bert_ft_cola-79
null
Entry not found
15
IDEA-CCNL/Erlangshen-MegatronBert-1.3B-NLI
null
--- language: - zh license: apache-2.0 tags: - bert - NLU - NLI inference: true widget: - text: "今天心情不好[SEP]今天很开心" --- # Erlangshen-MegatronBert-1.3B-NLI, a Chinese model, one of the models of [Fengshenbang-LM](https://github.com/IDEA-CCNL/Fengshenbang-LM). We collected 4 NLI (Natural Language Inference) datasets in the Chi...
1,596
cross-encoder/nli-MiniLM2-L6-H768
[ "contradiction", "entailment", "neutral" ]
--- language: en pipeline_tag: zero-shot-classification license: apache-2.0 tags: - MiniLMv2 datasets: - multi_nli - snli metrics: - accuracy --- # Cross-Encoder for Natural Language Inference This model was trained using [SentenceTransformers](https://sbert.net) [Cross-Encoder](https://www.sbert.net/examples/applicat...
2,567
TransQuest/monotransquest-da-en_zh-wiki
[ "LABEL_0" ]
--- language: en-zh tags: - Quality Estimation - monotransquest - DA license: apache-2.0 --- # TransQuest: Translation Quality Estimation with Cross-lingual Transformers The goal of quality estimation (QE) is to evaluate the quality of a translation without having access to a reference translation. High-accuracy QE t...
5,401
eleldar/language-detection
[ "ar", "bg", "de", "el", "en", "es", "fr", "hi", "it", "ja", "nl", "pl", "pt", "ru", "sw", "th", "tr", "ur", "vi", "zh" ]
--- license: mit tags: - generated_from_trainer metrics: - accuracy - f1 model-index: - name: xlm-roberta-base-language-detection results: [] --- # Clone from [xlm-roberta-base-language-detection](https://huggingface.co/papluca/xlm-roberta-base-language-detection) This model is a fine-tuned version of [xlm-roberta-...
5,748
microsoft/DialogRPT-depth
null
# Demo Please try this [➤➤➤ Colab Notebook Demo (click me!)](https://colab.research.google.com/drive/1cAtfkbhqsRsT59y3imjR1APw3MHDMkuV?usp=sharing) | Context | Response | `depth` score | | :------ | :------- | :------------: | | I love NLP! | Can anyone recommend a nice review paper? | 0.724 | | I love NLP! | ...
2,634
dhtocks/Topic-Classification
[ "LABEL_0", "LABEL_1", "LABEL_2", "LABEL_3" ]
Entry not found
15
huggingface/distilbert-base-uncased-finetuned-mnli
[ "contradiction", "entailment", "neutral" ]
Entry not found
15
Gunulhona/tbbcmodel
[ "Agitation", "Changes in Appetite", "Changes in Sleeping Pattern", "Concentration Difficultiy", "Crying", "Gulity Feelings", "Indecisivness", "Irritability", "Loss of Energy", "Loss of Interest", "Loss of Interest in Sex", "Loss of Pleasure", "Non BDI", "Past Failure", "Pessimism", "Pu...
Entry not found
15
Elron/bleurt-base-512
[ "LABEL_0" ]
## BLEURT Pytorch version of the original BLEURT models from the ACL paper ["BLEURT: Learning Robust Metrics for Text Generation"](https://aclanthology.org/2020.acl-main.704/) by Thibault Sellam, Dipanjan Das and Ankur P. Parikh of Google Research. The code for model conversion originated from [this notebook](http...
999
Gunulhona/tbnymodel
[ "Negative", "Non Related", "Positive" ]
Entry not found
15
Gunulhona/tbecmodel
[ "ANGER", "DISGUST", "FEAR", "HAPPINESS", "NEUTRALITY", "SADNESS", "SURPRISED" ]
Entry not found
15
valurank/distilroberta-news-small
[ "bad", "good", "medium" ]
--- license: other language: en datasets: - valurank/news-small --- # DistilROBERTA fine-tuned for news classification This model is based on [distilroberta-base](https://huggingface.co/distilroberta-base) pretrained weights, with a classification head fine-tuned to classify news articles into 3 categories (bad, medi...
618
Seethal/sentiment_analysis_generic_dataset
[ "LABEL_0", "LABEL_1", "LABEL_2" ]
## BERT base model (uncased) Pretrained model on English language using a masked language modeling (MLM) objective. This model is uncased: it does not make a difference between english and English. ## Model description BERT is a transformers model pretrained on a large corpus of English data in a self-supervised fashi...
1,988
textattack/bert-base-uncased-snli
[ "LABEL_0", "LABEL_1", "LABEL_2" ]
Entry not found
15
razent/SciFive-base-Pubmed
null
--- language: - en tags: - token-classification - text-classification - question-answering - text2text-generation - text-generation datasets: - pubmed --- # SciFive Pubmed Base ## Introduction Paper: [SciFive: a text-to-text transformer model for biomedical literature](https://arxiv.org/abs/2106.03598) Authors...
1,375
textattack/albert-base-v2-MRPC
null
## TextAttack Model Card This `albert-base-v2` model was fine-tuned for sequence classification using TextAttack and the glue dataset loaded using the `nlp` library. The model was fine-tuned for 5 epochs with a batch size of 32, a learning rate of 2e-05, and a maximum sequence length of 128. Since this was a classi...
620
HooshvareLab/bert-fa-base-uncased-clf-persiannews
[ "اجتماعی", "اقتصادی", "بین الملل", "سیاسی", "علمی فناوری", "فرهنگی هنری", "ورزشی", "پزشکی" ]
--- language: fa license: apache-2.0 --- # ParsBERT (v2.0) A Transformer-based Model for Persian Language Understanding We reconstructed the vocabulary and fine-tuned the ParsBERT v1.1 on the new Persian corpora in order to provide some functionalities for using ParsBERT in other scopes! Please follow the [ParsBERT]...
2,767
cardiffnlp/bertweet-base-sentiment
[ "LABEL_0", "LABEL_1", "LABEL_2" ]
0
assemblyai/distilbert-base-uncased-sst2
null
# DistilBERT-Base-Uncased for Sentiment Analysis This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) originally released in ["DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter"](https://arxiv.org/abs/1910.01108) and trained on the [...
1,779