| modelId (string, 6–107 chars) | label (list) | readme (string, 0–56.2k chars) | readme_len (int64, 0–56.2k) |
|---|---|---|---|
cross-encoder/nli-deberta-v3-xsmall | [
"contradiction",
"entailment",
"neutral"
] | ---
language: en
pipeline_tag: zero-shot-classification
tags:
- microsoft/deberta-v3-xsmall
datasets:
- multi_nli
- snli
metrics:
- accuracy
license: apache-2.0
---
# Cross-Encoder for Natural Language Inference
This model was trained using [SentenceTransformers](https://sbert.net) [Cross-Encoder](https://www.sbert.ne... | 2,791 |
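The NLI cross-encoder above outputs one raw score per label ("contradiction", "entailment", "neutral"). As a minimal, dependency-free sketch of how such scores are typically turned into a predicted label (the helper name and logit values are invented for illustration):

```python
import math

def predict_label(logits, labels):
    """Softmax the raw logits and return the best label with its probability."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]  # subtract the max for numerical stability
    total = sum(exps)
    probs = [e / total for e in exps]
    best = max(range(len(labels)), key=lambda i: probs[i])
    return labels[best], probs[best]

nli_labels = ["contradiction", "entailment", "neutral"]
label, prob = predict_label([0.2, 3.1, -0.5], nli_labels)
print(label)  # entailment (the largest logit wins)
```

In practice the raw scores would come from the model itself, e.g. via the `sentence-transformers` `CrossEncoder`; only the post-processing is sketched here.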
StevenLimcorn/indonesian-roberta-base-emotion-classifier | [
"anger",
"fear",
"happy",
"love",
"sadness"
] | ---
language: id
tags:
- roberta
license: mit
datasets:
- indonlu
widget:
- text: "Hal-hal baik akan datang."
---
# Indo RoBERTa Emotion Classifier
Indo RoBERTa Emotion Classifier is an emotion classifier based on the [Indo-roberta](https://huggingface.co/flax-community/indonesian-roberta-base) model. It was trained on the ... | 2,326 |
alisawuffles/roberta-large-wanli | [
"contradiction",
"entailment",
"neutral"
] | ---
widget:
- text: "I almost forgot to eat lunch.</s></s>I didn't forget to eat lunch."
- text: "I almost forgot to eat lunch.</s></s>I forgot to eat lunch."
- text: "I ate lunch.</s></s>I almost forgot to eat lunch."
---
This is an off-the-shelf roberta-large model finetuned on WANLI, the Worker-AI Collaborative ... | 1,436 |
ChrisUPM/BioBERT_Re_trained | null | PyTorch trained model on GAD dataset for relation classification, using BioBert weights. | 88 |
Jeevesh8/goog_bert_ft_cola-86 | null | Entry not found | 15 |
textattack/albert-base-v2-CoLA | null | ## TextAttack Model Card
…and the glue dataset loaded using the `nlp` library. The model was fine-tuned
for 5 epochs with a batch size of 32, a learning
rate of 3e-05, and a maximum sequence length of 128.
Since this was a classification task, the model was trained with a cross-entropy loss function.
The best score t... | 530 |
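The hyperparameters this card reports (5 epochs, batch size 32, learning rate 3e-05, max sequence length 128, cross-entropy loss) can be collected into a plain config mapping. This is an illustrative sketch, not TextAttack's actual configuration format:

```python
# Hypothetical fine-tuning config mirroring the card's reported hyperparameters.
finetune_config = {
    "num_train_epochs": 5,
    "per_device_train_batch_size": 32,
    "learning_rate": 3e-05,
    "max_seq_length": 128,
    "loss": "cross_entropy",  # standard loss for classification fine-tuning
}
print(finetune_config["learning_rate"])  # 3e-05
```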
LiYuan/amazon-query-product-ranking | [
"LABEL_0",
"LABEL_1",
"LABEL_2",
"LABEL_3"
] | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased-finetuned-mnli-amazon-query-shopping
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and ... | 3,784 |
Jeevesh8/goog_bert_ft_cola-87 | null | Entry not found | 15 |
Jeevesh8/goog_bert_ft_cola-88 | null | Entry not found | 15 |
jb2k/bert-base-multilingual-cased-language-detection | [
"LABEL_0",
"LABEL_1",
"LABEL_10",
"LABEL_11",
"LABEL_12",
"LABEL_13",
"LABEL_14",
"LABEL_15",
"LABEL_16",
"LABEL_17",
"LABEL_18",
"LABEL_19",
"LABEL_2",
"LABEL_20",
"LABEL_21",
"LABEL_22",
"LABEL_23",
"LABEL_24",
"LABEL_25",
"LABEL_26",
"LABEL_27",
"LABEL_28",
"LABEL_29",... | # bert-base-multilingual-cased-language-detection
A model for language detection with support for 45 languages
## Model description
This model was created by fine-tuning
[bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) on the [common language](https://huggingface.co/datasets/common_l... | 1,050 |
oliverqq/scibert-uncased-topics | [
"Artificial intelligence",
"Computer science",
"Economics",
"Engineering",
"Mathematics",
"Medicine",
"Psychology",
"Sociology"
] | Entry not found | 15 |
Cameron/BERT-rtgender-opgender-annotations | null | Entry not found | 15 |
Jeevesh8/goog_bert_ft_cola-89 | null | Entry not found | 15 |
Intel/bert-base-uncased-mrpc | [
"equivalent",
"not_equivalent"
] | ---
language:
- en
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
- f1
model-index:
- name: bert-base-uncased-mrpc
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: GLUE MRPC
type: glue
args: mrpc
metric... | 1,312 |
Jeevesh8/goog_bert_ft_cola-90 | null | Entry not found | 15 |
ethanyt/guwen-sent | [
"Neg",
"ImpNeg",
"Nerual",
"ImpPos",
"Pos"
] | ---
language:
- "zh"
thumbnail: "https://user-images.githubusercontent.com/9592150/97142000-cad08e00-179a-11eb-88df-aff9221482d8.png"
tags:
- "chinese"
- "classical chinese"
- "literary chinese"
- "ancient chinese"
- "bert"
- "pytorch"
- "sentiment classification"
license: "apache-2.0"
pipeline_tag: "text-classificatio... | 1,326 |
Jeevesh8/goog_bert_ft_cola-91 | null | Entry not found | 15 |
Jeevesh8/goog_bert_ft_cola-92 | null | Entry not found | 15 |
mrm8488/distilroberta-finetuned-age_news-classification | [
"World",
"Sports",
"Business",
"Sci/Tech"
] | ---
language: en
tags:
- news
- classification
datasets:
- ag_news
widget:
- text: "Venezuela Prepares for Chavez Recall Vote Supporters and rivals warn of possible fraud; government says Chavez's defeat could produce turmoil in world oil market."
---
# distilroberta-base fine-tuned on age_news dataset for news classi... | 352 |
m-newhauser/distilbert-political-tweets | [
"Democrat",
"Republican"
] | ---
language:
- en
license: lgpl-3.0
library_name: transformers
tags:
- text-classification
- transformers
- pytorch
- generated_from_keras_callback
metrics:
- accuracy
- f1
datasets:
- m-newhauser/senator-tweets
widget:
- text: "This pandemic has shown us clearly the vulgarity of our healthcare system. Highest costs i... | 1,771 |
Jeevesh8/goog_bert_ft_cola-93 | null | Entry not found | 15 |
SkolkovoInstitute/xlmr_formality_classifier | [
"formal",
"informal"
] | ---
language:
- en
- fr
- it
- pt
tags:
- formal or informal classification
licenses:
- cc-by-nc-sa
---
XLMRoberta-based classifier trained on XFORMAL.
all
| | precision | recall | f1-score | support |
|--------------|-----------|----------|----------|---------|
| 0 | 0.744912 | 0.92779... | 3,099 |
Jeevesh8/goog_bert_ft_cola-94 | null | Entry not found | 15 |
yseop/distilbert-base-financial-relation-extraction | [
"are",
"has",
"is",
"is in",
"x"
] | ---
inference: true
pipeline_tag: text-classification
tags:
- feature-extraction
- text-classification
library: pytorch
---
<div style="clear: both;">
<div style="float: left; margin-right 1em;">
<h1><strong>FReE (Financial Relation Extraction)</strong></h1>
</div>
<div>
<h2><img src="https://pbs.twimg.c... | 2,655 |
Jeevesh8/goog_bert_ft_cola-95 | null | Entry not found | 15 |
Jeevesh8/goog_bert_ft_cola-96 | null | Entry not found | 15 |
ans/vaccinating-covid-tweets | [
"false",
"misleading",
"true"
] | ---
language: en
license: apache-2.0
datasets:
- tweets
widget:
- text: "Vaccines to prevent SARS-CoV-2 infection are considered the most promising approach for curbing the pandemic."
---
# Disclaimer: This page is under maintenance. Please DO NOT refer to the information on this page to make any decision yet.
# Vacc... | 4,394 |
Jeevesh8/goog_bert_ft_cola-97 | null | Entry not found | 15 |
textattack/distilbert-base-cased-QQP | null | Entry not found | 15 |
Kayvane/distilvert-complaints-subproduct | [
"",
"(CD) Certificate of deposit",
"Auto",
"Auto debt",
"CD (Certificate of Deposit)",
"Cashing a check without an account",
"Check cashing",
"Check cashing service",
"Checking account",
"Conventional adjustable mortgage (ARM)",
"Conventional fixed mortgage",
"Conventional home mortgage",
"C... | Entry not found | 15 |
moussaKam/frugalscore_medium_bert-base_bert-score | [
"LABEL_0"
] | # FrugalScore
FrugalScore is an approach to learn a fixed, low-cost version of any expensive NLG metric, while retaining most of its original performance.
Paper: https://arxiv.org/abs/2110.08559?context=cs
Project github: https://github.com/moussaKam/FrugalScore
The pretrained checkpoints presented in the paper:
| ... | 2,592 |
Jeevesh8/goog_bert_ft_cola-98 | null | Entry not found | 15 |
castorini/monobert-large-msmarco-finetune-only | null | # Model Description
This checkpoint is a direct conversion of [BERT_Large_trained_on_MSMARCO.zip](https://drive.google.com/open?id=1crlASTMlsihALlkabAQP6JTYIZwC1Wm8) from the original [repo](https://github.com/nyu-dl/dl4marco-bert/).
The corresponding model class is BertForSequenceClassification, and its purpose is for... | 455 |
Jeevesh8/goog_bert_ft_cola-99 | null | Entry not found | 15 |
CAMeL-Lab/bert-base-arabic-camelbert-msa-did-madar-twitter5 | [
"Algeria",
"Bahrain",
"Djibouti",
"Egypt",
"Iraq",
"Jordan",
"Kuwait",
"Lebanon",
"Libya",
"Mauritania",
"Morocco",
"Oman",
"Palestine",
"Qatar",
"Saudi_Arabia",
"Somalia",
"Sudan",
"Syria",
"Tunisia",
"United_Arab_Emirates",
"Yemen"
] | ---
language:
- ar
license: apache-2.0
widget:
- text: "عامل ايه ؟"
---
# CAMeLBERT-MSA DID MADAR Twitter-5 Model
## Model description
**CAMeLBERT-MSA DID MADAR Twitter-5 Model** is a dialect identification (DID) model that was built by fine-tuning the [CAMeLBERT-MSA](https://huggingface.co/CAMeL-Lab/bert-base-arabic... | 2,968 |
MutazYoune/Absa_AspectSentiment_hotels | [
"LABEL_0",
"LABEL_1",
"LABEL_2"
] | Entry not found | 15 |
cardiffnlp/twitter-roberta-base-stance-abortion | [
"LABEL_0",
"LABEL_1",
"LABEL_2"
] | 0 | |
mrm8488/deberta-v3-base-goemotions | [
"LABEL_0",
"LABEL_1",
"LABEL_10",
"LABEL_11",
"LABEL_12",
"LABEL_13",
"LABEL_14",
"LABEL_15",
"LABEL_16",
"LABEL_17",
"LABEL_18",
"LABEL_19",
"LABEL_2",
"LABEL_20",
"LABEL_21",
"LABEL_22",
"LABEL_23",
"LABEL_24",
"LABEL_25",
"LABEL_26",
"LABEL_27",
"LABEL_3",
"LABEL_4",
... | ---
license: mit
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: deberta-v3-base-goemotions
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# d... | 1,583 |
jakelever/coronabert | [
"Clinical Reports",
"Comment/Editorial",
"Communication",
"Contact Tracing",
"Diagnostics",
"Drug Targets",
"Education",
"Effect on Medical Specialties",
"Forecasting & Modelling",
"Health Policy",
"Healthcare Workers",
"Imaging",
"Immunology",
"Inequality",
"Infection Reports",
"Long ... | ---
language: en
thumbnail: https://coronacentral.ai/logo-with-name.png?1
tags:
- coronavirus
- covid
- bionlp
datasets:
- cord19
- pubmed
license: mit
widget:
- text: "Pre-existing T-cell immunity to SARS-CoV-2 in unexposed healthy controls in Ecuador, as detected with a COVID-19 Interferon-Gamma Release Assay."
- tex... | 3,313 |
TransQuest/monotransquest-da-en_any | [
"LABEL_0"
] | ---
language: en-multilingual
tags:
- Quality Estimation
- monotransquest
- DA
license: apache-2.0
---
# TransQuest: Translation Quality Estimation with Cross-lingual Transformers
The goal of quality estimation (QE) is to evaluate the quality of a translation without having access to a reference translation. High-acc... | 5,407 |
ayameRushia/bert-base-indonesian-1.5G-sentiment-analysis-smsa | [
"Positive",
"Neutral",
"Negative"
] | ---
license: mit
tags:
- generated_from_trainer
datasets:
- indonlu
metrics:
- accuracy
model-index:
- name: bert-base-indonesian-1.5G-finetuned-sentiment-analysis-smsa
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: indonlu
type: indonlu
args: s... | 2,303 |
SynamicTechnologies/CYBERT | null | ## CYBERT
BERT model dedicated to the domain of cyber security. The model has been trained on a corpus of high-quality cyber security and computer science text and is unlikely to work outside this domain.
## Model architecture
The model architecture used is the original RoBERTa, and the tokenizer used to train on the corpus is Byte ... | 388 |
CAMeL-Lab/bert-base-arabic-camelbert-mix-did-nadi | [
"Algeria",
"Bahrain",
"Djibouti",
"Egypt",
"Iraq",
"Jordan",
"Kuwait",
"Lebanon",
"Libya",
"Mauritania",
"Morocco",
"Oman",
"Palestine",
"Qatar",
"Saudi_Arabia",
"Somalia",
"Sudan",
"Syria",
"Tunisia",
"United_Arab_Emirates",
"Yemen"
] | ---
language:
- ar
license: apache-2.0
widget:
- text: "عامل ايه ؟"
---
# CAMeLBERT-Mix DID NADI Model
## Model description
**CAMeLBERT-Mix DID NADI Model** is a dialect identification (DID) model that was built by fine-tuning the [CAMeLBERT-Mix](https://huggingface.co/CAMeL-Lab/bert-base-arabic-camelbert-mix/) model... | 2,927 |
Elron/bleurt-tiny-512 | [
"LABEL_0"
] | ## BLEURT
Pytorch version of the original BLEURT models from ACL paper ["BLEURT: Learning Robust Metrics for Text Generation"](https://aclanthology.org/2020.acl-main.704/) by
Thibault Sellam, Dipanjan Das and Ankur P. Parikh of Google Research.
The code for model conversion was originated from [this notebook](http... | 1,001 |
larskjeldgaard/senda | [
"negativ",
"neutral",
"positiv"
] | ---
language: da
tags:
- danish
- bert
- sentiment
- polarity
license: cc-by-4.0
widget:
- text: "Sikke en dejlig dag det er i dag"
---
# Danish BERT fine-tuned for Sentiment Analysis (Polarity)
This model detects polarity ('positive', 'neutral', 'negative') of danish texts.
It is trained and tested on Tweets annotate... | 948 |
tae898/emoberta-base | [
"neutral",
"joy",
"surprise",
"anger",
"sadness",
"disgust",
"fear"
] | ---
language: en
tags:
- emoberta
- roberta
license: mit
datasets:
- MELD
- IEMOCAP
---
Check https://github.com/tae898/erc for the details
[Watch a demo video!](https://youtu.be/qbr7fNd6J28)
# Emotion Recognition in Conversation (ERC) ... | 998 |
howey/roberta-large-sst2 | null | Entry not found | 15 |
madhurjindal/autonlp-Gibberish-Detector-492513457 | [
"clean",
"mild gibberish",
"noise",
"word salad"
] | ---
tags: autonlp
language: en
widget:
- text: "I love AutoNLP 🤗"
datasets:
- madhurjindal/autonlp-data-Gibberish-Detector
co2_eq_emissions: 5.527544460835904
---
# Model Trained Using AutoNLP
- Problem type: Multi-class Classification
- Model ID: 492513457
- CO2 Emissions (in grams): 5.527544460835904
## Validatio... | 1,425 |
helliun/polhol | [
"LABEL_0",
"LABEL_1",
"LABEL_2"
] | Entry not found | 15 |
wukevin/tcr-bert | [
"LABEL_0",
"LABEL_1",
"LABEL_10",
"LABEL_11",
"LABEL_12",
"LABEL_13",
"LABEL_14",
"LABEL_15",
"LABEL_16",
"LABEL_17",
"LABEL_18",
"LABEL_19",
"LABEL_2",
"LABEL_20",
"LABEL_21",
"LABEL_22",
"LABEL_23",
"LABEL_24",
"LABEL_25",
"LABEL_26",
"LABEL_27",
"LABEL_28",
"LABEL_29",... | # TCR transformer model
See our full [codebase](https://github.com/wukevin/tcr-bert) and our [preprint](https://www.biorxiv.org/content/10.1101/2021.11.18.469186v1) for more information.
This model is trained on:
- Masked language modeling (masked amino acid or MAA modeling)
- Classification across antigen labels from PIRD
... | 600 |
Narrativaai/fake-news-detection-spanish | [
"REAL",
"FAKE"
] | ---
language: es
tags:
- generated_from_trainer
- fake
- news
- competition
datasets:
- fakedes
widget:
- text: 'La palabra "haiga", aceptada por la RAE [SEP] La palabra "haiga", aceptada por la RAE La Real Academia de la Lengua (RAE), ha aceptado el uso de "HAIGA", para su utilización en las tres personas del singul... | 7,116 |
TehranNLP-org/bert-base-uncased-cls-sst2 | null | Entry not found | 15 |
savasy/bert-turkish-text-classification | [
"world",
"economy",
"culture",
"health",
"politics",
"sport",
"technology"
] | ---
language: tr
---
# Turkish Text Classification
This model is fine-tuned from https://github.com/stefan-it/turkish-bert using text classification data with the following 7 categories:
```
code_to_label={
'LABEL_0': 'dunya ',
'LABEL_1': 'ekonomi ',
'LABEL_2': 'kultur ',
'LABEL_3': 'saglik ',
'L... | 2,564 |
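The card's `code_to_label` snippet above maps generic `LABEL_n` outputs back to category names. A minimal sketch of the same pattern, built from the English label list in this row (the `LABEL_n` ordering is assumed to follow that list, as the first four entries of the card's dict suggest):

```python
categories = ["world", "economy", "culture", "health", "politics", "sport", "technology"]
id2label = {f"LABEL_{i}": name for i, name in enumerate(categories)}

def decode(prediction):
    """Translate a pipeline-style prediction dict into a readable category."""
    return id2label[prediction["label"]], prediction["score"]

print(decode({"label": "LABEL_5", "score": 0.91}))  # ('sport', 0.91)
```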
GeniusVoice/tinybertje-msmarco-finetuned | [
"LABEL_0"
] | Entry not found | 15 |
JP040/bert-german-sentiment-twitter | [
"negative",
"neutral",
"positive"
] | Entry not found | 15 |
Mithil/RobertaAmazonTrained | null | ---
license: other
---
| 23 |
yangheng/deberta-v3-base-absa-v1.1 | [
"Negative",
"Neutral",
"Positive"
] |
---
language:
- en
tags:
- aspect-based-sentiment-analysis
- PyABSA
license: mit
datasets:
- laptop14
- restaurant14
- restaurant16
- ACL-Twitter
- MAMS
- Television
- TShirt
- Yelp
metrics:
- accuracy
- macro-f1
widget:
- text: "[CLS] when tables opened up, the manager sat another party befor... | 3,142 |
cointegrated/rubert-base-cased-nli-twoway | [
"entailment",
"not_entailment"
] | ---
language: ru
pipeline_tag: zero-shot-classification
tags:
- rubert
- russian
- nli
- rte
- zero-shot-classification
widget:
- text: "Я хочу поехать в Австралию"
candidate_labels: "спорт,путешествия,музыка,кино,книги,наука,политика"
hypothesis_template: "Тема текста - {}."
---
# RuBERT for NLI (natural language... | 652 |
dtomas/roberta-base-bne-irony | null | ---
language:
- es
tags:
- irony
- sarcasm
- spanish
widget:
- text: "¡Cómo disfruto peleándome con los Transformers!"
example_title: "Ironic"
- text: "Madrid es la capital de España"
example_title: "Non ironic"
---
# RoBERTa base finetuned for Spanish irony detection
## Model description
Model to pe... | 660 |
tals/albert-base-vitaminc_wnei-fever | [
"LABEL_0",
"LABEL_1",
"LABEL_2"
] | ---
language: python
datasets:
- fever
- glue
- tals/vitaminc
---
# Details
Model used in [Get Your Vitamin C! Robust Fact Verification with Contrastive Evidence](https://aclanthology.org/2021.naacl-main.52/) (Schuster et al., NAACL 21`).
For more details see: https://github.com/TalSchuster/VitaminC
When using this m... | 2,357 |
cambridgeltl/trans-encoder-cross-simcse-roberta-base | [
"LABEL_0"
] | Entry not found | 15 |
nickprock/distilbert-base-uncased-banking77-classification | [
"activate_my_card",
"age_limit",
"card_acceptance",
"card_arrival",
"card_delivery_estimate",
"card_linking",
"card_not_working",
"card_payment_fee_charged",
"card_payment_not_recognised",
"card_payment_wrong_exchange_rate",
"card_swallowed",
"cash_withdrawal_charge",
"apple_pay_or_google_pa... | ---
license: mit
tags:
- generated_from_trainer
datasets:
- banking77
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased-banking77-classification
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: banking77
type: banking77
args: defaul... | 7,157 |
sagorsarker/codeswitch-spaeng-sentiment-analysis-lince | null | ---
language:
- es
- en
datasets:
- lince
license: mit
tags:
- codeswitching
- spanish-english
- sentiment-analysis
---
# codeswitch-spaeng-sentiment-analysis-lince
This is a pretrained model for **Sentiment Analysis** of `spanish-english` code-mixed data used from [LinCE](https://ritual.uh.edu/lince/home)
This model... | 1,369 |
MoritzLaurer/DeBERTa-v3-base-mnli | [
"contradiction",
"entailment",
"neutral"
] | ---
language:
- en
tags:
- text-classification
- zero-shot-classification
metrics:
- accuracy
pipeline_tag: zero-shot-classification
---
# DeBERTa-v3-base-mnli-fever-anli
## Model description
This model was trained on the MultiNLI dataset, which consists of 392 702 NLI hypothesis-premise pairs.
The base model is [De... | 3,257 |
apple/ane-distilbert-base-uncased-finetuned-sst-2-english | [
"NEGATIVE",
"POSITIVE"
] | ---
language: en
license: apache-2.0
datasets:
- sst2
---
# DistilBERT optimized for Apple Neural Engine
This is the [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) model, optimized for the Apple Neural Engine (ANE) as described in the article ... | 2,322 |
cross-encoder/nli-deberta-v3-small | [
"contradiction",
"entailment",
"neutral"
] | ---
language: en
pipeline_tag: zero-shot-classification
tags:
- microsoft/deberta-v3-small
datasets:
- multi_nli
- snli
metrics:
- accuracy
license: apache-2.0
---
# Cross-Encoder for Natural Language Inference
This model was trained using [SentenceTransformers](https://sbert.net) [Cross-Encoder](https://www.sbert.net... | 2,784 |
kornosk/bert-election2020-twitter-stance-biden-KE-MLM | [
"LABEL_0",
"LABEL_1",
"LABEL_2"
] | ---
language: "en"
tags:
- twitter
- stance-detection
- election2020
- politics
license: "gpl-3.0"
---
# Pre-trained BERT on Twitter US Election 2020 for Stance Detection towards Joe Biden (KE-MLM)
Pre-trained weights for **KE-MLM model** in [Knowledge Enhance Masked Language Model for Stance Detection](https://www.a... | 3,602 |
cambridgeltl/trans-encoder-cross-simcse-roberta-large | [
"LABEL_0"
] | Entry not found | 15 |
uer/roberta-base-finetuned-jd-binary-chinese | [
"negative (stars 1, 2 and 3)",
"positive (stars 4 and 5)"
] | ---
language: zh
widget:
- text: "这本书真的很不错"
---
# Chinese RoBERTa-Base Models for Text Classification
## Model description
This is the set of 5 Chinese RoBERTa-Base classification models fine-tuned by [UER-py](https://arxiv.org/abs/1909.05658). You can download the 5 Chinese RoBERTa-Base classification models eith... | 5,141 |
w11wo/indonesian-roberta-base-sentiment-classifier | [
"negative",
"neutral",
"positive"
] | ---
language: id
tags:
- indonesian-roberta-base-sentiment-classifier
license: mit
datasets:
- indonlu
widget:
- text: "Jangan sampai saya telpon bos saya ya!"
---
## Indonesian RoBERTa Base Sentiment Classifier
Indonesian RoBERTa Base Sentiment Classifier is a sentiment-text-classification model based on the [... | 2,842 |
edumunozsala/roberta_bne_sentiment_analysis_es | [
"Negativo",
"Positivo"
] | ---
language: es
tags:
- sagemaker
- roberta-bne
- TextClassification
- SentimentAnalysis
license: apache-2.0
datasets:
- IMDbreviews_es
metrics:
- accuracy
model-index:
- name: roberta_bne_sentiment_analysis_es
results:
- task:
name: Sentiment Analysis
type: sentiment-analysis
dataset:
... | 3,629 |
bespin-global/klue-roberta-small-3i4k-intent-classification | [
"command",
"fragment",
"intonation-depedent utterance",
"question",
"rhetorical command",
"rhetorical question",
"statement"
] | ---
language: ko
tags:
- intent-classification
datasets:
- kor_3i4k
license: cc-by-nc-4.0
---
## Finetuning
- Pretrain Model : [klue/roberta-small](https://github.com/KLUE-benchmark/KLUE)
- Dataset for fine-tuning : [3i4k](https://github.com/warnikchow/3i4k)
- Train : 46,863
- Validation : 8,271 (15% of Train)
... | 2,549 |
cambridgeltl/trans-encoder-cross-simcse-bert-base | [
"LABEL_0"
] | Entry not found | 15 |
cambridgeltl/sst_mobilebert-uncased | [
"LABEL_0",
"LABEL_1",
"LABEL_2"
] | Entry not found | 15 |
Capreolus/bert-base-msmarco | null | # capreolus/bert-base-msmarco
## Model description
BERT-Base model (`google/bert_uncased_L-12_H-768_A-12`) fine-tuned on the MS MARCO passage classification task. It is intended to be used as a `ForSequenceClassification` model; see the [Capreolus BERT-MaxP implementation](https://github.com/capreolus-ir/capreolus/blo... | 778 |
patrickramosobf/bert-base-japanese-v2-wrime-fine-tune | [
"writer_joy",
"writer_sadness",
"reader_anticipation",
"reader_surprise",
"reader_anger",
"reader_fear",
"reader_disgust",
"reader_trust",
"writer_anticipation",
"writer_surprise",
"writer_anger",
"writer_fear",
"writer_disgust",
"writer_trust",
"reader_joy",
"reader_sadness"
] | ---
license: cc-by-sa-3.0
language:
- ja
tag:
- emotion-analysis
datasets:
- wrime
widget:
- text: "車のタイヤがパンクしてた。。いたずらの可能性が高いんだって。。"
---
# WRIME-fine-tuned BERT base Japanese
This model is a [Japanese BERT<sub>BASE</sub>](https://huggingface.co/cl-tohoku/bert-base-japanese-v2) fine-tuned on the [WRIME](https://github... | 3,030 |
tezign/BERT-LSTM-based-ABSA | [
"negative",
"neutral",
"positive"
] | ---
language: en
tags:
- aspect-term-sentiment-analysis
- pytorch
- ATSA
datasets:
- semeval2014
widget:
- text: "[CLS] The appearance is very nice, but the battery life is poor. [SEP] appearance [SEP] "
---
# Note
`Aspect term sentiment analysis`
BERT LSTM based baseline, based on https://github.com/avinashsai/BERT... | 1,443 |
akhooli/xlm-r-large-arabic-sent | [
"LABEL_0_mixed",
"LABEL_1_neg",
"LABEL_2_pos"
] | ---
language:
- ar
- en
license: mit
---
### xlm-r-large-arabic-sent
Multilingual sentiment classification (Label_0: mixed, Label_1: negative, Label_2: positive) of Arabic reviews by fine-tuning XLM-Roberta-Large.
Zero shot classification of other languages (also works in mixed languages - ex. Arabic & English). Mi... | 504 |
Abderrahim2/bert-finetuned-gender_classification | [
"female",
"male",
"undefined"
] | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- f1
- accuracy
model-index:
- name: bert-finetuned-gender_classification
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then r... | 2,198 |
DemangeJeremy/4-sentiments-with-flaubert | [
"MIXED",
"NEGATIVE",
"OBJECTIVE",
"POSITIVE"
] | ---
language: fr
tags:
- sentiments
- text-classification
- flaubert
- french
- flaubert-large
---
# Modèle de détection de 4 sentiments avec FlauBERT (mixed, negative, objective, positive)
Les travaux sont actuellement en cours. Je modifierai le modèle ces prochains jours.
### Comment l'utiliser ?
```python
fro... | 1,432 |
michiyasunaga/LinkBERT-large | null | ---
license: apache-2.0
language: en
datasets:
- wikipedia
- bookcorpus
tags:
- bert
- exbert
- linkbert
- feature-extraction
- fill-mask
- question-answering
- text-classification
- token-classification
---
## LinkBERT-large
LinkBERT-large model pretrained on English Wikipedia articles along with ... | 3,547 |
tomh/toxigen_hatebert | null | ---
language:
- en
tags:
- text-classification
---
Thomas Hartvigsen, Saadia Gabriel, Hamid Palangi, Maarten Sap, Dipankar Ray, Ece Kamar.
This model comes from the paper [ToxiGen: A Large-Scale Machine-Generated Dataset for Adversarial and Implicit Hate Speech Detection](https://arxiv.org/abs/2203.09509) and can... | 904 |
HooshvareLab/bert-fa-base-uncased-clf-digimag | [
"بازی ویدیویی",
"راهنمای خرید",
"سلامت و زیبایی",
"علم و تکنولوژی",
"عمومی",
"هنر و سینما",
"کتاب و ادبیات"
] | ---
language: fa
license: apache-2.0
---
# ParsBERT (v2.0)
A Transformer-based Model for Persian Language Understanding
We reconstructed the vocabulary and fine-tuned the ParsBERT v1.1 on the new Persian corpora in order to provide some functionalities for using ParsBERT in other scopes!
Please follow the [ParsBERT]... | 2,728 |
cambridgeltl/trans-encoder-cross-simcse-bert-large | [
"LABEL_0"
] | Entry not found | 15 |
Hate-speech-CNERG/dehatebert-mono-french | [
"NON_HATE",
"HATE"
] | ---
language: fr
license: apache-2.0
---
This model is used for detecting **hate speech** in the **French language**. The mono in the name refers to the monolingual setting, where the model is trained using only English language data. It is fine-tuned on a multilingual BERT model.
The model is trained with different learning rate... | 1,058 |
avichr/hebEMO_joy | null | # HebEMO - Emotion Recognition Model for Modern Hebrew
<img align="right" src="https://github.com/avichaychriqui/HeBERT/blob/main/data/heBERT_logo.png?raw=true" width="250">
HebEMO is a tool that detects polarity and extracts emotions from modern Hebrew User-Generated Content (UGC), which was trained on a unique Covid... | 5,431 |
mrm8488/deberta-v3-large-finetuned-mnli | [
"contradiction",
"entailment",
"neutral"
] | ---
language:
- en
license: mit
widget:
- text: "She was badly wounded already. Another spear would take her down."
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
model-index:
- name: deberta-v3-large-mnli-2
results:
- task:
name: Text Classification
type: text-classification
da... | 3,019 |
shahrukhx01/gbert-germeval-2021 | null | ---
language: "de"
tags:
- hate-speech-classification
widget:
- text: "Als jemand, der im real existierenden Sozialismus aufgewachsen ist, kann ich über George Weineberg nur sagen, dass er ein Voll...t ist. Finde es schon gut, dass der eingeladen wurde. Hat gezeigt, dass er viel Meinung hat, aber offensichtlich we... | 1,407 |
dpalominop/spanish-bert-apoyo | [
"LABEL_0",
"LABEL_1",
"LABEL_2"
] | ```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
tokenizer = AutoTokenizer.from_pretrained("dpalominop/spanish-bert-apoyo")
model = AutoModelForSequenceClassification.from_pretrained("dpalominop/spanish-bert-apoyo")
``` | 256 |
gchhablani/bert-base-cased-finetuned-mnli | [
"contradiction",
"entailment",
"neutral"
] | ---
language:
- en
license: apache-2.0
tags:
- generated_from_trainer
- fnet-bert-base-comparison
datasets:
- glue
metrics:
- accuracy
model-index:
- name: bert-base-cased-finetuned-mnli
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: GLUE MNLI
type: g... | 2,647 |
philschmid/distilbert-base-multilingual-cased-sentiment | [
"negative",
"neutral",
"positive"
] | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- amazon_reviews_multi
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-multilingual-cased-sentiment
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: amazon_reviews_multi
type... | 2,574 |
helliun/primary_or_secondary_v3 | null | Entry not found | 15 |
baykenney/bert-base-gpt2detector-topk40 | [
"Human",
"Machine"
] | Entry not found | 15 |
cardiffnlp/bertweet-base-irony | null | 0 | |
finiteautomata/beto-headlines-sentiment-analysis | [
"NEG",
"NEU",
"POS"
] | # Targeted Sentiment Analysis in News Headlines
BERT classifier fine-tuned in a news headlines dataset annotated for target polarity.
(details to be published)
## Examples
Input is as follows
`Headline [SEP] Target`
where headline is the news title and target is an entity present in the headline.
Try
`Alberto ... | 502 |
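The card above specifies the input format `Headline [SEP] Target`. A one-line helper (the function name is hypothetical) that assembles such an input:

```python
def build_input(headline: str, target: str) -> str:
    # Targeted sentiment input: news headline and target entity joined by [SEP]
    return f"{headline} [SEP] {target}"

print(build_input("Alberto anuncia nuevas medidas", "Alberto"))
# Alberto anuncia nuevas medidas [SEP] Alberto
```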
Aniemore/rubert-tiny2-russian-emotion-detection | [
"anger",
"disgust",
"enthusiasm",
"fear",
"happiness",
"neutral",
"sadness"
] | ---
license: gpl-3.0
language: ["ru"]
tags:
- russian
- classification
- emotion
- emotion-detection
- emotion-recognition
- multiclass
widget:
- text: "Как дела?"
- text: "Дурак твой дед"
- text: "Только попробуй!!!"
- text: "Не хочу в школу("
- text: "Сейчас ровно час дня"
- text: "А ты уверен, что эти полоски снизу... | 3,810 |
moussaKam/barthez-sentiment-classification | null | ---
tags:
- text-classification
- bart
language:
- fr
license: apache-2.0
widget:
- text: Barthez est le meilleur gardien du monde.
---
### Barthez model finetuned on opinion classification task.
paper: https://arxiv.org/abs/2010.12321 \
github: https://github.com/moussaKam/BARThez
```
@article{eddine2020barthez,... | 545 |
yangheng/deberta-v3-large-absa-v1.1 | [
"Negative",
"Neutral",
"Positive"
] |
---
language:
- en
tags:
- aspect-based-sentiment-analysis
- PyABSA
license: mit
datasets:
- laptop14
- restaurant14
- restaurant16
- ACL-Twitter
- MAMS
- Television
- TShirt
- Yelp
metrics:
- accuracy
- macro-f1
widget:
- text: "[CLS] when tables opened up, the manager sat another party befor... | 3,146 |