| modelId | label | readme | readme_len |
|---|---|---|---|
Jeevesh8/init_bert_ft_qqp-41 | null | Entry not found | 15 |
Jeevesh8/init_bert_ft_qqp-39 | null | Entry not found | 15 |
Jeevesh8/init_bert_ft_qqp-50 | null | Entry not found | 15 |
Jeevesh8/init_bert_ft_qqp-66 | null | Entry not found | 15 |
Jeevesh8/init_bert_ft_qqp-71 | null | Entry not found | 15 |
Jeevesh8/init_bert_ft_qqp-73 | null | Entry not found | 15 |
Jeevesh8/init_bert_ft_qqp-72 | null | Entry not found | 15 |
Jeevesh8/init_bert_ft_qqp-74 | null | Entry not found | 15 |
Jeevesh8/init_bert_ft_qqp-75 | null | Entry not found | 15 |
Jeevesh8/init_bert_ft_qqp-76 | null | Entry not found | 15 |
Jeevesh8/init_bert_ft_qqp-77 | null | Entry not found | 15 |
Jeevesh8/init_bert_ft_qqp-78 | null | Entry not found | 15 |
Jeevesh8/init_bert_ft_qqp-80 | null | Entry not found | 15 |
Jeevesh8/init_bert_ft_qqp-79 | null | Entry not found | 15 |
Jeevesh8/init_bert_ft_qqp-81 | null | Entry not found | 15 |
Jeevesh8/init_bert_ft_qqp-84 | null | Entry not found | 15 |
Jeevesh8/init_bert_ft_qqp-90 | null | Entry not found | 15 |
Jeevesh8/init_bert_ft_qqp-83 | null | Entry not found | 15 |
Jeevesh8/init_bert_ft_qqp-98 | null | Entry not found | 15 |
Jeevesh8/init_bert_ft_qqp-82 | null | Entry not found | 15 |
Jeevesh8/init_bert_ft_qqp-85 | null | Entry not found | 15 |
Jeevesh8/init_bert_ft_qqp-86 | null | Entry not found | 15 |
Jeevesh8/init_bert_ft_qqp-96 | null | Entry not found | 15 |
Jeevesh8/init_bert_ft_qqp-99 | null | Entry not found | 15 |
Jeevesh8/init_bert_ft_qqp-92 | null | Entry not found | 15 |
Jeevesh8/init_bert_ft_qqp-91 | null | Entry not found | 15 |
Jeevesh8/init_bert_ft_qqp-89 | null | Entry not found | 15 |
Jeevesh8/init_bert_ft_qqp-88 | null | Entry not found | 15 |
Jeevesh8/init_bert_ft_qqp-97 | null | Entry not found | 15 |
Jeevesh8/init_bert_ft_qqp-93 | null | Entry not found | 15 |
Jeevesh8/init_bert_ft_qqp-87 | null | Entry not found | 15 |
Jeevesh8/init_bert_ft_qqp-94 | null | Entry not found | 15 |
Jeevesh8/init_bert_ft_qqp-95 | null | Entry not found | 15 |
bvanaken/CORe-clinical-diagnosis-prediction | [
"003",
"0030",
"0031",
"0038",
"0039",
"004",
"0041",
"0048",
"0049",
"005",
"0051",
"0058",
"0059",
"007",
"0071",
"0074",
"008",
"0080",
"0084",
"0085",
"0086",
"0088",
"009",
"0090",
"0091",
"0092",
"0093",
"010",
"0108",
"011",
"0112",
"0113",
"011... | ---
language: "en"
tags:
- bert
- medical
- clinical
- diagnosis
- text-classification
thumbnail: "https://core.app.datexis.com/static/paper.png"
widget:
- text: "Patient with hypertension presents to ICU."
---
# CORe Model - Clinical Diagnosis Prediction
## Model description
The CORe (_Clinical Outcome Representat... | 3,143 |
philschmid/BERT-Banking77 | [
"Refund_not_showing_up",
"activate_my_card",
"age_limit",
"apple_pay_or_google_pay",
"atm_support",
"automatic_top_up",
"balance_not_updated_after_bank_transfer",
"balance_not_updated_after_cheque_or_cash_deposit",
"beneficiary_not_allowed",
"cancel_transfer",
"card_about_to_expire",
"card_acc... | ---
tags: autotrain
language: en
widget:
- text: I am still waiting on my card?
datasets:
- banking77
model-index:
- name: BERT-Banking77
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: BANKING77
type: banking77
metrics:
- name: Accuracy
... | 3,006 |
uer/roberta-base-finetuned-dianping-chinese | [
"negative (stars 1, 2 and 3)",
"positive (stars 4 and 5)"
] | ---
language: zh
widget:
- text: "这本书真的很不错"
---
# Chinese RoBERTa-Base Models for Text Classification
## Model description
This is the set of 5 Chinese RoBERTa-Base classification models fine-tuned by [UER-py](https://arxiv.org/abs/1909.05658). You can download the 5 Chinese RoBERTa-Base classification models eith... | 5,141 |
Intel/distilbert-base-uncased-finetuned-sst-2-english-int8-static | [
"0",
"1"
] | ---
language: en
license: apache-2.0
tags:
- text-classification
- int8
- Intel® Neural Compressor
- PostTrainingStatic
datasets:
- sst2
metrics:
- accuracy
---
# INT8 DistilBERT base uncased finetuned SST-2
### Post-training static quantization
This is an INT8 PyTorch model quantized with [Intel® Neural Compresso... | 1,095 |
nbroad/ESG-BERT | [
"Access_And_Affordability",
"Air_Quality",
"Business_Ethics",
"Business_Model_Resilience",
"Competitive_Behavior",
"Critical_Incident_Risk_Management",
"Customer_Privacy",
"Customer_Welfare",
"Data_Security",
"Director_Removal",
"Ecological_Impacts",
"Employee_Engagement_Inclusion_And_Diversit... | ---
language:
- en
tags:
- text-classification
- bert
- pytorch
license: apache-2.0
widget:
- text: "In fiscal year 2019, we reduced our comprehensive carbon footprint for the fourth consecutive year—down 35 percent compared to 2015, when Apple’s carbon emissions peaked, even as net revenue increased by 11 percent over... | 2,055 |
joeddav/distilbert-base-uncased-go-emotions-student | [
"admiration",
"amusement",
"anger",
"annoyance",
"approval",
"caring",
"confusion",
"curiosity",
"desire",
"disappointment",
"disapproval",
"disgust",
"embarrassment",
"excitement",
"fear",
"gratitude",
"grief",
"joy",
"love",
"nervousness",
"neutral",
"optimism",
"pride"... | ---
language: en
tags:
- text-classification
- pytorch
- tensorflow
datasets:
- go_emotions
license: mit
widget:
- text: "I feel lucky to be here."
---
# distilbert-base-uncased-go-emotions-student
## Model Description
This model is distilled from the zero-shot classification pipeline on the unlabeled GoEmotions dat... | 1,055 |
juliensimon/reviews-sentiment-analysis | null | ---
language:
- en
tags:
- distilbert
- sentiment-analysis
datasets:
- generated_reviews_enth
---
DistilBERT model fine-tuned on English-language product reviews.
A notebook for Amazon SageMaker is available in the 'code' subfolder.
| 237 |
uer/roberta-base-finetuned-jd-full-chinese | [
"star 1",
"star 2",
"star 3",
"star 4",
"star 5"
] | ---
language: zh
widget:
- text: "这本书真的很不错"
---
# Chinese RoBERTa-Base Models for Text Classification
## Model description
This is the set of 5 Chinese RoBERTa-Base classification models fine-tuned by [UER-py](https://arxiv.org/abs/1909.05658). You can download the 5 Chinese RoBERTa-Base classification models eith... | 5,141 |
neuraly/bert-base-italian-cased-sentiment | [
"negative",
"neutral",
"positive"
] | ---
language: it
thumbnail: https://neuraly.ai/static/assets/images/huggingface/thumbnail.png
tags:
- sentiment
- Italian
license: mit
widget:
- text: Huggingface è un team fantastico!
---
# 🤗 + neuraly - Italian BERT Sentiment model
## Model description
This model performs sentiment analysis on Italian sente... | 3,691 |
mrm8488/distilroberta-finetuned-financial-news-sentiment-analysis | [
"negative",
"neutral",
"positive"
] | ---
license: apache-2.0
tags:
- generated_from_trainer
- financial
- stocks
- sentiment
widget:
- text: "Operating profit totaled EUR 9.4 mn , down from EUR 11.7 mn in 2004 ."
datasets:
- financial_phrasebank
metrics:
- accuracy
model-index:
- name: distilRoberta-financial-sentiment
results:
- task:
name: Tex... | 2,044 |
cointegrated/rubert-base-cased-nli-threeway | [
"contradiction",
"entailment",
"neutral"
] | ---
language: ru
pipeline_tag: zero-shot-classification
tags:
- rubert
- russian
- nli
- rte
- zero-shot-classification
widget:
- text: "Я хочу поехать в Австралию"
candidate_labels: "спорт,путешествия,музыка,кино,книги,наука,политика"
hypothesis_template: "Тема текста - {}."
---
# RuBERT for NLI (natural language... | 6,160 |
smilegate-ai/kor_unsmile | [
"clean",
"기타 혐오",
"남성",
"성소수자",
"악플/욕설",
"여성/가족",
"연령",
"인종/국적",
"종교",
"지역"
] | Entry not found | 15 |
cross-encoder/ms-marco-TinyBERT-L-6 | [
"LABEL_0"
] | ---
license: apache-2.0
---
# Cross-Encoder for MS Marco
This model was trained on the [MS Marco Passage Ranking](https://github.com/microsoft/MSMARCO-Passage-Ranking) task.
The model can be used for Information Retrieval: Given a query, encode the query with all possible passages (e.g. retrieved with ElasticSearch).... | 3,233 |
cardiffnlp/bertweet-base-offensive | null | | 0 |
hackathon-pln-es/jurisbert-clas-art-convencion-americana-dh | [
"Artículo_29_Normas_de_Interpretación",
"Artículo 25. Protección Judicial",
"Artículo 3. Derecho al Reconocimiento de la Personalidad Jurídica",
"Artículo 13. Libertad de Pensamiento y de Expresión",
"Artículo 7. Derecho a la Libertad Personal",
"Artículo 63.1 Reparaciones",
"Artículo 30. Alcance de las... | ---
license: cc-by-nc-4.0
language: es
widget:
- text: "ADOPCIÓN. EL INTERÉS SUPERIOR DEL MENOR DE EDAD SE BASA EN LA IDONEIDAD DE LOS ADOPTANTES, DENTRO DE LA CUAL SON IRRELEVANTES EL TIPO DE FAMILIA AL QUE AQUÉL SERÁ INTEGRADO, ASÍ COMO LA ORIENTACIÓN SEXUAL O EL ESTADO CIVIL DE ÉSTOS."
---
## Descripción del... | 2,689 |
lingwave-admin/state-op-detector | null | ---
language:
- en
tags:
- classification
license: apache-2.0
widget:
- text: "Zimbabwe has all the Brilliant Minds to become the Next Dubai of Africa No wonder why is so confide | Invest Surplus yako iye into Healthcare that will save lives amp creat real Jobs in Healthcare Sector | To the African Diaspora in Americas... | 7,657 |
bhadresh-savani/bert-base-uncased-emotion | [
"anger",
"fear",
"joy",
"love",
"sadness",
"surprise"
] | ---
language:
- en
thumbnail: https://avatars3.githubusercontent.com/u/32437151?s=460&u=4ec59abc8d21d5feea3dab323d23a5860e6996a4&v=4
tags:
- text-classification
- emotion
- pytorch
license: apache-2.0
datasets:
- emotion
metrics:
- Accuracy, F1 Score
model-index:
- name: bhadresh-savani/bert-base-uncased-emotion
resu... | 3,979 |
Jiva/xlm-roberta-large-it-mnli | [
"contradiction",
"entailment",
"neutral"
] | ---
language: it
tags:
- text-classification
- pytorch
- tensorflow
datasets:
- multi_nli
- glue
license: mit
pipeline_tag: zero-shot-classification
widget:
- text: "La seconda guerra mondiale vide contrapporsi, tra il 1939 e il 1945, le cosiddette potenze dell'Asse e gli Alleati che, come già accaduto ai belligeranti ... | 5,995 |
obsei-ai/sell-buy-intent-classifier-bert-mini | null | ---
language: "en"
tags:
- buy-intent
- sell-intent
- consumer-intent
widget:
- text: "Can you please share pictures for Face Shields ? We are looking for large quantity pcs"
---
# Buy vs Sell Intent Classifier
| Train Loss | Validation Acc.| Test Acc.|
| ------------- |:-------------: | -----: |
| 0.013 | 0.... | 1,983 |
nateraw/bert-base-uncased-emotion | [
"anger",
"fear",
"joy",
"love",
"sadness",
"surprise"
] | ---
language:
- en
thumbnail: https://avatars3.githubusercontent.com/u/32437151?s=460&u=4ec59abc8d21d5feea3dab323d23a5860e6996a4&v=4
tags:
- text-classification
- emotion
- pytorch
license: apache-2.0
datasets:
- emotion
metrics:
- accuracy
---
# bert-base-uncased-emotion
## Model description
`bert-base-uncased` fi... | 1,136 |
textattack/bert-base-uncased-QQP | null | Entry not found | 15 |
valurank/distilroberta-bias | [
"BIASED",
"NEUTRAL"
] | ---
license: other
language: en
datasets:
- valurank/wikirev-bias
---
# DistilROBERTA fine-tuned for bias detection
This model is based on [distilroberta-base](https://huggingface.co/distilroberta-base) pretrained weights, with a classification head fine-tuned to classify text into 2 categories (neutral, biased).
## ... | 686 |
Alireza1044/albert-base-v2-sst2 | null | ---
language:
- en
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
model_index:
- name: sst2
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: GLUE SST2
type: glue
args: sst2
metric:
name: Accuracy
... | 1,371 |
sgugger/tiny-distilbert-classification | [
"NEGATIVE",
"POSITIVE"
] | Entry not found | 15 |
finiteautomata/beto-emotion-analysis | [
"anger",
"disgust",
"fear",
"joy",
"others",
"sadness",
"surprise"
] | ---
language:
- es
tags:
- emotion-analysis
---
# Emotion Analysis in Spanish
## beto-emotion-analysis
Repository: [https://github.com/finiteautomata/pysentimiento/](https://github.com/finiteautomata/pysentimiento/)
Model trained with TASS 2020 Task 2 corpus for Emotion detection in Spanish. Base model is [B... | 1,567 |
deepset/gbert-base-germandpr-reranking | [
"0",
"1"
] | ---
language: de
datasets:
- deepset/germandpr
license: mit
---
## Overview
**Language model:** gbert-base-germandpr-reranking
**Language:** German
**Training data:** GermanDPR train set (~ 56MB)
**Eval data:** GermanDPR test set (~ 6MB)
**Infrastructure**: 1x V100 GPU
**Published**: June 3rd, 2021
## Deta... | 3,221 |
peerapongch/baikal-sentiment-ball | [
"LABEL_0",
"LABEL_1",
"LABEL_2"
] | Entry not found | 15 |
shahrukhx01/roberta-base-boolq | null | ---
language: "en"
tags:
- boolean-qa
widget:
- text: "Is Berlin the smallest city of Germany? <s> Berlin is the capital and largest city of Germany by both area and population. Its 3.8 million inhabitants make it the European Union's most populous city, according to the population within city limits "
---
# Labels Ma... | 1,423 |
CAMeL-Lab/bert-base-arabic-camelbert-mix-sentiment | [
"negative",
"neutral",
"positive"
] | ---
language:
- ar
license: apache-2.0
widget:
- text: "أنا بخير"
---
# CAMeLBERT Mix SA Model
## Model description
**CAMeLBERT Mix SA Model** is a Sentiment Analysis (SA) model that was built by fine-tuning the [CAMeLBERT Mix](https://huggingface.co/CAMeL-Lab/bert-base-arabic-camelbert-mix/) model.
For the fine-tuni... | 3,348 |
microsoft/DialogRPT-human-vs-rand | null | # Demo
Please try this [➤➤➤ Colab Notebook Demo (click me!)](https://colab.research.google.com/drive/1cAtfkbhqsRsT59y3imjR1APw3MHDMkuV?usp=sharing)
| Context | Response | `human_vs_rand` score |
| :------ | :------- | :------------: |
| I love NLP! | He is a great basketball player. | 0.027 |
| I love NLP! | C... | 2,721 |
AkshatSurolia/ICD-10-Code-Prediction | [
"LABEL_0",
"LABEL_1",
"LABEL_10",
"LABEL_100",
"LABEL_1000",
"LABEL_10000",
"LABEL_10001",
"LABEL_10002",
"LABEL_10003",
"LABEL_10004",
"LABEL_10005",
"LABEL_10006",
"LABEL_10007",
"LABEL_10008",
"LABEL_10009",
"LABEL_1001",
"LABEL_10010",
"LABEL_10011",
"LABEL_10012",
"LABEL_1... | ---
license: apache-2.0
tags:
- text-classification
datasets:
- Mimic III
---
# Clinical BERT for ICD-10 Prediction
The Publicly Available Clinical BERT Embeddings paper contains four unique clinicalBERT models: initialized with BERT-Base (cased_L-12_H-768_A-12) or BioBERT (BioBERT-Base v1.0 + PubMed 200K +... | 1,190 |
MoritzLaurer/DeBERTa-v3-large-mnli-fever-anli-ling-wanli | [
"contradiction",
"entailment",
"neutral"
] | ---
language:
- en
tags:
- text-classification
- zero-shot-classification
license: mit
metrics:
- accuracy
datasets:
- multi_nli
- anli
- fever
- lingnli
- alisawuffles/WANLI
pipeline_tag: zero-shot-classification
#- text-classification
#widget:
#- text: "I first thought that I really liked the movie, but upon second ... | 10,699 |
textattack/roberta-base-MNLI | [
"LABEL_0",
"LABEL_1",
"LABEL_2"
] | Entry not found | 15 |
cointegrated/rubert-tiny2-cedr-emotion-detection | [
"anger",
"fear",
"joy",
"no_emotion",
"sadness",
"surprise"
] | ---
language: ["ru"]
tags:
- russian
- classification
- sentiment
- emotion-classification
- multiclass
datasets:
- cedr
widget:
- text: "Бесишь меня, падла"
- text: "Как здорово, что все мы здесь сегодня собрались"
- text: "Как-то стрёмно, давай свалим отсюда?"
- text: "Грусть-тоска меня съедает"
- text: "Данный фрагм... | 1,680 |
vlsb/autotrain-security-texts-classification-distilroberta-688220764 | [
"irrelevant",
"relevant"
] | ---
tags: autotrain
language: unk
widget:
- text: "I love AutoTrain 🤗"
datasets:
- vlsb/autotrain-data-security-texts-classification-distilroberta
co2_eq_emissions: 2.0817207656772445
---
# Model Trained Using AutoTrain
- Problem type: Binary Classification
- Model ID: 688220764
- CO2 Emissions (in grams): 2.0817207... | 1,298 |
michiyasunaga/BioLinkBERT-base | null | ---
license: apache-2.0
language: en
datasets:
- pubmed
tags:
- bert
- exbert
- linkbert
- biolinkbert
- feature-extraction
- fill-mask
- question-answering
- text-classification
- token-classification
widget:
- text: "Sunitinib is a tyrosine kinase inhibitor"
---
## BioLinkBERT-base
BioLinkBERT-... | 3,823 |
SkolkovoInstitute/roberta_toxicity_classifier | [
"neutral",
"toxic"
] | ---
language:
- en
tags:
- toxic comments classification
licenses:
- cc-by-nc-sa
---
## Toxicity Classification Model
This model is trained for the toxicity classification task. The training data is a merge of the English parts of three datasets by **Jigsaw** ([Jigsaw 2018](https://www.kaggle.com/c/jigs... | 1,653 |
salesken/query_wellformedness_score | [
"LABEL_0"
] | ---
tags: salesken
license: apache-2.0
inference: true
datasets: google_wellformed_query
widget:
- text: "what was the reason for everyone for leave the company"
---
This model evaluates the wellformedness (non-fragment, grammatically correct) score of a sentence. The model is case-sensitive and penalises for incorrec... | 1,431 |
blanchefort/rubert-base-cased-sentiment-rusentiment | [
"NEUTRAL",
"POSITIVE",
"NEGATIVE"
] | ---
language:
- ru
tags:
- sentiment
- text-classification
datasets:
- RuSentiment
---
# RuBERT for Sentiment Analysis
This is a [DeepPavlov/rubert-base-cased-conversational](https://huggingface.co/DeepPavlov/rubert-base-cased-conversational) model trained on [RuSentiment](http://text-machine.cs.uml.edu/projects/ruse... | 1,362 |
michaelrglass/albert-base-rci-wikisql-row | null | Entry not found | 15 |
philschmid/MiniLM-L6-H384-uncased-sst2 | null | Entry not found | 15 |
michaelrglass/albert-base-rci-wikisql-col | null | Entry not found | 15 |
aychang/roberta-base-imdb | [
"neg",
"pos"
] | ---
language:
- en
thumbnail:
tags:
- text-classification
license: mit
datasets:
- imdb
metrics:
---
# IMDB Sentiment Task: roberta-base
## Model description
A simple RoBERTa base model trained on the "imdb" dataset.
## Intended uses & limitations
#### How to use
##### Transformers
```python
# Load model and ... | 2,147 |
jaehyeong/koelectra-base-v3-generalized-sentiment-analysis | [
"0",
"1"
] | # Usage
```python
# import library
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification, TextClassificationPipeline
# load model
tokenizer = AutoTokenizer.from_pretrained("jaehyeong/koelectra-base-v3-generalized-sentiment-analysis")
model = AutoModelForSequenceClassification.from_pre... | 1,675 |
mrm8488/codebert-base-finetuned-detect-insecure-code | null | ---
language: en
datasets:
- codexglue
---
# CodeBERT fine-tuned for Insecure Code Detection 💾⛔
[codebert-base](https://huggingface.co/microsoft/codebert-base) fine-tuned on [CodeXGLUE -- Defect Detection](https://github.com/microsoft/CodeXGLUE/tree/main/Code-Code/Defect-detection) dataset for **Insecure Code Detec... | 3,815 |
textattack/bert-base-uncased-rotten-tomatoes | null | ## TextAttack Model Card
This `bert-base-uncased` model was fine-tuned for sequence classification using TextAttack
and the rotten_tomatoes dataset loaded using the `nlp` library. The model was fine-tuned
for 10 epochs with a batch size of 16, a learning
rate of 2e-05, and a maximum sequence lengt... | 673 |
textattack/distilbert-base-cased-CoLA | null | Entry not found | 15 |
microsoft/deberta-v2-xlarge-mnli | [
"CONTRADICTION",
"NEUTRAL",
"ENTAILMENT"
] | ---
language: en
tags:
- deberta
- deberta-mnli
tasks: mnli
thumbnail: https://huggingface.co/front/thumbnails/microsoft.png
license: mit
widget:
- text: "[CLS] I love you. [SEP] I like you. [SEP]"
---
## DeBERTa: Decoding-enhanced BERT with Disentangled Attention
[DeBERTa](https://arxiv.org/abs/2006.03654) improves... | 3,956 |
prajjwal1/bert-medium-mnli | [
"LABEL_0",
"LABEL_1",
"LABEL_2"
] | The following model is a Pytorch pre-trained model obtained from converting Tensorflow checkpoint found in the [official Google BERT repository](https://github.com/google-research/bert). These BERT variants were introduced in the paper [Well-Read Students Learn Better: On the Importance of Pre-training Compact Models](... | 996 |
DaNLP/da-bert-tone-sentiment-polarity | [
"positive",
"neutral",
"negative"
] | ---
language:
- da
tags:
- bert
- pytorch
- sentiment
- polarity
license: cc-by-sa-4.0
datasets:
- Twitter Sentiment
- Europarl Sentiment
metrics:
- f1
widget:
- text: Det er super godt
---
# Danish BERT Tone for sentiment polarity detection
The BERT Tone model detects sentiment polarity (positive, neutral or negativ... | 1,182 |
IDEA-CCNL/Erlangshen-Roberta-110M-Similarity | null | ---
language:
- zh
license: apache-2.0
tags:
- bert
- NLU
- NLI
inference: true
widget:
- text: "今天心情不好[SEP]今天很开心"
---
# Erlangshen-Roberta-110M-Similarity, a Chinese model from [Fengshenbang-LM](https://github.com/IDEA-CCNL/Fengshenbang-LM).
We collect 20 paraphrase datasets in the Chinese domain for f... | 1,626 |
textattack/bert-base-uncased-MNLI | [
"LABEL_0",
"LABEL_1",
"LABEL_2"
] | Entry not found | 15 |
mrsinghania/asr-question-detection | null | <i>Question vs Statement classifier</i> trained on more than 7k samples which were coming from spoken data in an interview setting
<b>Code for using in Transformers:</b>
from transformers import AutoTokenizer, AutoModelForSequenceClassification
tokenizer = AutoTokenizer.from_pretrained("mrsinghania/asr-question-de... | 427 |
textattack/roberta-base-QNLI | null | Entry not found | 15 |
uer/roberta-base-finetuned-chinanews-chinese | [
"mainland China politics",
"Hong Kong - Macau politics",
"International news",
"financial news",
"culture",
"entertainment",
"sports"
] | ---
language: zh
widget:
- text: "这本书真的很不错"
---
# Chinese RoBERTa-Base Models for Text Classification
## Model description
This is the set of 5 Chinese RoBERTa-Base classification models fine-tuned by [UER-py](https://arxiv.org/abs/1909.05658). You can download the 5 Chinese RoBERTa-Base classification models eith... | 5,141 |
michiyasunaga/BioLinkBERT-large | null | ---
license: apache-2.0
language: en
datasets:
- pubmed
tags:
- bert
- exbert
- linkbert
- biolinkbert
- feature-extraction
- fill-mask
- question-answering
- text-classification
- token-classification
widget:
- text: "Sunitinib is a tyrosine kinase inhibitor"
---
## BioLinkBERT-large
BioLinkBERT... | 3,827 |
IDEA-CCNL/Erlangshen-Roberta-110M-Sentiment | null | ---
language:
- zh
license: apache-2.0
tags:
- bert
- NLU
- Sentiment
- Chinese
inference: true
widget:
- text: "今天心情不好"
---
# Erlangshen-Roberta-110M-Sentiment, a Chinese model from [Fengshenbang-LM](https://github.com/IDEA-CCNL/Fengshenbang-LM).
We collect 8 sentiment datasets in the Chinese domain fo... | 1,545 |
cross-encoder/qnli-distilroberta-base | [
"LABEL_0"
] | ---
license: apache-2.0
---
# Cross-Encoder for Quora Duplicate Questions Detection
This model was trained using [SentenceTransformers](https://sbert.net) [Cross-Encoder](https://www.sbert.net/examples/applications/cross-encoder/README.html) class.
## Training Data
Given a question and paragraph, can the question be a... | 1,996 |
dennlinger/roberta-cls-consec | [
"LABEL_0",
"LABEL_1"
] | # About this model: Topical Change Detection in Documents
This network has been fine-tuned for the task described in the paper *Topical Change Detection in Documents via Embeddings of Long Sequences* and is our best-performing base-transformer model. You can find more detailed information in our GitHub page for the pap... | 1,995 |
philschmid/tiny-bert-sst2-distilled | [
"negative",
"positive"
] | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
model-index:
- name: tiny-bert-sst2-distilled
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
args: sst2
metrics:
- name: Accuracy... | 2,036 |
mdhugol/indonesia-bert-sentiment-classification | [
"LABEL_0",
"LABEL_1",
"LABEL_2"
] | Indonesian BERT Base Sentiment Classifier is a sentiment-text-classification model. It was fine-tuned from the pre-trained [IndoBERT Base Model (phase1 - uncased)](https://huggingface.co/indobenchmark/indobert-base-p1) using the [Prosa sentiment dataset](https://github.com/indobenchmark/indonlu/tree/master/dataset/... | 1,299 |
bergum/xtremedistil-l6-h384-go-emotion | [
"admiration 👏",
"amusement 😂",
"anger 😡",
"annoyance 😒",
"approval 👍",
"caring 🤗",
"confusion 😕",
"curiosity 🤔",
"desire 😍",
"disappointment 😞",
"disapproval 👎",
"disgust 🤮",
"embarrassment 😳",
"excitement 🤩",
"fear 😨",
"gratitude 🙏",
"grief 😢",
"joy 😃",
"love ❤... | ---
license: apache-2.0
datasets:
- go_emotions
metrics:
- accuracy
model-index:
- name: xtremedistil-emotion
results:
- task:
name: Multi Label Text Classification
type: multi_label_classification
dataset:
name: go_emotions
type: emotion
args: default
metrics:
- name: Acc... | 1,646 |
DaNLP/da-bert-emotion-classification | [
"Foragt/Modvilje",
"Forventning/Interrese",
"Frygt/Bekymret",
"Glæde/Sindsro",
"Overasket/Målløs",
"Sorg/trist",
"Tillid/Accept",
"Vrede/Irritation"
] | ---
language:
- da
tags:
- bert
- pytorch
- emotion
license: cc-by-sa-4.0
datasets:
- social media
metrics:
- f1
widget:
- text: Jeg ejer en rød bil og det er en god bil.
---
# Danish BERT for emotion classification
The BERT Emotion model classifies a Danish text into one of the following classes:
* Glæde/Sindsro
* Tilli... | 1,385 |
pedropei/sentence-level-certainty | [
"LABEL_0"
] | Entry not found | 15 |
aloxatel/bert-base-mnli | [
"LABEL_0",
"LABEL_1",
"LABEL_2"
] | Entry not found | 15 |
DaNLP/da-electra-hatespeech-detection | [
"not offensive",
"offensive"
] | ---
language:
- da
tags:
- electra
- pytorch
- hatespeech
license: cc-by-4.0
datasets:
- social media
metrics:
- f1
widget:
- text: "Senile gamle idiot"
---
# Danish ELECTRA for hate speech (offensive language) detection
The ELECTRA Offensive model detects whether a Danish text is offensive or not.
It is based on th... | 1,022 |
aypan17/roberta-base-imdb | null | ---
license: mit
---
TrainingArgs:
lr=2e-5,
train-batch-size=16,
eval-batch-size=16,
num-train-epochs=5,
weight-decay=0.01,
| 135 |
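Each row above pairs a Hub model ID with its label list and a (truncated) readme. As a minimal sketch of how a row can be reproduced, assuming the `transformers` library is installed and the Hugging Face Hub is reachable: the label column corresponds to the `id2label` map in a model's config, and a widget sentence from the readme column can be run through a `text-classification` pipeline. The model ID and example sentence below are taken from the `mrm8488/distilroberta-finetuned-financial-news-sentiment-analysis` row.

```python
# Minimal sketch: reproduce one row of the table above.
# Assumes `transformers` is installed and the Hub is reachable;
# the model ID and widget sentence are taken from the table.
from transformers import AutoConfig, pipeline

model_id = "mrm8488/distilroberta-finetuned-financial-news-sentiment-analysis"

# The "label" column corresponds to the id2label map in the model config.
config = AutoConfig.from_pretrained(model_id)
print(sorted(config.id2label.values()))  # ['negative', 'neutral', 'positive']

# The widget text from the readme column can be classified directly.
classifier = pipeline("text-classification", model=model_id)
print(classifier("Operating profit totaled EUR 9.4 mn , down from EUR 11.7 mn in 2004 ."))
```

The same pattern applies to any other model ID in the table.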