Dataset preview: `text` column (strings, 21 to 2.11k characters) and `label` column (class label with 2 classes: `dataset_mention`, `no_dataset_mention`).
DreamBooth model for the britazzleshorg concept trained by Nlpeva on the Nlpeva/British_shorthair dataset. This is a Stable Diffusion model fine-tuned on the britazzleshorg concept with DreamBooth. It can be used by modifying the `instance_prompt`: **a photo of britazzleshorg cat** This model was created as part of the DreamBooth Hackathon 🔥. Visit the [organisation page](https://huggingface.co/dreambooth-hackathon) for instructions on how to take part!
0dataset_mention
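A minimal usage sketch for the DreamBooth card above, assuming the weights are published under a repo id such as `Nlpeva/britazzleshorg` (the card does not state the exact path) and loaded with the `diffusers` library:

```python
import torch
from diffusers import StableDiffusionPipeline

# Hypothetical repo id; replace with the actual DreamBooth checkpoint path.
pipe = StableDiffusionPipeline.from_pretrained(
    "Nlpeva/britazzleshorg", torch_dtype=torch.float16
).to("cuda")

# The instance prompt from the card selects the learned concept.
image = pipe("a photo of britazzleshorg cat sitting on a windowsill").images[0]
image.save("britazzleshorg.png")
```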
Intended uses & limitations More information needed
1no_dataset_mention
Training data We use Chinese and Japanese Wikipedia to train the model.
0dataset_mention
Intended uses & limitations More information needed
1no_dataset_mention
Training procedure
1no_dataset_mention
Training procedure
1no_dataset_mention
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->
1no_dataset_mention
Training data The BERT model was pretrained on [BookCorpus](https://yknzhu.wixsite.com/mbweb), a dataset consisting of 11,038 unpublished books and [English Wikipedia](https://en.wikipedia.org/wiki/English_Wikipedia) (excluding lists, tables and headers).
0dataset_mention
sentiment-model-on-imdb-dataset This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset. It achieves the following results on the evaluation set: - Loss: 0.3694 - Accuracy: 0.85 - F1: 0.8544
0dataset_mention
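A hedged inference sketch for the IMDB sentiment card above; the namespace in the model id is a placeholder, since the card does not state the full repo path:

```python
from transformers import pipeline

# "<user>/sentiment-model-on-imdb-dataset" is a placeholder repo id.
classifier = pipeline(
    "sentiment-analysis",
    model="<user>/sentiment-model-on-imdb-dataset",
)
print(classifier("A slow start, but the final act is genuinely moving."))
# Expected output shape: [{'label': ..., 'score': ...}]
```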
Training procedure
1no_dataset_mention
Intended uses & limitations More information needed
1no_dataset_mention
mini-mlm-tweet-target-imdb This model is a fine-tuned version of [muhtasham/mini-mlm-tweet](https://huggingface.co/muhtasham/mini-mlm-tweet) on the imdb dataset. It achieves the following results on the evaluation set: - Loss: 0.4742 - Accuracy: 0.8324 - F1: 0.9085
0dataset_mention
Training procedure
1no_dataset_mention
Intended uses & limitations More information needed
1no_dataset_mention
Intended uses & limitations More information needed
1no_dataset_mention
Intended uses & limitations More information needed
1no_dataset_mention
Training procedure
1no_dataset_mention
Information Retrieval You can also use **customized embeddings** for information retrieval.

```python
import numpy as np
from sklearn.metrics.pairwise import cosine_similarity

# `model` is assumed to be the embedding model loaded earlier in the original card,
# e.g. an INSTRUCTOR-style model: model = INSTRUCTOR("hkunlp/instructor-large").
query = [['Represent the Wikipedia question for retrieving supporting documents: ',
          'where is the food stored in a yam plant']]
corpus = [['Represent the Wikipedia document for retrieval: ',
           'Capitalism has been dominant in the Western world since the end of feudalism, but most feel[who?] that the term "mixed economies" more precisely describes most contemporary economies, due to their containing both private-owned and state-owned enterprises. In capitalism, prices determine the demand-supply scale. For example, higher demand for certain goods and services lead to higher prices and lower demand for certain goods lead to lower prices.'],
          ['Represent the Wikipedia document for retrieval: ',
           "The disparate impact theory is especially controversial under the Fair Housing Act because the Act regulates many activities relating to housing, insurance, and mortgage loans—and some scholars have argued that the theory's use under the Fair Housing Act, combined with extensions of the Community Reinvestment Act, contributed to rise of sub-prime lending and the crash of the U.S. housing market and ensuing global economic recession"],
          ['Represent the Wikipedia document for retrieval: ',
           'Disparate impact in United States labor law refers to practices in employment, housing, and other areas that adversely affect one group of people of a protected characteristic more than another, even though rules applied by employers or landlords are formally neutral. Although the protected classes vary by statute, most federal civil rights laws protect based on race, color, religion, national origin, and sex as protected traits, and some laws include disability status and other traits as well.']]

query_embeddings = model.encode(query)
corpus_embeddings = model.encode(corpus)
similarities = cosine_similarity(query_embeddings, corpus_embeddings)
retrieved_doc_id = np.argmax(similarities)
print(retrieved_doc_id)
```
0dataset_mention
Intended uses & limitations More information needed
1no_dataset_mention
Model description TAPAS is a BERT-like transformers model pretrained on a large corpus of English data from Wikipedia in a self-supervised fashion. This means it was pretrained on the raw tables and associated texts only, with no humans labelling them in any way (which is why it can use lots of publicly available data), with an automatic process to generate inputs and labels from those texts. More precisely, it was pretrained with two objectives: - Masked language modeling (MLM): taking a (flattened) table and associated context, the model randomly masks 15% of the words in the input, then runs the entire (partially masked) sequence through the model. The model then has to predict the masked words. This is different from traditional recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of a table and associated text. - Intermediate pre-training: to encourage numerical reasoning on tables, the authors additionally pre-trained the model by creating a balanced dataset of millions of syntactically created training examples. Here, the model must predict (classify) whether a sentence is supported or refuted by the contents of a table. The training examples are created based on synthetic as well as counterfactual statements. This way, the model learns an inner representation of the English language used in tables and associated texts, which can then be used to extract features useful for downstream tasks such as answering questions about a table, or determining whether a sentence is entailed or refuted by the contents of a table. Fine-tuning is done by adding one or more classification heads on top of the pre-trained model, and then jointly training these randomly initialized classification heads with the base model on a downstream task.
0dataset_mention
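To make the table question answering use case above concrete, here is a hedged sketch using the `transformers` table-question-answering pipeline with `google/tapas-base-finetuned-wtq`, one of the fine-tuned TAPAS checkpoints (older `transformers` releases may also require `torch-scatter` for TAPAS):

```python
import pandas as pd
from transformers import pipeline

tqa = pipeline("table-question-answering", model="google/tapas-base-finetuned-wtq")

# TAPAS expects every table cell as a string.
table = pd.DataFrame({
    "City": ["Paris", "London", "Berlin"],
    "Population (millions)": ["2.1", "8.9", "3.6"],
})
print(tqa(table=table, query="Which city has the largest population?"))
```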
Training procedure
1no_dataset_mention
Intended uses & limitations More information needed
1no_dataset_mention
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->
1no_dataset_mention
Training procedure
1no_dataset_mention
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->
1no_dataset_mention
Limitations and bias This model is trained on WikiNEuRal, a state-of-the-art dataset for Multilingual NER automatically derived from Wikipedia. Therefore, it may not generalize well to all textual genres (e.g. news). On the other hand, models trained only on news articles (e.g. only on CoNLL03) have been shown to obtain much lower scores on encyclopedic articles. To obtain a more robust system, we encourage training a system on the combination of WikiNEuRal + CoNLL.
0dataset_mention
Training procedure
1no_dataset_mention
Intended uses & limitations More information needed
1no_dataset_mention
Intended uses & limitations More information needed
1no_dataset_mention
Intended uses & limitations More information needed
1no_dataset_mention
Intended uses & limitations More information needed
1no_dataset_mention
Model description TAPAS is a BERT-like transformers model pretrained on a large corpus of English data from Wikipedia in a self-supervised fashion. This means it was pretrained on the raw tables and associated texts only, with no humans labelling them in any way (which is why it can use lots of publicly available data), with an automatic process to generate inputs and labels from those texts. More precisely, it was pretrained with two objectives: - Masked language modeling (MLM): taking a (flattened) table and associated context, the model randomly masks 15% of the words in the input, then runs the entire (partially masked) sequence through the model. The model then has to predict the masked words. This is different from traditional recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of a table and associated text. - Intermediate pre-training: to encourage numerical reasoning on tables, the authors additionally pre-trained the model by creating a balanced dataset of millions of syntactically created training examples. Here, the model must predict (classify) whether a sentence is supported or refuted by the contents of a table. The training examples are created based on synthetic as well as counterfactual statements. This way, the model learns an inner representation of the English language used in tables and associated texts, which can then be used to extract features useful for downstream tasks such as answering questions about a table, or determining whether a sentence is entailed or refuted by the contents of a table. Fine-tuning is done by adding one or more classification heads on top of the pre-trained model, and then jointly training these randomly initialized classification heads with the base model on a downstream task.
0dataset_mention
Intended uses & limitations More information needed
1no_dataset_mention
Model Description This model was captured during a reproduction of [BERT-base uncased](https://github.com/google-research/bert), for English: it is a Transformers model pretrained on a large corpus of English data, using the Masked Language Modelling (MLM) and the Next Sentence Prediction (NSP) objectives. The intended uses, limitations, training data and training procedure for the fully trained model are similar to [BERT-base uncased](https://github.com/google-research/bert). Two major differences with the original model: * We pre-trained the MultiBERTs models for 2 million steps using sequence length 512 (instead of 1 million steps using sequence length 128 then 512). * We used an alternative version of Wikipedia and Books Corpus, initially collected for [Turc et al., 2019](https://arxiv.org/abs/1908.08962). This is a best-effort reproduction, and so it is probable that differences with the original model have gone unnoticed. The performance of MultiBERTs on GLUE after full training is oftentimes comparable to that of original BERT, but we found significant differences on the dev set of SQuAD (MultiBERTs outperforms original BERT). See our [technical report](https://arxiv.org/abs/2106.16163) for more details.
0dataset_mention
Citation Information

```bibtex
@inproceedings{tedeschi-etal-2021-wikineural-combined,
    title = "{W}iki{NE}u{R}al: {C}ombined Neural and Knowledge-based Silver Data Creation for Multilingual {NER}",
    author = "Tedeschi, Simone and Maiorca, Valentino and Campolungo, Niccol{\`o} and Cecconi, Francesco and Navigli, Roberto",
    booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2021",
    month = nov,
    year = "2021",
    address = "Punta Cana, Dominican Republic",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2021.findings-emnlp.215",
    pages = "2521--2533",
    abstract = "Multilingual Named Entity Recognition (NER) is a key intermediate task which is needed in many areas of NLP. In this paper, we address the well-known issue of data scarcity in NER, especially relevant when moving to a multilingual scenario, and go beyond current approaches to the creation of multilingual silver data for the task. We exploit the texts of Wikipedia and introduce a new methodology based on the effective combination of knowledge-based approaches and neural models, together with a novel domain adaptation technique, to produce high-quality training corpora for NER. We evaluate our datasets extensively on standard benchmarks for NER, yielding substantial improvements up to 6 span-based F1-score points over previous state-of-the-art systems for data creation.",
}
```
0dataset_mention
Intended uses & limitations More information needed
1no_dataset_mention
bert-base-uncased-finetuned-imdb This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 2.2887
0dataset_mention
Training procedure
1no_dataset_mention
Overview BERT (Bidirectional Encoder Representations from Transformers) provides dense vector representations for natural language by using a deep, pre-trained neural network with the Transformer architecture. It was originally published by Jacob Devlin, Ming-Wei Chang, Kenton Lee, Kristina Toutanova: ["BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding"](https://arxiv.org/abs/1810.04805), 2018. This model uses the implementation of BERT from the TensorFlow Models repository on GitHub at [tensorflow/models/official/legacy/bert](https://github.com/tensorflow/models/tree/master/official/legacy/bert). It uses L=12 hidden layers (i.e., Transformer blocks), a hidden size of H=768, and A=12 attention heads. For other model sizes, see the [BERT](https://tfhub.dev/google/collections/bert/1) collection. The weights of this model are those released by the original BERT authors. This model has been pre-trained for English on the Wikipedia and BooksCorpus. Text inputs have been normalized the "cased" way, meaning that the distinction between lower and upper case as well as accent markers have been preserved. For training, random input masking has been applied independently to word pieces (as in the original BERT paper). All parameters in the module are trainable, and fine-tuning all parameters is the recommended practice.
0dataset_mention
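A hedged loading sketch for the TensorFlow Hub BERT encoder described above; the handles and version numbers below are assumptions, so check the linked BERT collection for the current URLs:

```python
import tensorflow as tf
import tensorflow_hub as hub
import tensorflow_text  # noqa: F401  (registers the ops used by the preprocessing model)

# Handles below are assumed examples from the public BERT collection on TF Hub.
preprocessor = hub.KerasLayer("https://tfhub.dev/tensorflow/bert_en_cased_preprocess/3")
encoder = hub.KerasLayer(
    "https://tfhub.dev/tensorflow/bert_en_cased_L-12_H-768_A-12/4", trainable=True
)

inputs = preprocessor(tf.constant(["BERT provides dense vector representations."]))
outputs = encoder(inputs)
print(outputs["pooled_output"].shape)  # (1, 768)
```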
Training procedure
1no_dataset_mention
Intended uses & limitations More information needed
1no_dataset_mention
Intended uses & limitations More information needed
1no_dataset_mention
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->
1no_dataset_mention
Intended uses & limitations More information needed
1no_dataset_mention
As Text Classifier

```python
from transformers import pipeline

pretrained_name = "w11wo/javanese-gpt2-small-imdb-classifier"
nlp = pipeline(
    "sentiment-analysis",
    model=pretrained_name,
    tokenizer=pretrained_name
)
nlp("Film sing apik banget!")
```
0dataset_mention
Intended uses & limitations More information needed
1no_dataset_mention
anglicisms-spanish-flair-cs This is a pretrained model for detecting unassimilated English lexical borrowings (a.k.a. anglicisms) in Spanish newswire. This model labels words of foreign origin (fundamentally from English) used in the Spanish language, words such as *fake news*, *machine learning*, *smartwatch*, *influencer* or *streaming*. The model is a BiLSTM-CRF model fed with [Transformer-based embeddings pretrained on codeswitched data](https://huggingface.co/sagorsarker/codeswitch-spaeng-lid-lince) along with subword embeddings (BPE and character embeddings). The model was trained on the [COALAS](https://github.com/lirondos/coalas/) corpus for the task of detecting lexical borrowings. The model considers two labels: * ``ENG``: For English lexical borrowings (*smartphone*, *online*, *podcast*) * ``OTHER``: For lexical borrowings from any other language (*boutique*, *anime*, *umami*) The model uses BIO encoding to account for multitoken borrowings. **⚠ There is another [mBERT-based model](https://huggingface.co/lirondos/anglicisms-spanish-mbert) for this same task trained using the ``Transformers`` library**. That model, however, produced worse results than this Flair-based model (F1 = 83.55).
0dataset_mention
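A hedged tagging sketch for the Flair-based borrowing detector above; the repo id is inferred from the card title, and the Flair API details can vary slightly across versions:

```python
from flair.data import Sentence
from flair.models import SequenceTagger

# Repo id assumed from the card title.
tagger = SequenceTagger.load("lirondos/anglicisms-spanish-flair-cs")

sentence = Sentence("Las fake news sobre la influencer se emitieron en streaming.")
tagger.predict(sentence)

# Print the sentence with predicted ENG/OTHER borrowing tags attached.
print(sentence.to_tagged_string())
```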
Intended uses & limitations More information needed
1no_dataset_mention
Intended uses & limitations More information needed
1no_dataset_mention
Intended uses & limitations More information needed
1no_dataset_mention
Training data The ALBERT model was pretrained on [BookCorpus](https://yknzhu.wixsite.com/mbweb), a dataset consisting of 11,038 unpublished books and [English Wikipedia](https://en.wikipedia.org/wiki/English_Wikipedia) (excluding lists, tables and headers).
0dataset_mention
Training procedure
1no_dataset_mention
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->
1no_dataset_mention
Intended uses & limitations More information needed
1no_dataset_mention
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->
1no_dataset_mention
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->
1no_dataset_mention
Intended uses & limitations More information needed
1no_dataset_mention
Training procedure
1no_dataset_mention
Model description TAPAS is a BERT-like transformers model pretrained on a large corpus of English data from Wikipedia in a self-supervised fashion. This means it was pretrained on the raw tables and associated texts only, with no humans labelling them in any way (which is why it can use lots of publicly available data), with an automatic process to generate inputs and labels from those texts. More precisely, it was pretrained with two objectives: - Masked language modeling (MLM): taking a (flattened) table and associated context, the model randomly masks 15% of the words in the input, then runs the entire (partially masked) sequence through the model. The model then has to predict the masked words. This is different from traditional recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of a table and associated text. - Intermediate pre-training: to encourage numerical reasoning on tables, the authors additionally pre-trained the model by creating a balanced dataset of millions of syntactically created training examples. Here, the model must predict (classify) whether a sentence is supported or refuted by the contents of a table. The training examples are created based on synthetic as well as counterfactual statements. This way, the model learns an inner representation of the English language used in tables and associated texts, which can then be used to extract features useful for downstream tasks such as answering questions about a table, or determining whether a sentence is entailed or refuted by the contents of a table. Fine-tuning is done by adding one or more classification heads on top of the pre-trained model, and then jointly training these randomly initialized classification heads with the base model on a downstream task.
0dataset_mention
Intended uses & limitations More information needed
1no_dataset_mention
bart-cnn-science-v3-e4 This model is a fine-tuned version of [theojolliffe/bart-cnn-science](https://huggingface.co/theojolliffe/bart-cnn-science) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.8265 - Rouge1: 53.0296 - Rouge2: 33.4957 - Rougel: 35.8876 - Rougelsum: 50.0786 - Gen Len: 141.5926
0dataset_mention
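A hedged summarization sketch for the BART card above; the repo id is inferred from the model name and may differ from the actual path:

```python
from transformers import pipeline

# Repo id assumed from the model name in the card.
summarizer = pipeline("summarization", model="theojolliffe/bart-cnn-science-v3-e4")

article = (
    "The research team evaluated the new battery chemistry over 1,000 charge cycles "
    "and reported a 12% improvement in energy density compared with the baseline cells."
)
print(summarizer(article, max_length=40, min_length=10, do_sample=False))
```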
Author Javanese GPT-2 Small IMDB Classifier was trained and evaluated by [Wilson Wongso](https://w11wo.github.io/). All computation and development were done on Google Colaboratory using their free GPU access.
0dataset_mention
Intended uses & limitations More information needed
1no_dataset_mention
Intended uses & limitations More information needed
1no_dataset_mention
Training procedure
1no_dataset_mention
Training procedure
1no_dataset_mention
Training data The BERT model was pretrained on [BookCorpus](https://yknzhu.wixsite.com/mbweb), a dataset consisting of 11,038 unpublished books and [English Wikipedia](https://en.wikipedia.org/wiki/English_Wikipedia) (excluding lists, tables and headers).
0dataset_mention
Training procedure
1no_dataset_mention
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->
1no_dataset_mention
Training procedure
1no_dataset_mention
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->
1no_dataset_mention
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->
1no_dataset_mention
This is a pretrained-from-scratch **T5v1.1 base** model (**247M** parameters) on the [t5x](https://github.com/google-research/t5x) platform. Training was performed on a clean 80GB Romanian text corpus for 4M steps with these [scripts](https://github.com/dumitrescustefan/t5x_models). The model was trained with an encoder sequence length of 512 and a decoder sequence length of 256. **!! IMPORTANT !!** This model was pretrained on the span corruption MLM task, meaning this model is **not usable** in any downstream task **without finetuning** first!
0dataset_mention
Model Description This model was captured during a reproduction of [RoBERTa-base](https://huggingface.co/roberta-base), for English: it is a Transformers model pretrained on a large corpus of English data using the Masked Language Modelling (MLM) objective. The intended uses, limitations, training data and training procedure for the fully trained model are similar to [RoBERTa-base](https://huggingface.co/roberta-base). Two major differences from the original model: * We trained our model for 100K steps, instead of 500K * We only use Wikipedia and the Book Corpus, as these corpora are publicly available.
0dataset_mention
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->
1no_dataset_mention
Training data The Meta AI team wanted to train this model on a corpus as large as possible. It is composed of the union of the following 5 filtered datasets of textual documents: - BookCorpus, which consists of more than 10K unpublished books, - CC-Stories, which contains a subset of CommonCrawl data filtered to match the story-like style of Winograd schemas, - The Pile, from which *Pile-CC, OpenWebText2, USPTO, Project Gutenberg, OpenSubtitles, Wikipedia, DM Mathematics and HackerNews* were included, - the Pushshift.io Reddit dataset that was developed in Baumgartner et al. (2020) and processed in Roller et al. (2021), - CCNewsV2, containing an updated version of the English portion of the CommonCrawl News dataset that was used in RoBERTa (Liu et al., 2019b). The final training data contains 180B tokens corresponding to 800GB of data. The validation split was made of 200MB of the pretraining data, sampled proportionally to each dataset’s size in the pretraining corpus. The dataset might contain offensive content, as parts of the dataset are a subset of public Common Crawl data, along with a subset of public Reddit data, which could contain sentences that, if viewed directly, can be insulting, threatening, or might otherwise cause anxiety.
0dataset_mention
Intended uses & limitations More information needed
1no_dataset_mention
RoBERTa, Intermediate Checkpoint - Epoch 17 This model is part of our reimplementation of the [RoBERTa model](https://arxiv.org/abs/1907.11692), trained on Wikipedia and the Book Corpus only. We trained this model for almost 100K steps, corresponding to 83 epochs. We provide all 84 checkpoints (including the randomly initialized weights before training) to enable the study of the training dynamics of such models, and other possible use-cases. These models were trained as part of a work that studies how simple statistics of the data, such as co-occurrences, affect model predictions; this work is described in the paper [Measuring Causal Effects of Data Statistics on Language Model's `Factual' Predictions](https://arxiv.org/abs/2207.14251). This is RoBERTa-base epoch_17.
0dataset_mention
Intended uses & limitations More information needed
1no_dataset_mention
Intended uses & limitations More information needed
1no_dataset_mention
Romanian paraphrase ![v1.0](https://img.shields.io/badge/V.1-03.08.2022-brightgreen) A fine-tuned t5-small model for paraphrasing in Romanian. Since there was no Romanian dataset for paraphrasing, I had to create my own [dataset](https://huggingface.co/datasets/BlackKakapo/paraphrase-ro-v1). The dataset contains ~60k examples.
0dataset_mention
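A hedged generation sketch for the Romanian paraphrase card above; the repo id below is a guess, since the card only names the dataset, so substitute the actual fine-tuned checkpoint:

```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

# Hypothetical repo id; replace with the actual fine-tuned t5-small checkpoint.
model_name = "BlackKakapo/t5-small-paraphrase-ro"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

text = "Acesta este un exemplu de propozitie in limba romana."
inputs = tokenizer(text, return_tensors="pt")
outputs = model.generate(**inputs, max_length=64, num_beams=4, num_return_sequences=2)
for seq in outputs:
    print(tokenizer.decode(seq, skip_special_tokens=True))
```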
Intended uses & limitations More information needed
1no_dataset_mention
Training procedure
1no_dataset_mention
Training procedure
1no_dataset_mention
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->
1no_dataset_mention
Intended uses & limitations More information needed
1no_dataset_mention
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->
1no_dataset_mention
Intended uses & limitations More information needed
1no_dataset_mention
Intended uses & limitations More information needed
1no_dataset_mention
Intended uses & limitations More information needed
1no_dataset_mention
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->
1no_dataset_mention
Training procedure
1no_dataset_mention
Intended uses & limitations More information needed
1no_dataset_mention
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->
1no_dataset_mention
Training procedure
1no_dataset_mention
Training procedure
1no_dataset_mention
Training procedure
1no_dataset_mention
Intended uses & limitations More information needed
1no_dataset_mention
Training procedure
1no_dataset_mention
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->
1no_dataset_mention