modelId (string) | lastModified (string) | tags (sequence) | pipeline_tag (string, 21 classes) | files (sequence) | publishedBy (string) | downloads_last_month (int32) | library (string, 15 classes) | modelCard (string)
---|---|---|---|---|---|---|---|---|
CogComp/bart-faithful-summary-detector | 2021-06-13T17:18:36.000Z | [
"pytorch",
"jax",
"bart",
"text-classification",
"en",
"dataset:xsum",
"transformers",
"xsum",
"license:cc-by-sa-4.0"
] | text-classification | [
".gitattributes",
"README.md",
"config.json",
"flax_model.msgpack",
"merges.txt",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"vocab.json"
] | CogComp | 127 | transformers | ---
language:
- en
thumbnail: https://cogcomp.seas.upenn.edu/images/logo.png
tags:
- text-classification
- bart
- xsum
license: cc-by-sa-4.0
datasets:
- xsum
widget:
- text: "<s> Ban Ki-moon was elected for a second term in 2007. </s></s> Ban Ki-Moon was re-elected for a second term by the UN General Assembly, unopposed and unanimously, on 21 June 2011."
- text: "<s> Ban Ki-moon was elected for a second term in 2011. </s></s> Ban Ki-Moon was re-elected for a second term by the UN General Assembly, unopposed and unanimously, on 21 June 2011."
---
# bart-faithful-summary-detector
## Model description
A BART (base) model trained to classify whether a summary is *faithful* to the original article. See our [paper in NAACL'21](https://www.seas.upenn.edu/~sihaoc/static/pdf/CZSR21.pdf) for details.
## Usage
Concatenate a summary and a source document as input (note that the summary needs to be the **first** sentence).
Here's an example usage (with PyTorch):
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
tokenizer = AutoTokenizer.from_pretrained("CogComp/bart-faithful-summary-detector")
model = AutoModelForSequenceClassification.from_pretrained("CogComp/bart-faithful-summary-detector")
article = "Ban Ki-Moon was re-elected for a second term by the UN General Assembly, unopposed and unanimously, on 21 June 2011."
bad_summary = "Ban Ki-moon was elected for a second term in 2007."
good_summary = "Ban Ki-moon was elected for a second term in 2011."
bad_pair = tokenizer(text=bad_summary, text_pair=article, return_tensors='pt')
good_pair = tokenizer(text=good_summary, text_pair=article, return_tensors='pt')
bad_score = model(**bad_pair)
good_score = model(**good_pair)
print(good_score[0][:, 1] > bad_score[0][:, 1]) # True; label mapping: "0" -> "Hallucinated", "1" -> "Faithful"
```
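To turn the two logits into a calibrated score, softmax them and read off the probability of the "Faithful" class. A small sketch (not from the original card, assuming the label mapping noted above):
```python
import torch

# probability that the summary is faithful (label 1)
faithful_prob = torch.softmax(good_score[0], dim=-1)[:, 1]
print(faithful_prob)
```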
### BibTeX entry and citation info
```bibtex
@inproceedings{CZSR21,
author = {Sihao Chen and Fan Zhang and Kazoo Sone and Dan Roth},
title = {{Improving Faithfulness in Abstractive Summarization with Contrast Candidate Generation and Selection}},
booktitle = {NAACL},
year = {2021}
}
``` |
ComCom-Dev/gpt2-bible-test | 2021-04-28T06:38:53.000Z | [] | [
".gitattributes"
] | ComCom-Dev | 0 | |||
Connor-tech/bert_cn_finetuning | 2021-05-18T17:47:09.000Z | [
"pytorch",
"jax",
"bert",
"text-classification",
"transformers"
] | text-classification | [
".gitattributes",
"config.json",
"eval_results.txt",
"flax_model.msgpack",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"training_args.bin",
"vocab.txt"
] | Connor-tech | 54 | transformers | |
Contrastive-Tension/BERT-Base-CT-STSb | 2021-05-18T17:48:15.000Z | [
"pytorch",
"tf",
"jax",
"bert",
"transformers"
] | [
".gitattributes",
"config.json",
"flax_model.msgpack",
"pytorch_model.bin",
"special_tokens_map.json",
"tf_model.h5",
"tokenizer_config.json",
"vocab.txt"
] | Contrastive-Tension | 21 | transformers | ||
Contrastive-Tension/BERT-Base-CT | 2021-05-18T17:49:20.000Z | [
"pytorch",
"tf",
"jax",
"bert",
"masked-lm",
"transformers",
"fill-mask"
] | fill-mask | [
".gitattributes",
"config.json",
"flax_model.msgpack",
"pytorch_model.bin",
"special_tokens_map.json",
"tf_model.h5",
"tokenizer_config.json",
"vocab.txt"
] | Contrastive-Tension | 21 | transformers | |
Contrastive-Tension/BERT-Base-NLI-CT | 2021-05-18T17:50:20.000Z | [
"pytorch",
"tf",
"jax",
"bert",
"masked-lm",
"transformers",
"fill-mask"
] | fill-mask | [
".gitattributes",
"config.json",
"flax_model.msgpack",
"pytorch_model.bin",
"special_tokens_map.json",
"tf_model.h5",
"tokenizer_config.json",
"vocab.txt"
] | Contrastive-Tension | 8 | transformers | |
Contrastive-Tension/BERT-Base-Swe-CT-STSb | 2021-05-18T17:51:43.000Z | [
"pytorch",
"tf",
"jax",
"bert",
"transformers"
] | [
".gitattributes",
"config.json",
"flax_model.msgpack",
"pytorch_model.bin",
"special_tokens_map.json",
"tf_model.h5",
"tokenizer_config.json",
"vocab.txt"
] | Contrastive-Tension | 221 | transformers | ||
Contrastive-Tension/BERT-Distil-CT-STSb | 2021-02-23T19:38:16.000Z | [
"pytorch",
"tf",
"distilbert",
"transformers"
] | [
".gitattributes",
"config.json",
"pytorch_model.bin",
"special_tokens_map.json",
"tf_model.h5",
"tokenizer_config.json",
"vocab.txt"
] | Contrastive-Tension | 10 | transformers | ||
Contrastive-Tension/BERT-Distil-CT | 2021-02-10T19:01:42.000Z | [
"pytorch",
"tf",
"distilbert",
"masked-lm",
"transformers",
"fill-mask"
] | fill-mask | [
".gitattributes",
"config.json",
"pytorch_model.bin",
"special_tokens_map.json",
"tf_model.h5",
"tokenizer_config.json",
"vocab.txt"
] | Contrastive-Tension | 11 | transformers | |
Contrastive-Tension/BERT-Distil-NLI-CT | 2021-02-10T19:24:22.000Z | [
"pytorch",
"tf",
"distilbert",
"masked-lm",
"transformers",
"fill-mask"
] | fill-mask | [
".gitattributes",
"config.json",
"pytorch_model.bin",
"special_tokens_map.json",
"tf_model.h5",
"tokenizer_config.json",
"vocab.txt"
] | Contrastive-Tension | 11 | transformers | |
Contrastive-Tension/BERT-Large-CT-STSb | 2021-05-18T17:56:58.000Z | [
"pytorch",
"tf",
"jax",
"bert",
"transformers"
] | [
".gitattributes",
"config.json",
"flax_model.msgpack",
"pytorch_model.bin",
"special_tokens_map.json",
"tf_model.h5",
"tokenizer_config.json",
"vocab.txt"
] | Contrastive-Tension | 10,276 | transformers | ||
Contrastive-Tension/BERT-Large-CT | 2021-05-18T18:00:51.000Z | [
"pytorch",
"tf",
"jax",
"bert",
"masked-lm",
"transformers",
"fill-mask"
] | fill-mask | [
".gitattributes",
"config.json",
"flax_model.msgpack",
"pytorch_model.bin",
"special_tokens_map.json",
"tf_model.h5",
"tokenizer_config.json",
"vocab.txt"
] | Contrastive-Tension | 11 | transformers | |
Contrastive-Tension/BERT-Large-NLI-CT | 2021-05-18T18:04:22.000Z | [
"pytorch",
"tf",
"jax",
"bert",
"masked-lm",
"transformers",
"fill-mask"
] | fill-mask | [
".gitattributes",
"config.json",
"flax_model.msgpack",
"pytorch_model.bin",
"special_tokens_map.json",
"tf_model.h5",
"tokenizer_config.json",
"vocab.txt"
] | Contrastive-Tension | 16 | transformers | |
Contrastive-Tension/RoBerta-Large-CT-STSb | 2021-05-20T11:41:18.000Z | [
"pytorch",
"tf",
"jax",
"roberta",
"transformers"
] | [
".gitattributes",
"config.json",
"flax_model.msgpack",
"merges.txt",
"pytorch_model.bin",
"special_tokens_map.json",
"tf_model.h5",
"tokenizer_config.json",
"vocab.json"
] | Contrastive-Tension | 20 | transformers | ||
Cooker/cicero-similis | 2021-02-24T00:13:22.000Z | [] | [
".gitattributes"
] | Cooker | 0 | |||
Coolhand/Abuela | 2021-05-27T20:25:00.000Z | [
"en",
"image_restoration",
"superresolution",
"license:mit license"
] | [
".gitattributes",
"README.md"
] | Coolhand | 0 | ---
language:
- en
thumbnail: https://github.com/Nick-Harvey/for_my_abuela/blob/master/cuban_large.jpg
tags:
- image_restoration
- superresolution
license: MIT License
datasets:
metrics:
---
@inproceedings{wan2020bringing,
title={Bringing Old Photos Back to Life},
author={Wan, Ziyu and Zhang, Bo and Chen, Dongdong and Zhang, Pan and Chen, Dong and Liao, Jing and Wen, Fang},
booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
pages={2747--2757},
year={2020}
}
@article{wan2020old,
title={Old Photo Restoration via Deep Latent Space Translation},
author={Wan, Ziyu and Zhang, Bo and Chen, Dongdong and Zhang, Pan and Chen, Dong and Liao, Jing and Wen, Fang},
journal={arXiv preprint arXiv:2009.07047},
year={2020}
}
|
||
Coolhand/Sentiment | 2021-05-18T00:33:41.000Z | [] | [
".gitattributes",
"README.md"
] | Coolhand | 0 | |||
CouchCat/ma_mlc_v7_distil | 2021-02-17T08:17:07.000Z | [
"pytorch",
"distilbert",
"text-classification",
"en",
"transformers",
"license:mit",
"multi-label"
] | text-classification | [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"vocab.txt"
] | CouchCat | 7 | transformers | ---
language: en
license: mit
tags:
- multi-label
widget:
- text: "I would like to return these pants and shoes"
---
### Description
A multi-label text classification model trained on customer feedback data using DistilBERT.
Possible labels are:
- Delivery (delivery status, time of arrival, etc.)
- Return (return confirmation, return label requests, etc.)
- Product (quality, complaint, etc.)
- Monetary (pending transactions, refund, etc.)
### Usage
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
tokenizer = AutoTokenizer.from_pretrained("CouchCat/ma_mlc_v7_distil")
model = AutoModelForSequenceClassification.from_pretrained("CouchCat/ma_mlc_v7_distil")
``` |
CouchCat/ma_ner_v6_distil | 2021-02-15T23:32:46.000Z | [
"pytorch",
"distilbert",
"token-classification",
"en",
"transformers",
"license:mit",
"ner"
] | token-classification | [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"vocab.txt"
] | CouchCat | 18 | transformers | ---
language: en
license: mit
tags:
- ner
widget:
- text: "These shoes from Adidas fit quite well"
---
### Description
A named entity recognition model trained on customer feedback data using DistilBERT.
Possible labels are:
- PRD: for certain products
- BRND: for brands
### Usage
```python
from transformers import AutoTokenizer, AutoModelForTokenClassification
tokenizer = AutoTokenizer.from_pretrained("CouchCat/ma_ner_v6_distil")
model = AutoModelForTokenClassification.from_pretrained("CouchCat/ma_ner_v6_distil")
``` |
CouchCat/ma_ner_v7_distil | 2021-02-28T20:54:46.000Z | [
"pytorch",
"distilbert",
"token-classification",
"en",
"transformers",
"license:mit",
"ner"
] | token-classification | [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"vocab.txt"
] | CouchCat | 52 | transformers | ---
language: en
license: mit
tags:
- ner
widget:
- text: "These shoes I recently bought from Tommy Hilfiger fit quite well. The shirt, however, has got a hole"
---
### Description
A named entity recognition model trained on customer feedback data using DistilBERT.
Possible labels use BIO notation. Performance on the PERS tag could be better because of the low number of training samples:
- PROD: for certain products
- BRND: for brands
- PERS: people names
The following tags are simply in place to help better categorize the previous tags:
- MATR: relating to materials, e.g. cloth, leather, seam, etc.
- TIME: time related entities
- MISC: any other entity that might skew the results
### Usage
```python
from transformers import AutoTokenizer, AutoModelForTokenClassification
tokenizer = AutoTokenizer.from_pretrained("CouchCat/ma_ner_v7_distil")
model = AutoModelForTokenClassification.from_pretrained("CouchCat/ma_ner_v7_distil")
```
|
CouchCat/ma_sa_v7_distil | 2021-02-15T23:19:57.000Z | [
"pytorch",
"distilbert",
"text-classification",
"en",
"transformers",
"license:mit",
"sentiment-analysis"
] | text-classification | [
".DS_Store",
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"vocab.txt"
] | CouchCat | 13 | transformers | ---
language: en
license: mit
tags:
- sentiment-analysis
widget:
- text: "I am disappointed in the terrible quality of my dress"
---
### Description
A sentiment analysis model trained on customer feedback data using DistilBERT.
Possible sentiments are:
* negative
* neutral
* positive
### Usage
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
tokenizer = AutoTokenizer.from_pretrained("CouchCat/ma_sa_v7_distil")
model = AutoModelForSequenceClassification.from_pretrained("CouchCat/ma_sa_v7_distil")
``` |
CrayonShinchan/bart_fine_tune_test | 2021-04-13T11:03:35.000Z | [] | [
".gitattributes"
] | CrayonShinchan | 0 | |||
CrayonShinchan/fine_tune_try_1 | 2021-04-13T12:22:00.000Z | [] | [
".gitattributes"
] | CrayonShinchan | 0 | |||
CuongLD/wav2vec2-large-xlsr-vietnamese | 2021-03-26T07:03:50.000Z | [
"pytorch",
"wav2vec2",
"vi",
"dataset:common_voice, infore_25h",
"arxiv:2006.11477",
"arxiv:2006.13979",
"transformers",
"audio",
"automatic-speech-recognition",
"speech",
"xlsr-fine-tuning-week",
"license:apache-2.0"
] | automatic-speech-recognition | [
".gitattributes",
"README.md",
"config.json",
"preprocessor_config.json",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"vocab.json"
] | CuongLD | 24 | transformers | ---
language: vi
datasets:
- common_voice, infore_25h
metrics:
- wer
tags:
- audio
- automatic-speech-recognition
- speech
- xlsr-fine-tuning-week
license: apache-2.0
model-index:
- name: Cuong-Cong XLSR Wav2Vec2 Large 53
results:
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice vi
type: common_voice
args: vi
metrics:
- name: Test WER
type: wer
value: 58.63
---
# Wav2Vec2-Large-XLSR-53-Vietnamese
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Vietnamese using the [Common Voice](https://huggingface.co/datasets/common_voice) and [Infore_25h](https://files.huylenguyen.com/25hours.zip) (password: BroughtToYouByInfoRe) datasets.
When using this model, make sure that your speech input is sampled at 16kHz.
## Usage
The model can be used directly (without a language model) as follows:
```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
test_dataset = load_dataset("common_voice", "vi", split="test[:2%]")
processor = Wav2Vec2Processor.from_pretrained("CuongLD/wav2vec2-large-xlsr-vietnamese")
model = Wav2Vec2ForCTC.from_pretrained("CuongLD/wav2vec2-large-xlsr-vietnamese")
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset["sentence"][:2])
```
## Evaluation
The model can be evaluated as follows on the Vietnamese test data of Common Voice.
```python
import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import re
test_dataset = load_dataset("common_voice", "vi", split="test")
wer = load_metric("wer")
processor = Wav2Vec2Processor.from_pretrained("CuongLD/wav2vec2-large-xlsr-vietnamese")
model = Wav2Vec2ForCTC.from_pretrained("CuongLD/wav2vec2-large-xlsr-vietnamese")
model.to("cuda")
chars_to_ignore_regex = '[\,\?\.\!\-\;\:\"\“]'
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower()
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
# Evaluate: run batched inference and collect the predicted transcriptions
def evaluate(batch):
inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits
pred_ids = torch.argmax(logits, dim=-1)
batch["pred_strings"] = processor.batch_decode(pred_ids)
return batch
result = test_dataset.map(evaluate, batched=True, batch_size=8)
print("WER: {:2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"])))
```
**Test Result**: 58.63 %
## Training
The Common Voice `train` and `validation` splits, together with the `Infore_25h` dataset, were used for training.
The script used for training can be found [here](https://drive.google.com/file/d/1AW9R8IlsapiSGh9n3aECf23t-zhk3wUh/view?usp=sharing)
Your model is then available under *huggingface.co/CuongLD/wav2vec2-large-xlsr-vietnamese* for everybody to use 🎉.
## How to evaluate my trained checkpoint
Having uploaded your model, you should now evaluate your model in a final step. This should be as simple as
copying the evaluation code of your model card into a python script and running it. Make sure to note
the final result on the model card **both** under the YAML tags at the very top **and** below your evaluation code under "Test Results".
## Rules of training and evaluation
In this section, we will quickly go over what data is allowed to be used as training
data, what kind of data preprocessing is allowed to be used, and how the model should be evaluated.
To make it very simple regarding the first point: **All data except the official common voice `test` data set can be used as training data**. For models trained in a language that is not included in Common Voice, the author of the model is responsible for
leaving a reasonable amount of data for evaluation.
Second, the rules regarding preprocessing are not as straightforward. It is allowed (and recommended) to
normalize the data to only have lower-case characters. It is also allowed (and recommended) to remove typographical
symbols and punctuation marks. A list of such symbols can *e.g.* be found [here](https://en.wikipedia.org/wiki/List_of_typographical_symbols_and_punctuation_marks) - however, here we must already be careful. We should **not** remove a symbol that
would change the meaning of the words, *e.g.* in English, we should not remove the single quotation mark `'` since it
would change the meaning of the word `"it's"` to `"its"`, which would then be incorrect. So the golden rule here is to
not remove any characters that could change the meaning of a word into another word. This is not always obvious and should
be given some consideration. As another example, it is fine to remove the "hyphen-minus" sign "`-`" since it doesn't change the
meaning of a word to another one. *E.g.* "`fine-tuning`" would be changed to "`finetuning`", which still has the same meaning.
Since those choices are not always obvious, when in doubt feel free to ask on Slack or, even better, post on the forum, as was
done, *e.g.* [here](https://discuss.huggingface.co/t/spanish-asr-fine-tuning-wav2vec2/4586).
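A small sketch of such a normalization step, reusing the regex style of the evaluation snippet above (the exact character set is up to you):
```python
import re

# note: no apostrophe in this set, since removing it would change word meanings
chars_to_ignore_regex = '[\,\?\.\!\-\;\:\"\“]'

def normalize(text):
    return re.sub(chars_to_ignore_regex, '', text).lower()

print(normalize("Fine-tuning, it's great!"))  # -> "finetuning it's great"
```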
## Tips and tricks
This section summarizes a couple of tips and tricks across various topics. It will continuously be updated during the week.
### How to combine multiple datasets into one
Check out [this](https://discuss.huggingface.co/t/how-to-combine-local-data-files-with-an-official-dataset/4685) post.
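A minimal sketch with the `datasets` library, shown here for the Common Voice splits (combining with an external corpus such as Infore_25h works the same way once the columns match):
```python
from datasets import load_dataset, concatenate_datasets

train = load_dataset("common_voice", "vi", split="train")
valid = load_dataset("common_voice", "vi", split="validation")

# one combined training set, as used for this model
combined = concatenate_datasets([train, valid])
print(len(train), len(valid), len(combined))
```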
### How to effectively preprocess the data
### How to do efficiently load datasets with limited ram and hard drive space
Check out [this](https://discuss.huggingface.co/t/german-asr-fine-tuning-wav2vec2/4558/8?u=patrickvonplaten) post.
### How to do hyperparameter tuning
### How to preprocess and evaluate character based languages
## Further reading material
It is recommended that you take some time to read up on how Wav2Vec2 works in theory.
Getting a better understanding of the theory and the inner mechanisms of the model often helps when fine-tuning the model.
**However**, if you don't like reading blog posts/papers, don't worry - it is by no means necessary to go through the theory to fine-tune Wav2Vec2 on your language of choice.
If you are interested in learning more about the model though, here are a couple of resources that are important to better understand Wav2Vec2:
- [Facebook's Wav2Vec2 blog post](https://ai.facebook.com/blog/wav2vec-state-of-the-art-speech-recognition-through-self-supervision/)
- [Official Wav2Vec2 paper](https://arxiv.org/abs/2006.11477)
- [Official XLSR Wav2vec2 paper](https://arxiv.org/pdf/2006.13979.pdf)
- [Hugging Face Blog](https://huggingface.co/blog/fine-tune-xlsr-wav2vec2)
- [How does CTC (Connectionist Temporal Classification) work](https://distill.pub/2017/ctc/)
It helps to have a good understanding of the following points:
- How was XLSR-Wav2Vec2 pretrained? -> Feature vectors were masked and had to be predicted by the model; very similar in spirit to masked language model of BERT.
- What parts of XLSR-Wav2Vec2 are responsible for what? What is the feature extractor part used for? -> extract feature vectors from the 1D raw audio waveform; What is the transformer part doing? -> mapping feature vectors to contextualized feature vectors; ...
- What part of the model needs to be fine-tuned? -> The pretrained model **does not** include a language head to classify the contextualized features to letters. This is randomly initialized when loading the pretrained checkpoint and has to be fine-tuned. Also, note that the authors recommend to **not** further fine-tune the feature extractor.
- What data was used to pretrain XLSR-Wav2Vec2? The checkpoint we will use for further fine-tuning was pretrained on **53** languages.
- What languages are considered to be similar by XLSR-Wav2Vec2? In the official [XLSR Wav2Vec2 paper](https://arxiv.org/pdf/2006.13979.pdf), the authors show nicely which languages share a common contextualized latent space. It might be useful for you to extend your training data with data of other languages that are considered to be very similar by the model (or you).
## FAQ
- Can a participant fine-tune models for more than one language?
Yes! A participant can fine-tune models in as many languages as she/he likes.
- Can a participant use extra data (apart from the common voice data)?
Yes! All data except the official common voice `test data` can be used for training.
If a participant wants to train a model on a language that is not part of Common Voice (which
is very much encouraged!), the participant should make sure that some test data is held out to
make sure the model is not overfitting.
- Can we fine-tune for high-resource languages?
Yes! We do not really recommend fine-tuning models in English, since there are
already so many fine-tuned speech recognition models in English. However, it is very much
appreciated if participants want to fine-tune models in other "high-resource" languages, such
as French, Spanish, or German. For such cases, one probably needs to train locally and
might have to apply tricks such as lazy data loading (check the ["Lazy data loading"](#how-to-do-lazy-data-loading) section for more details).
|
DHBaek/gpt2-stackoverflow-question-contents-generator | 2021-06-15T02:18:56.000Z | [
"pytorch",
"gpt2",
"lm-head",
"causal-lm",
"transformers",
"text-generation"
] | text-generation | [
".gitattributes",
"added_tokens.json",
"config.json",
"merges.txt",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"vocab.json"
] | DHBaek | 91 | transformers | |
DHBaek/xlm-roberta-large-korquad-mask | 2021-05-15T05:07:50.000Z | [
"pytorch",
"xlm-roberta",
"question-answering",
"transformers"
] | question-answering | [
".gitattributes",
"config.json",
"pytorch_model.bin",
"sentencepiece.bpe.model",
"special_tokens_map.json",
"tokenizer_config.json"
] | DHBaek | 34 | transformers | |
DJSammy/bert-base-danish-uncased_BotXO,ai | 2021-05-19T11:13:30.000Z | [
"pytorch",
"jax",
"da",
"dataset:common_crawl",
"dataset:wikipedia",
"transformers",
"bert",
"masked-lm",
"license:cc-by-4.0",
"fill-mask",
"pipeline_tag:fill-mask"
] | fill-mask | [
".gitattributes",
"README.md",
"config.json",
"flax_model.msgpack",
"pytorch_model.bin",
"vocab.txt"
] | DJSammy | 11,489 | transformers | ---
language: da
tags:
- bert
- masked-lm
license: cc-by-4.0
datasets:
- common_crawl
- wikipedia
pipeline_tag: fill-mask
widget:
- text: "København er [MASK] i Danmark."
---
# Danish BERT (uncased) model
[BotXO.ai](https://www.botxo.ai/) developed this model. For data and training details see their [GitHub repository](https://github.com/botxo/nordic_bert).
The original model was trained in TensorFlow; I then converted it to PyTorch using [transformers-cli](https://huggingface.co/transformers/converting_tensorflow_models.html?highlight=cli).
The TensorFlow version can be downloaded here: https://www.dropbox.com/s/19cjaoqvv2jicq9/danish_bert_uncased_v2.zip?dl=1
## Architecture
```python
from transformers import AutoModelForPreTraining
model = AutoModelForPreTraining.from_pretrained("DJSammy/bert-base-danish-uncased_BotXO,ai")
params = list(model.named_parameters())
print('danish_bert_uncased_v2 has {:} different named parameters.\n'.format(len(params)))
print('==== Embedding Layer ====\n')
for p in params[0:5]:
print("{:<55} {:>12}".format(p[0], str(tuple(p[1].size()))))
print('\n==== First Transformer ====\n')
for p in params[5:21]:
print("{:<55} {:>12}".format(p[0], str(tuple(p[1].size()))))
print('\n==== Last Transformer ====\n')
for p in params[181:197]:
print("{:<55} {:>12}".format(p[0], str(tuple(p[1].size()))))
print('\n==== Output Layer ====\n')
for p in params[197:]:
print("{:<55} {:>12}".format(p[0], str(tuple(p[1].size()))))
# danish_bert_uncased_v2 has 206 different named parameters.
# ==== Embedding Layer ====
# bert.embeddings.word_embeddings.weight (32000, 768)
# bert.embeddings.position_embeddings.weight (512, 768)
# bert.embeddings.token_type_embeddings.weight (2, 768)
# bert.embeddings.LayerNorm.weight (768,)
# bert.embeddings.LayerNorm.bias (768,)
# ==== First Transformer ====
# bert.encoder.layer.0.attention.self.query.weight (768, 768)
# bert.encoder.layer.0.attention.self.query.bias (768,)
# bert.encoder.layer.0.attention.self.key.weight (768, 768)
# bert.encoder.layer.0.attention.self.key.bias (768,)
# bert.encoder.layer.0.attention.self.value.weight (768, 768)
# bert.encoder.layer.0.attention.self.value.bias (768,)
# bert.encoder.layer.0.attention.output.dense.weight (768, 768)
# bert.encoder.layer.0.attention.output.dense.bias (768,)
# bert.encoder.layer.0.attention.output.LayerNorm.weight (768,)
# bert.encoder.layer.0.attention.output.LayerNorm.bias (768,)
# bert.encoder.layer.0.intermediate.dense.weight (3072, 768)
# bert.encoder.layer.0.intermediate.dense.bias (3072,)
# bert.encoder.layer.0.output.dense.weight (768, 3072)
# bert.encoder.layer.0.output.dense.bias (768,)
# bert.encoder.layer.0.output.LayerNorm.weight (768,)
# bert.encoder.layer.0.output.LayerNorm.bias (768,)
# ==== Last Transformer ====
# bert.encoder.layer.11.attention.self.query.weight (768, 768)
# bert.encoder.layer.11.attention.self.query.bias (768,)
# bert.encoder.layer.11.attention.self.key.weight (768, 768)
# bert.encoder.layer.11.attention.self.key.bias (768,)
# bert.encoder.layer.11.attention.self.value.weight (768, 768)
# bert.encoder.layer.11.attention.self.value.bias (768,)
# bert.encoder.layer.11.attention.output.dense.weight (768, 768)
# bert.encoder.layer.11.attention.output.dense.bias (768,)
# bert.encoder.layer.11.attention.output.LayerNorm.weight (768,)
# bert.encoder.layer.11.attention.output.LayerNorm.bias (768,)
# bert.encoder.layer.11.intermediate.dense.weight (3072, 768)
# bert.encoder.layer.11.intermediate.dense.bias (3072,)
# bert.encoder.layer.11.output.dense.weight (768, 3072)
# bert.encoder.layer.11.output.dense.bias (768,)
# bert.encoder.layer.11.output.LayerNorm.weight (768,)
# bert.encoder.layer.11.output.LayerNorm.bias (768,)
# ==== Output Layer ====
# bert.pooler.dense.weight (768, 768)
# bert.pooler.dense.bias (768,)
# cls.predictions.bias (32000,)
# cls.predictions.transform.dense.weight (768, 768)
# cls.predictions.transform.dense.bias (768,)
# cls.predictions.transform.LayerNorm.weight (768,)
# cls.predictions.transform.LayerNorm.bias (768,)
# cls.seq_relationship.weight (2, 768)
# cls.seq_relationship.bias (2,)
```
## Example Pipeline
```python
from transformers import pipeline
unmasker = pipeline('fill-mask', model='DJSammy/bert-base-danish-uncased_BotXO,ai')
unmasker('København er [MASK] i Danmark.')
# Copenhagen is the [MASK] of Denmark.
# =>
# [{'score': 0.788068950176239,
# 'sequence': '[CLS] københavn er hovedstad i danmark. [SEP]',
# 'token': 12610,
# 'token_str': 'hovedstad'},
# {'score': 0.07606703042984009,
# 'sequence': '[CLS] københavn er hovedstaden i danmark. [SEP]',
# 'token': 8108,
# 'token_str': 'hovedstaden'},
# {'score': 0.04299738258123398,
# 'sequence': '[CLS] københavn er metropol i danmark. [SEP]',
# 'token': 23305,
# 'token_str': 'metropol'},
# {'score': 0.008163209073245525,
# 'sequence': '[CLS] københavn er ikke i danmark. [SEP]',
# 'token': 89,
# 'token_str': 'ikke'},
# {'score': 0.006238455418497324,
# 'sequence': '[CLS] københavn er ogsa i danmark. [SEP]',
# 'token': 25253,
# 'token_str': 'ogsa'}]
```
|
DJSammy/bert-base-swedish-uncased_BotXO,ai | 2020-10-25T03:42:06.000Z | [
"pytorch",
"transformers"
] | [
".gitattributes",
"config.json",
"pytorch_model.bin",
"vocab.txt"
] | DJSammy | 14 | transformers | ||
DTAI-KULeuven/mbert-corona-tweets-belgium-curfew-support | 2021-05-18T18:06:20.000Z | [
"pytorch",
"jax",
"bert",
"text-classification",
"multilingual",
"arxiv:2104.09947",
"transformers",
"Dutch",
"French",
"English",
"Tweets",
"Sentiment analysis"
] | text-classification | [
".gitattributes",
"README.md",
"config.json",
"flax_model.msgpack",
"pytorch_model.bin",
"tokenizer_config.json",
"vocab.txt"
] | DTAI-KULeuven | 23 | transformers | ---
language: "multilingual"
tags:
- Dutch
- French
- English
- Tweets
- Sentiment analysis
widget:
- text: "I really wish I could leave my house after midnight, this makes no sense!"
---
# Measuring Shifts in Attitudes Towards COVID-19 Measures in Belgium Using Multilingual BERT
[Blog post »](https://people.cs.kuleuven.be/~pieter.delobelle/attitudes-towards-covid-19-measures/?utm_source=huggingface&utm_medium=social&utm_campaign=corona_tweets) · [paper »](http://arxiv.org/abs/2104.09947)
This model can be used to determine if a tweet expresses support or not for a curfew. The model was trained on manually labeled tweets from Belgium in Dutch, French and English.
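A minimal usage sketch (model ID from this card; the exact label names come from the model config, so we just print the raw pipeline output):
```python
from transformers import pipeline

classifier = pipeline("text-classification",
                      model="DTAI-KULeuven/mbert-corona-tweets-belgium-curfew-support")
print(classifier("I really wish I could leave my house after midnight, this makes no sense!"))
```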
We categorized several months' worth of these Tweets by topic (government COVID measure) and opinion expressed. Below is a timeline of the relative number of Tweets on the curfew topic (middle) and the fraction of those Tweets that find the curfew too strict, too loose, or a suitable measure (bottom), with the number of daily cases in Belgium to give context on the pandemic situation (top).
![chart.png](https://github.com/iPieter/bert-corona-tweets/raw/master/chart.png)
Models used in this paper are on HuggingFace:
- https://huggingface.co/DTAI-KULeuven/mbert-corona-tweets-belgium-curfew-support
- https://huggingface.co/DTAI-KULeuven/mbert-corona-tweets-belgium-topics
|
DTAI-KULeuven/mbert-corona-tweets-belgium-topics | 2021-05-18T18:07:37.000Z | [
"pytorch",
"jax",
"bert",
"text-classification",
"multilingual",
"arxiv:2104.09947",
"transformers",
"Dutch",
"French",
"English",
"Tweets",
"Topic classification"
] | text-classification | [
".gitattributes",
"README.md",
"config.json",
"flax_model.msgpack",
"pytorch_model.bin",
"tokenizer_config.json",
"vocab.txt"
] | DTAI-KULeuven | 20 | transformers | ---
language: "multilingual"
tags:
- Dutch
- French
- English
- Tweets
- Topic classification
widget:
- text: "I really can't wait for this lockdown to be over and go back to waking up early."
---
# Measuring Shifts in Attitudes Towards COVID-19 Measures in Belgium Using Multilingual BERT
[Blog post »](https://people.cs.kuleuven.be/~pieter.delobelle/attitudes-towards-covid-19-measures/?utm_source=huggingface&utm_medium=social&utm_campaign=corona_tweets) · [paper »](http://arxiv.org/abs/2104.09947)
We categorized several months' worth of these Tweets by topic (government COVID measure) and opinion expressed. Below is a timeline of the relative number of Tweets on the curfew topic (middle) and the fraction of those Tweets that find the curfew too strict, too loose, or a suitable measure (bottom), with the number of daily cases in Belgium to give context on the pandemic situation (top).
![chart.png](https://github.com/iPieter/bert-corona-tweets/raw/master/chart.png)
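A minimal usage sketch, analogous to the curfew model above (model ID from this card; the topic label names come from the model config):
```python
from transformers import pipeline

classifier = pipeline("text-classification",
                      model="DTAI-KULeuven/mbert-corona-tweets-belgium-topics")
print(classifier("I really can't wait for this lockdown to be over and go back to waking up early."))
```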
Models used in this paper are on HuggingFace:
- https://huggingface.co/DTAI-KULeuven/mbert-corona-tweets-belgium-curfew-support
- https://huggingface.co/DTAI-KULeuven/mbert-corona-tweets-belgium-topics
|
DanBot/TCRsynth | 2021-03-08T15:35:57.000Z | [] | [
".gitattributes"
] | DanBot | 0 | |||
Darein/Def | 2021-05-04T14:45:06.000Z | [] | [
".gitattributes"
] | Darein | 0 | |||
DarkWolf/kn-electra-small | 2021-04-23T05:36:19.000Z | [
"pytorch",
"electra",
"transformers"
] | [
".gitattributes",
"config.json",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"vocab.json",
"vocab.txt"
] | DarkWolf | 7 | transformers | ||
Darkecho789/email-gen | 2021-01-26T23:08:01.000Z | [] | [
".gitattributes"
] | Darkecho789 | 0 | |||
Darkrider/covidbert_medmarco | 2021-05-18T18:08:55.000Z | [
"pytorch",
"jax",
"bert",
"text-classification",
"arxiv:2010.05987",
"transformers"
] | text-classification | [
".gitattributes",
"README.md",
"config.json",
"flax_model.msgpack",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"vocab.txt"
] | Darkrider | 84 | transformers | Fine-tuned CovidBERT on the Med-MARCO dataset for passage ranking
# CovidBERT-MedNLI
This is the model **CovidBERT** trained by DeepSet on AllenAI's [CORD19 Dataset](https://pages.semanticscholar.org/coronavirus-research) of scientific articles about coronaviruses.
The model uses the original BERT wordpiece vocabulary and was subsequently fine-tuned on the [SNLI](https://nlp.stanford.edu/projects/snli/) and the [MultiNLI](https://www.nyu.edu/projects/bowman/multinli/) datasets using the [`sentence-transformers` library](https://github.com/UKPLab/sentence-transformers/) to produce universal sentence embeddings [1] using the **average pooling strategy** and a **softmax loss**.
It is further fine-tuned on the Med-MARCO dataset. MacAvaney et al., in their [paper](https://arxiv.org/abs/2010.05987) titled “SLEDGE-Z: A Zero-Shot Baseline for COVID-19 Literature Search”, used MedSyn, a lexicon of layperson and expert terminology for various medical conditions, to filter for medical questions. One could also replace this with UMLS ontologies, but the beauty of MedSyn is that its terms reflect general human conversation rather than scientific literature.
Parameter details for the original training on CORD-19 are available on [DeepSet's MLFlow](https://public-mlflow.deepset.ai/#/experiments/2/runs/ba27d00c30044ef6a33b1d307b4a6cba)
**Base model**: `deepset/covid_bert_base` from HuggingFace's `AutoModel`.
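A hedged usage sketch for passage ranking, assuming the model scores a (query, passage) pair like a standard BERT cross-encoder re-ranker (an assumption; the card does not document the input format or label mapping):
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("Darkrider/covidbert_medmarco")
model = AutoModelForSequenceClassification.from_pretrained("Darkrider/covidbert_medmarco")

query = "What are the symptoms of COVID-19?"  # hypothetical query
passage = "Fever, cough and fatigue are commonly reported symptoms."  # hypothetical passage
inputs = tokenizer(query, passage, return_tensors="pt", truncation=True)
with torch.no_grad():
    logits = model(**inputs).logits
print(logits)  # relevance logits; interpretation depends on the (undocumented) label mapping
```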
|
Darkrider/covidbert_mednli | 2021-03-07T15:20:12.000Z | [
"transformers"
] | [
".gitattributes",
"README.md",
"config.json",
"modules.json",
"0_Transformer/config.json",
"0_Transformer/pytorch_model.bin",
"0_Transformer/sentence_bert_config.json",
"0_Transformer/special_tokens_map.json",
"0_Transformer/tokenizer_config.json",
"0_Transformer/vocab.txt",
"1_Pooling/config.json"
] | Darkrider | 14 | transformers | # CovidBERT-MedNLI
This is the model **CovidBERT** trained by DeepSet on AllenAI's [CORD19 Dataset](https://pages.semanticscholar.org/coronavirus-research) of scientific articles about coronaviruses.
The model uses the original BERT wordpiece vocabulary and was subsequently fine-tuned on the [SNLI](https://nlp.stanford.edu/projects/snli/) and the [MultiNLI](https://www.nyu.edu/projects/bowman/multinli/) datasets using the [`sentence-transformers` library](https://github.com/UKPLab/sentence-transformers/) to produce universal sentence embeddings [1] using the **average pooling strategy** and a **softmax loss**.
It is further fine-tuned on both MedNLI datasets available at Physionet.
[ACL-BIONLP 2019](https://physionet.org/content/mednli-bionlp19/1.0.1/)
[MedNLI from MIMIC](https://physionet.org/content/mednli/1.0.0/)
Parameter details for the original training on CORD-19 are available on [DeepSet's MLFlow](https://public-mlflow.deepset.ai/#/experiments/2/runs/ba27d00c30044ef6a33b1d307b4a6cba)
**Base model**: `deepset/covid_bert_base` from HuggingFace's `AutoModel`.
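Given the repository layout above (`0_Transformer`, `1_Pooling`, `modules.json`), the model follows the `sentence-transformers` module format. A minimal loading sketch (assuming the hub ID resolves with that library):
```python
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("Darkrider/covidbert_mednli")
embeddings = model.encode(["Coronaviruses are enveloped RNA viruses."])
print(embeddings.shape)
```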
|
|
Darren/darren | 2021-05-27T08:01:25.000Z | [] | [
".gitattributes",
"pytorch_model.zip"
] | Darren | 0 | |||
DarshanDeshpande/marathi-distilbert | 2021-03-23T08:20:29.000Z | [
"pytorch",
"tf",
"distilbert",
"masked-lm",
"mr",
"dataset:Oscar Corpus, News, Stories",
"arxiv:1910.01108",
"transformers",
"fill-mask",
"license:apache-2.0"
] | fill-mask | [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"special_tokens_map.json",
"tf_model.h5",
"tokenizer_config.json",
"vocab.txt"
] | DarshanDeshpande | 24 | transformers | ---
language:
- mr
tags:
- fill-mask
license: apache-2.0
datasets:
- Oscar Corpus, News, Stories
widget:
- text: "हा खरोखर चांगला [MASK] आहे."
---
# Marathi DistilBERT
## Model description
This model is an adaptation of DistilBERT (Victor Sanh et al., 2019) for the Marathi language. This version of Marathi-DistilBERT is trained from scratch on approximately 11.2 million sentences.
```
DISCLAIMER
This model has not been thoroughly tested and may contain biased opinions or inappropriate language. User discretion is advised
```
## Training data
The training data has been extracted from a variety of sources, mainly including:
1. Oscar Corpus
2. Marathi Newspapers
3. Marathi storybooks and articles
The data was cleaned by removing all text in languages other than Marathi, while preserving common punctuation.
## Training procedure
The model is trained from scratch using an Adam optimizer with a learning rate of 1e-4 (default β1 and β2 values of 0.9 and 0.999 respectively), a total batch size of 256, and a mask probability of 15% on a v3-8 TPU.
## Example
```python
from transformers import pipeline
fill_mask = pipeline(
"fill-mask",
model="DarshanDeshpande/marathi-distilbert",
tokenizer="DarshanDeshpande/marathi-distilbert",
)
fill_mask("हा खरोखर चांगला [MASK] आहे.")
```
### BibTeX entry and citation info
```bibtex
@misc{sanh2020distilbert,
title={DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter},
author={Victor Sanh and Lysandre Debut and Julien Chaumond and Thomas Wolf},
year={2020},
eprint={1910.01108},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
<h3>Authors </h3>
<h5>1. Darshan Deshpande: <a href="https://github.com/DarshanDeshpande">GitHub</a>, <a href="https://www.linkedin.com/in/darshan-deshpande/">LinkedIn</a><h5>
<h5>2. Harshavardhan Abichandani: <a href="https://github.com/Baras64">GitHub</a>, <a href="http://www.linkedin.com/in/harsh-abhi">LinkedIn</a><h5> |
Dave/twomad-model | 2021-04-14T00:38:35.000Z | [] | [
".gitattributes"
] | Dave | 0 | |||
Davlan/bert-base-multilingual-cased-finetuned-amharic | 2021-06-02T12:37:53.000Z | [
"pytorch",
"bert",
"masked-lm",
"am",
"transformers",
"fill-mask"
] | fill-mask | [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"training_args.bin",
"vocab.txt"
] | Davlan | 13 | transformers | ---
language: am
datasets:
---
# bert-base-multilingual-cased-finetuned-amharic
## Model description
**bert-base-multilingual-cased-finetuned-amharic** is an **Amharic BERT** model obtained by replacing the mBERT vocabulary with an Amharic vocabulary (since the language was not supported) and fine-tuning the **bert-base-multilingual-cased** model on Amharic language texts. It provides **better performance** than multilingual BERT on named entity recognition datasets.
Specifically, this model is a *bert-base-multilingual-cased* model that was fine-tuned on Amharic corpus using Amharic vocabulary.
## Intended uses & limitations
#### How to use
You can use this model with Transformers *pipeline* for masked token prediction.
```python
>>> from transformers import pipeline
>>> unmasker = pipeline('fill-mask', model='Davlan/bert-base-multilingual-cased-finetuned-amharic')
>>> unmasker("የአሜሪካ የአፍሪካ ቀንድ ልዩ መልዕክተኛ ጄፈሪ ፌልትማን በአራት አገራት የሚያደጉትን [MASK] መጀመራቸውን የአሜሪካ የውጪ ጉዳይ ሚንስቴር አስታወቀ።")
```
#### Limitations and bias
This model is limited by its training dataset of entity-annotated news articles from a specific span of time. This may not generalize well for all use cases in different domains.
## Training data
This model was fine-tuned on [Amharic CC-100](http://data.statmt.org/cc-100/)
## Training procedure
This model was trained on a single NVIDIA V100 GPU
## Eval results on Test set (F-score, average over 5 runs)
Dataset| mBERT F1 | am_bert F1
-|-|-
[MasakhaNER](https://github.com/masakhane-io/masakhane-ner) | 0.0 | 60.89
### BibTeX entry and citation info
By David Adelani
```
```
|
Davlan/bert-base-multilingual-cased-finetuned-hausa | 2021-05-20T17:54:54.000Z | [
"pytorch",
"jax",
"bert",
"masked-lm",
"ha",
"transformers",
"fill-mask"
] | fill-mask | [
".gitattributes",
"README.md",
"config.json",
"flax_model.msgpack",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"training_args.bin",
"vocab.txt"
] | Davlan | 63 | transformers | ---
language: ha
datasets:
---
# bert-base-multilingual-cased-finetuned-hausa
## Model description
**bert-base-multilingual-cased-finetuned-hausa** is a **Hausa BERT** model obtained by fine-tuning **bert-base-multilingual-cased** model on Hausa language texts. It provides **better performance** than the multilingual BERT on text classification and named entity recognition datasets.
Specifically, this model is a *bert-base-multilingual-cased* model that was fine-tuned on Hausa corpus.
## Intended uses & limitations
#### How to use
You can use this model with Transformers *pipeline* for masked token prediction.
```python
>>> from transformers import pipeline
>>> unmasker = pipeline('fill-mask', model='Davlan/bert-base-multilingual-cased-finetuned-hausa')
>>> unmasker("Shugaban [MASK] Muhammadu Buhari ya amince da shawarar da ma’aikatar sufuri karkashin jagoranci")
[{'sequence':
'[CLS] Shugaban Nigeria Muhammadu Buhari ya amince da shawarar da ma [UNK] aikatar sufuri karkashin jagoranci [SEP]',
'score': 0.9762618541717529,
'token': 22045,
'token_str': 'Nigeria'},
{'sequence': '[CLS] Shugaban Ka Muhammadu Buhari ya amince da shawarar da ma [UNK] aikatar sufuri karkashin jagoranci [SEP]', 'score': 0.007239189930260181,
'token': 25444,
'token_str': 'Ka'},
{'sequence': '[CLS] Shugaban, Muhammadu Buhari ya amince da shawarar da ma [UNK] aikatar sufuri karkashin jagoranci [SEP]', 'score': 0.001990817254409194,
'token': 117,
'token_str': ','},
{'sequence': '[CLS] Shugaban Ghana Muhammadu Buhari ya amince da shawarar da ma [UNK] aikatar sufuri karkashin jagoranci [SEP]', 'score': 0.001566368737258017,
'token': 28682,
'token_str': 'Ghana'},
{'sequence': '[CLS] Shugabanmu Muhammadu Buhari ya amince da shawarar da ma [UNK] aikatar sufuri karkashin jagoranci [SEP]', 'score': 0.0009375187801197171,
'token': 11717,
'token_str': '##mu'}]
```
#### Limitations and bias
This model is limited by its training dataset of entity-annotated news articles from a specific span of time. This may not generalize well for all use cases in different domains.
## Training data
This model was fine-tuned on [Hausa CC-100](http://data.statmt.org/cc-100/)
## Training procedure
This model was trained on a single NVIDIA V100 GPU
## Eval results on Test set (F-score, average over 5 runs)
Dataset| mBERT F1 | ha_bert F1
-|-|-
[MasakhaNER](https://github.com/masakhane-io/masakhane-ner) | 86.65 | 91.31
[VOA Hausa Textclass](https://huggingface.co/datasets/hausa_voa_topics) | 84.76 | 90.98
### BibTeX entry and citation info
By David Adelani
```
```
|
Davlan/bert-base-multilingual-cased-finetuned-igbo | 2021-06-06T14:14:06.000Z | [
"pytorch",
"bert",
"masked-lm",
"ig",
"transformers",
"fill-mask"
] | fill-mask | [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"training_args.bin",
"vocab.txt"
] | Davlan | 6 | transformers | ---
language: ig
datasets:
---
# bert-base-multilingual-cased-finetuned-igbo
## Model description
**bert-base-multilingual-cased-finetuned-igbo** is a **Igbo BERT** model obtained by fine-tuning **bert-base-multilingual-cased** model on Igbo language texts. It provides **better performance** than the multilingual BERT on text classification and named entity recognition datasets.
Specifically, this model is a *bert-base-multilingual-cased* model that was fine-tuned on Igbo corpus.
## Intended uses & limitations
#### How to use
You can use this model with Transformers *pipeline* for masked token prediction.
```python
>>> from transformers import pipeline
>>> unmasker = pipeline('fill-mask', model='Davlan/bert-base-multilingual-cased-finetuned-igbo')
>>> unmasker("Reno Omokri na Gọọmentị [MASK] enweghị ihe ha ga-eji hiwe ya bụ mmachi.")
```
#### Limitations and bias
This model is limited by its training dataset of entity-annotated news articles from a specific span of time. This may not generalize well for all use cases in different domains.
## Training data
This model was fine-tuned on JW300 + OPUS CC-Align + [IGBO NLP Corpus](https://github.com/IgnatiusEzeani/IGBONLP) + [Igbo CC-100](http://data.statmt.org/cc-100/)
## Training procedure
This model was trained on a single NVIDIA V100 GPU
## Eval results on Test set (F-score, average over 5 runs)
Dataset| mBERT F1 | ig_bert F1
-|-|-
[MasakhaNER](https://github.com/masakhane-io/masakhane-ner) | 85.11 | 86.75
### BibTeX entry and citation info
By David Adelani
```
```
|
Davlan/bert-base-multilingual-cased-finetuned-kinyarwanda | 2021-06-15T20:11:29.000Z | [
"pytorch",
"bert",
"masked-lm",
"rw",
"transformers",
"fill-mask"
] | fill-mask | [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"training_args.bin",
"vocab.txt"
] | Davlan | 5 | transformers | ---
language: rw
datasets:
---
# bert-base-multilingual-cased-finetuned-kinyarwanda
## Model description
**bert-base-multilingual-cased-finetuned-kinyarwanda** is a **Kinyarwanda BERT** model obtained by fine-tuning **bert-base-multilingual-cased** model on Kinyarwanda language texts. It provides **better performance** than the multilingual BERT on named entity recognition datasets.
Specifically, this model is a *bert-base-multilingual-cased* model that was fine-tuned on Kinyarwanda corpus.
## Intended uses & limitations
#### How to use
You can use this model with Transformers *pipeline* for masked token prediction.
```python
>>> from transformers import pipeline
>>> unmasker = pipeline('fill-mask', model='Davlan/bert-base-multilingual-cased-finetuned-kinyarwanda')
>>> unmasker("Twabonye ko igihe mu [MASK] hazaba hari ikirango abantu bakunze")
```
#### Limitations and bias
This model is limited by its training dataset of entity-annotated news articles from a specific span of time. This may not generalize well for all use cases in different domains.
## Training data
This model was fine-tuned on JW300 + [KIRNEWS](https://github.com/Andrews2017/KINNEWS-and-KIRNEWS-Corpus) + [BBC Gahuza](https://www.bbc.com/gahuza)
## Training procedure
This model was trained on a single NVIDIA V100 GPU
## Eval results on Test set (F-score, average over 5 runs)
Dataset| mBERT F1 | rw_bert F1
-|-|-
[MasakhaNER](https://github.com/masakhane-io/masakhane-ner) | 72.20 | 77.57
### BibTeX entry and citation info
By David Adelani
```
```
|
Davlan/bert-base-multilingual-cased-finetuned-luganda | 2021-06-17T17:43:07.000Z | [
"pytorch",
"bert",
"masked-lm",
"lg",
"transformers",
"fill-mask"
] | fill-mask | [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"training_args.bin",
"vocab.txt"
] | Davlan | 5 | transformers | |
Davlan/bert-base-multilingual-cased-finetuned-naija | 2021-06-15T20:39:28.000Z | [
"pytorch",
"bert",
"masked-lm",
"pcm",
"transformers",
"fill-mask"
] | fill-mask | [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"training_args.bin",
"vocab.txt"
] | Davlan | 5 | transformers | ---
language: pcm
datasets:
---
# bert-base-multilingual-cased-finetuned-naija
## Model description
**bert-base-multilingual-cased-finetuned-naija** is a **Nigerian-Pidgin BERT** model obtained by fine-tuning **bert-base-multilingual-cased** model on Nigerian-Pidgin language texts. It provides **better performance** than the multilingual BERT on named entity recognition datasets.
Specifically, this model is a *bert-base-multilingual-cased* model that was fine-tuned on Nigerian-Pidgin corpus.
## Intended uses & limitations
#### How to use
You can use this model with Transformers *pipeline* for masked token prediction.
```python
>>> from transformers import pipeline
>>> unmasker = pipeline('fill-mask', model='Davlan/bert-base-multilingual-cased-finetuned-naija')
>>> unmasker("Another attack on ambulance happen for Koforidua in March [MASK] year where robbers kill Ambulance driver")
```
#### Limitations and bias
This model is limited by its training dataset of entity-annotated news articles from a specific span of time. This may not generalize well for all use cases in different domains.
## Training data
This model was fine-tuned on JW300 + [BBC Pidgin](https://www.bbc.com/pidgin)
## Training procedure
This model was trained on a single NVIDIA V100 GPU
## Eval results on Test set (F-score, average over 5 runs)
Dataset| mBERT F1 | pcm_bert F1
-|-|-
[MasakhaNER](https://github.com/masakhane-io/masakhane-ner) | 87.23 | 89.95
### BibTeX entry and citation info
By David Adelani
```
```
|
Davlan/bert-base-multilingual-cased-finetuned-swahili | 2021-05-20T18:43:07.000Z | [
"pytorch",
"bert",
"masked-lm",
"ha",
"transformers",
"fill-mask"
] | fill-mask | [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"training_args.bin",
"vocab.txt"
] | Davlan | 23 | transformers | ---
language: ha
datasets:
---
# bert-base-multilingual-cased-finetuned-swahili
## Model description
**bert-base-multilingual-cased-finetuned-swahili** is a **Swahili BERT** model obtained by fine-tuning **bert-base-multilingual-cased** model on Swahili language texts. It provides **better performance** than the multilingual BERT on text classification and named entity recognition datasets.
Specifically, this model is a *bert-base-multilingual-cased* model that was fine-tuned on Swahili corpus.
## Intended uses & limitations
#### How to use
You can use this model with Transformers *pipeline* for masked token prediction.
```python
>>> from transformers import pipeline
>>> unmasker = pipeline('fill-mask', model='Davlan/bert-base-multilingual-cased-finetuned-swahili')
>>> unmasker("Jumatatu, Bwana Kagame alielezea shirika la France24 huko [MASK] kwamba "hakuna uhalifu ulitendwa")
[{'sequence': 'Jumatatu, Bwana Kagame alielezea shirika la France24 huko Paris kwamba hakuna uhalifu ulitendwa',
'score': 0.31642526388168335,
'token': 10728,
'token_str': 'Paris'},
{'sequence': 'Jumatatu, Bwana Kagame alielezea shirika la France24 huko Rwanda kwamba hakuna uhalifu ulitendwa',
'score': 0.15753623843193054,
'token': 57557,
'token_str': 'Rwanda'},
{'sequence': 'Jumatatu, Bwana Kagame alielezea shirika la France24 huko Burundi kwamba hakuna uhalifu ulitendwa',
'score': 0.07211585342884064,
'token': 57824,
'token_str': 'Burundi'},
{'sequence': 'Jumatatu, Bwana Kagame alielezea shirika la France24 huko France kwamba hakuna uhalifu ulitendwa',
'score': 0.029844321310520172,
'token': 10688,
'token_str': 'France'},
{'sequence': 'Jumatatu, Bwana Kagame alielezea shirika la France24 huko Senegal kwamba hakuna uhalifu ulitendwa',
'score': 0.0265930388122797,
'token': 38052,
'token_str': 'Senegal'}]
```
#### Limitations and bias
This model is limited by its training dataset of entity-annotated news articles from a specific span of time. This may not generalize well for all use cases in different domains.
## Training data
This model was fine-tuned on [Swahili CC-100](http://data.statmt.org/cc-100/)
## Training procedure
This model was trained on a single NVIDIA V100 GPU
## Eval results on Test set (F-score, average over 5 runs)
Dataset| mBERT F1 | sw_bert F1
-|-|-
[MasakhaNER](https://github.com/masakhane-io/masakhane-ner) | 86.80 | 89.36
### BibTeX entry and citation info
By David Adelani
```
```
|
Davlan/bert-base-multilingual-cased-finetuned-yoruba | 2021-05-20T20:25:30.000Z | [
"pytorch",
"jax",
"bert",
"masked-lm",
"yo",
"transformers",
"fill-mask"
] | fill-mask | [
".gitattributes",
"README.md",
"config.json",
"flax_model.msgpack",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"training_args.bin",
"vocab.txt"
] | Davlan | 23 | transformers | ---
language: yo
datasets:
---
# bert-base-multilingual-cased-finetuned-yoruba
## Model description
**bert-base-multilingual-cased-finetuned-yoruba** is a **Yoruba BERT** model obtained by fine-tuning **bert-base-multilingual-cased** model on Yorùbá language texts. It provides **better performance** than the multilingual BERT on text classification and named entity recognition datasets.
Specifically, this model is a *bert-base-multilingual-cased* model that was fine-tuned on Yorùbá corpus.
## Intended uses & limitations
#### How to use
You can use this model with Transformers *pipeline* for masked token prediction.
```python
>>> from transformers import pipeline
>>> unmasker = pipeline('fill-mask', model='Davlan/bert-base-multilingual-cased-finetuned-yoruba')
>>> unmasker("Arẹmọ Phillip to jẹ ọkọ [MASK] Elizabeth to ti wa lori aisan ti dagbere faye lẹni ọdun mọkandilọgọrun")
[{'sequence': '[CLS] Arẹmọ Phillip to jẹ ọkọ Mary Elizabeth to ti wa lori aisan ti dagbere faye lẹni ọdun mọkandilọgọrun [SEP]', 'score': 0.1738305538892746,
'token': 12176,
'token_str': 'Mary'},
{'sequence': '[CLS] Arẹmọ Phillip to jẹ ọkọ Queen Elizabeth to ti wa lori aisan ti dagbere faye lẹni ọdun mọkandilọgọrun [SEP]', 'score': 0.16382873058319092,
'token': 13704,
'token_str': 'Queen'},
{'sequence': '[CLS] Arẹmọ Phillip to jẹ ọkọ ti Elizabeth to ti wa lori aisan ti dagbere faye lẹni ọdun mọkandilọgọrun [SEP]', 'score': 0.13272495567798615,
'token': 14382,
'token_str': 'ti'},
{'sequence': '[CLS] Arẹmọ Phillip to jẹ ọkọ King Elizabeth to ti wa lori aisan ti dagbere faye lẹni ọdun mọkandilọgọrun [SEP]', 'score': 0.12823280692100525,
'token': 11515,
'token_str': 'King'},
{'sequence': '[CLS] Arẹmọ Phillip to jẹ ọkọ Lady Elizabeth to ti wa lori aisan ti dagbere faye lẹni ọdun mọkandilọgọrun [SEP]', 'score': 0.07841219753026962,
'token': 14005,
'token_str': 'Lady'}]
```
#### Limitations and bias
This model is limited by its training dataset of entity-annotated news articles from a specific span of time. This may not generalize well for all use cases in different domains.
## Training data
This model was fine-tuned on Bible, JW300, [Menyo-20k](https://huggingface.co/datasets/menyo20k_mt), [Yoruba Embedding corpus](https://huggingface.co/datasets/yoruba_text_c3) and [CC-Aligned](https://opus.nlpl.eu/), Wikipedia, news corpora (BBC Yoruba, VON Yoruba, Asejere, Alaroye), and other small datasets curated from friends.
## Training procedure
This model was trained on a single NVIDIA V100 GPU
## Eval results on Test set (F-score, average over 5 runs)
Dataset| mBERT F1 | yo_bert F1
-|-|-
[MasakhaNER](https://github.com/masakhane-io/masakhane-ner) | 78.97 | 82.58
[BBC Yorùbá Textclass](https://huggingface.co/datasets/yoruba_bbc_topics) | 75.13 | 79.11
### BibTeX entry and citation info
By David Adelani
```
```
|
Davlan/mT5_base_yoruba_adr | 2021-04-20T21:16:26.000Z | [
"pytorch",
"mt5",
"seq2seq",
"yo",
"dataset:JW300 + [Menyo-20k](https://huggingface.co/datasets/menyo20k_mt)",
"arxiv:2003.10564",
"arxiv:2103.08647",
"transformers",
"text2text-generation"
] | text2text-generation | [
".gitattributes",
"README.md",
"config.json",
"model_args.json",
"pytorch_model.bin",
"special_tokens_map.json",
"spiece.model",
"tokenizer_config.json",
"training_args.bin"
] | Davlan | 29 | transformers |
---
language: yo
datasets:
- JW300 + [Menyo-20k](https://huggingface.co/datasets/menyo20k_mt)
---
# mT5_base_yoruba_adr
## Model description
**mT5_base_yoruba_adr** is an **automatic diacritics restoration** model for the Yorùbá language, based on a fine-tuned mT5-base model. It achieves **state-of-the-art performance** for adding the correct diacritics or tonal marks to Yorùbá texts.
Specifically, this model is a *mT5_base* model that was fine-tuned on JW300 Yorùbá corpus and [Menyo-20k](https://huggingface.co/datasets/menyo20k_mt)
## Intended uses & limitations
#### How to use
You can use this model with the Transformers library for ADR. A minimal sketch, mirroring the sibling mT5 cards (the undiacritized example sentence is illustrative):
```python
from transformers import MT5ForConditionalGeneration, T5Tokenizer
model = MT5ForConditionalGeneration.from_pretrained("Davlan/mT5_base_yoruba_adr")
tokenizer = T5Tokenizer.from_pretrained("google/mt5-base")
# A hypothetical undiacritized Yorùbá sentence whose diacritics should be restored
input_string = "Akoni ajijagbara obinrin to sun atimale tori owo ori"
inputs = tokenizer.encode(input_string, return_tensors="pt")
generated_tokens = model.generate(inputs)
results = tokenizer.batch_decode(generated_tokens, skip_special_tokens=True)
print(results)
```
#### Limitations and bias
This model is limited by its training dataset of entity-annotated news articles from a specific span of time. This may not generalize well for all use cases in different domains.
## Training data
This model was fine-tuned on the JW300 Yorùbá corpus and the [Menyo-20k](https://huggingface.co/datasets/menyo20k_mt) dataset
## Training procedure
This model was trained on a single NVIDIA V100 GPU
## Eval results on Test set (BLEU score)
64.63 BLEU on [Global Voices test set](https://arxiv.org/abs/2003.10564)
70.27 BLEU on [Menyo-20k test set](https://arxiv.org/abs/2103.08647)
### BibTeX entry and citation info
By Jesujoba Alabi and David Adelani
```
```
|
Davlan/mt5_base_eng_yor_mt | 2021-05-21T10:14:10.000Z | [
"pytorch",
"mt5",
"seq2seq",
"yo",
"en",
"dataset:JW300 + [Menyo-20k](https://huggingface.co/datasets/menyo20k_mt)",
"arxiv:2103.08647",
"transformers",
"text2text-generation"
] | text2text-generation | [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"special_tokens_map.json",
"spiece.model",
"tokenizer_config.json",
"trainer_state.json",
"training_args.bin"
] | Davlan | 36 | transformers |
---
language:
- yo
- en
datasets:
- JW300 + [Menyo-20k](https://huggingface.co/datasets/menyo20k_mt)
---
# mT5_base_eng_yor_mt
## Model description
**mT5_base_eng_yor_mt** is a **machine translation** model from English to Yorùbá, based on a fine-tuned mT5-base model. It establishes a **strong baseline** for automatically translating texts from English to Yorùbá.
Specifically, this model is a *mT5_base* model that was fine-tuned on JW300 Yorùbá corpus and [Menyo-20k](https://huggingface.co/datasets/menyo20k_mt)
## Intended uses & limitations
#### How to use
You can use this model with Transformers *pipeline* for MT.
```python
from transformers import MT5ForConditionalGeneration, T5Tokenizer
model = MT5ForConditionalGeneration.from_pretrained("Davlan/mt5_base_eng_yor_mt")
tokenizer = T5Tokenizer.from_pretrained("google/mt5-base")
input_string = "Where are you?"
inputs = tokenizer.encode(input_string, return_tensors="pt")
generated_tokens = model.generate(inputs)
results = tokenizer.batch_decode(generated_tokens, skip_special_tokens=True)
print(results)
```
#### Limitations and bias
This model is limited by its training dataset of entity-annotated news articles from a specific span of time. This may not generalize well for all use cases in different domains.
## Training data
This model was fine-tuned on the JW300 corpus and the [Menyo-20k](https://huggingface.co/datasets/menyo20k_mt) dataset
## Training procedure
This model was trained on a single NVIDIA V100 GPU
## Eval results on Test set (BLEU score)
9.82 BLEU on [Menyo-20k test set](https://arxiv.org/abs/2103.08647)
### BibTeX entry and citation info
By David Adelani
```
```
|
Davlan/mt5_base_yor_eng_mt | 2021-05-27T07:57:16.000Z | [
"pytorch",
"mt5",
"seq2seq",
"yo",
"en",
"dataset:JW300 + [Menyo-20k](https://huggingface.co/datasets/menyo20k_mt)",
"arxiv:2103.08647",
"transformers",
"text2text-generation"
] | text2text-generation | [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"special_tokens_map.json",
"spiece.model",
"tokenizer_config.json",
"trainer_state.json",
"training_args.bin"
] | Davlan | 566 | transformers |
---
language:
- yo
- en
datasets:
- JW300 + [Menyo-20k](https://huggingface.co/datasets/menyo20k_mt)
---
# mT5_base_yor_eng_mt
## Model description
**mT5_base_yor_eng_mt** is a **machine translation** model from Yorùbá language to English language based on a fine-tuned mT5-base model. It establishes a **strong baseline** for automatically translating texts from Yorùbá to English.
Specifically, this model is a *mT5_base* model that was fine-tuned on JW300 Yorùbá corpus and [Menyo-20k](https://huggingface.co/datasets/menyo20k_mt)
## Intended uses & limitations
#### How to use
You can use this model with Transformers *pipeline* for MT.
```python
from transformers import MT5ForConditionalGeneration, T5Tokenizer
model = MT5ForConditionalGeneration.from_pretrained("Davlan/mt5_base_yor_eng_mt")
tokenizer = T5Tokenizer.from_pretrained("google/mt5-base")
input_string = "Akọni ajìjàgbara obìnrin tó sun àtìmalé torí owó orí"
inputs = tokenizer.encode(input_string, return_tensors="pt")
generated_tokens = model.generate(inputs)
results = tokenizer.batch_decode(generated_tokens, skip_special_tokens=True)
print(results)
```
#### Limitations and bias
This model is limited by its training dataset of entity-annotated news articles from a specific span of time. This may not generalize well for all use cases in different domains.
## Training data
This model was fine-tuned on the JW300 Yorùbá corpus and the [Menyo-20k](https://huggingface.co/datasets/menyo20k_mt) dataset
## Training procedure
This model was trained on a single NVIDIA V100 GPU
## Eval results on Test set (BLEU score)
15.57 BLEU on [Menyo-20k test set](https://arxiv.org/abs/2103.08647)
### BibTeX entry and citation info
By David Adelani
```
```
|
Davlan/xlm-roberta-base-finetuned-amharic | 2021-06-05T20:37:25.000Z | [
"pytorch",
"xlm-roberta",
"masked-lm",
"am",
"transformers",
"fill-mask"
] | fill-mask | [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"sentencepiece.bpe.model",
"special_tokens_map.json",
"tokenizer_config.json",
"training_args.bin"
] | Davlan | 1 | transformers |
---
language: am
datasets:
---
# xlm-roberta-base-finetuned-amharic
## Model description
**xlm-roberta-base-finetuned-amharic** is an **Amharic RoBERTa** model obtained by fine-tuning the **xlm-roberta-base** model on Amharic language texts. It provides **better performance** than XLM-RoBERTa on named entity recognition datasets.
Specifically, this model is a *xlm-roberta-base* model that was fine-tuned on Amharic corpus.
## Intended uses & limitations
#### How to use
You can use this model with Transformers *pipeline* for masked token prediction.
```python
>>> from transformers import pipeline
>>> unmasker = pipeline('fill-mask', model='Davlan/xlm-roberta-base-finetuned-amharic')
>>> unmasker("የአሜሪካ የአፍሪካ ቀንድ ልዩ መልዕክተኛ ጄፈሪ ፌልትማን በአራት አገራት የሚያደጉትን <mask> መጀመራቸውን የአሜሪካ የውጪ ጉዳይ ሚንስቴር አስታወቀ።")
```
#### Limitations and bias
This model is limited by its training dataset of entity-annotated news articles from a specific span of time. This may not generalize well for all use cases in different domains.
## Training data
This model was fine-tuned on [Amharic CC-100](http://data.statmt.org/cc-100/)
## Training procedure
This model was trained on a single NVIDIA V100 GPU
## Eval results on Test set (F-score, average over 5 runs)
Dataset| XLM-R F1 | am_roberta F1
-|-|-
[MasakhaNER](https://github.com/masakhane-io/masakhane-ner) | 70.96 | 77.97
### BibTeX entry and citation info
By David Adelani
```
```
|
Davlan/xlm-roberta-base-finetuned-hausa | 2021-05-28T14:07:31.000Z | [
"pytorch",
"xlm-roberta",
"masked-lm",
"ha",
"transformers",
"fill-mask"
] | fill-mask | [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"sentencepiece.bpe.model",
"special_tokens_map.json",
"tokenizer_config.json",
"training_args.bin"
] | Davlan | 132 | transformers |
---
language: ha
datasets:
---
# xlm-roberta-base-finetuned-hausa
## Model description
**xlm-roberta-base-finetuned-hausa** is a **Hausa RoBERTa** model obtained by fine-tuning **xlm-roberta-base** model on Hausa language texts. It provides **better performance** than the XLM-RoBERTa on text classification and named entity recognition datasets.
Specifically, this model is a *xlm-roberta-base* model that was fine-tuned on Hausa corpus.
## Intended uses & limitations
#### How to use
You can use this model with Transformers *pipeline* for masked token prediction.
```python
>>> from transformers import pipeline
>>> unmasker = pipeline('fill-mask', model='Davlan/xlm-roberta-base-finetuned-hausa')
>>> unmasker("Shugaban <mask> Muhammadu Buhari ya amince da shawarar da ma’aikatar sufuri karkashin jagoranci")
[{'sequence': '<s> Shugaban kasa Muhammadu Buhari ya amince da shawarar da ma’aikatar sufuri karkashin jagoranci</s>',
'score': 0.8104371428489685,
'token': 29762,
'token_str': '▁kasa'},
{'sequence': '<s> Shugaban Najeriya Muhammadu Buhari ya amince da shawarar da ma’aikatar sufuri karkashin jagoranci</s>', 'score': 0.17371904850006104,
'token': 49173,
'token_str': '▁Najeriya'},
{'sequence': '<s> Shugaban kasar Muhammadu Buhari ya amince da shawarar da ma’aikatar sufuri karkashin jagoranci</s>', 'score': 0.006917025428265333,
'token': 21221,
'token_str': '▁kasar'},
{'sequence': '<s> Shugaban Nigeria Muhammadu Buhari ya amince da shawarar da ma’aikatar sufuri karkashin jagoranci</s>', 'score': 0.005785710643976927,
'token': 72620,
'token_str': '▁Nigeria'},
{'sequence': '<s> Shugaban Kasar Muhammadu Buhari ya amince da shawarar da ma’aikatar sufuri karkashin jagoranci</s>', 'score': 0.0010596115607768297,
'token': 170255,
'token_str': '▁Kasar'}]
```
#### Limitations and bias
This model is limited by its training dataset of entity-annotated news articles from a specific span of time. This may not generalize well for all use cases in different domains.
## Training data
This model was fine-tuned on [Hausa CC-100](http://data.statmt.org/cc-100/)
## Training procedure
This model was trained on a single NVIDIA V100 GPU
## Eval results on Test set (F-score, average over 5 runs)
Dataset| XLM-R F1 | ha_roberta F1
-|-|-
[MasakhaNER](https://github.com/masakhane-io/masakhane-ner) | 86.10 | 91.47
[VOA Hausa Textclass](https://huggingface.co/datasets/hausa_voa_topics) | |
### BibTeX entry and citation info
By David Adelani
```
```
|
Davlan/xlm-roberta-base-finetuned-igbo | 2021-06-06T20:13:58.000Z | [
"pytorch",
"xlm-roberta",
"masked-lm",
"ig",
"transformers",
"fill-mask"
] | fill-mask | [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"sentencepiece.bpe.model",
"special_tokens_map.json",
"tokenizer_config.json",
"training_args.bin"
] | Davlan | 6 | transformers |
---
language: ig
datasets:
---
# xlm-roberta-base-finetuned-igbo
## Model description
**xlm-roberta-base-finetuned-igbo** is an **Igbo RoBERTa** model obtained by fine-tuning the **xlm-roberta-base** model on Igbo language texts. It provides **better performance** than XLM-RoBERTa on named entity recognition datasets.
Specifically, this model is a *xlm-roberta-base* model that was fine-tuned on Igbo corpus.
## Intended uses & limitations
#### How to use
You can use this model with Transformers *pipeline* for masked token prediction.
```python
>>> from transformers import pipeline
>>> unmasker = pipeline('fill-mask', model='Davlan/xlm-roberta-base-finetuned-igbo')
>>> unmasker("Reno Omokri na Gọọmentị <mask> enweghị ihe ha ga-eji hiwe ya bụ mmachi.")
```
#### Limitations and bias
This model is limited by its training dataset of entity-annotated news articles from a specific span of time. This may not generalize well for all use cases in different domains.
## Training data
This model was fine-tuned on JW300 + OPUS CC-Align + [IGBO NLP Corpus](https://github.com/IgnatiusEzeani/IGBONLP) +[Igbo CC-100](http://data.statmt.org/cc-100/)
## Training procedure
This model was trained on a single NVIDIA V100 GPU
## Eval results on Test set (F-score, average over 5 runs)
Dataset| XLM-R F1 | ig_roberta F1
-|-|-
[MasakhaNER](https://github.com/masakhane-io/masakhane-ner) | 84.51 | 87.74
### BibTeX entry and citation info
By David Adelani
```
```
|
Davlan/xlm-roberta-base-finetuned-kinyarwanda | 2021-06-15T20:24:02.000Z | [
"pytorch",
"xlm-roberta",
"masked-lm",
"rw",
"transformers",
"fill-mask"
] | fill-mask | [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"sentencepiece.bpe.model",
"special_tokens_map.json",
"tokenizer_config.json",
"training_args.bin"
] | Davlan | 5 | transformers |
---
language: rw
datasets:
---
# xlm-roberta-base-finetuned-kinyarwanda
## Model description
**xlm-roberta-base-finetuned-kinyarwanda** is a **Kinyarwanda RoBERTa** model obtained by fine-tuning **xlm-roberta-base** model on Kinyarwanda language texts. It provides **better performance** than the XLM-RoBERTa on named entity recognition datasets.
Specifically, this model is a *xlm-roberta-base* model that was fine-tuned on Kinyarwanda corpus.
## Intended uses & limitations
#### How to use
You can use this model with Transformers *pipeline* for masked token prediction.
```python
>>> from transformers import pipeline
>>> unmasker = pipeline('fill-mask', model='Davlan/xlm-roberta-base-finetuned-kinyarwanda')
>>> unmasker("Twabonye ko igihe mu <mask> hazaba hari ikirango abantu bakunze")
```
#### Limitations and bias
This model is limited by its training dataset of entity-annotated news articles from a specific span of time. This may not generalize well for all use cases in different domains.
## Training data
This model was fine-tuned on JW300 + [KIRNEWS](https://github.com/Andrews2017/KINNEWS-and-KIRNEWS-Corpus) + [BBC Gahuza](https://www.bbc.com/gahuza)
## Training procedure
This model was trained on a single NVIDIA V100 GPU
## Eval results on Test set (F-score, average over 5 runs)
Dataset| XLM-R F1 | rw_roberta F1
-|-|-
[MasakhaNER](https://github.com/masakhane-io/masakhane-ner) | 73.22 | 77.76
### BibTeX entry and citation info
By David Adelani
```
```
|
Davlan/xlm-roberta-base-finetuned-luganda | 2021-06-17T17:25:57.000Z | [
"pytorch",
"xlm-roberta",
"masked-lm",
"lg",
"transformers",
"fill-mask"
] | fill-mask | [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"sentencepiece.bpe.model",
"special_tokens_map.json",
"tokenizer_config.json",
"training_args.bin"
] | Davlan | 5 | transformers | |
Davlan/xlm-roberta-base-finetuned-naija | 2021-06-15T21:33:37.000Z | [
"pytorch",
"xlm-roberta",
"masked-lm",
"pcm",
"transformers",
"fill-mask"
] | fill-mask | [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"sentencepiece.bpe.model",
"special_tokens_map.json",
"tokenizer_config.json",
"training_args.bin"
] | Davlan | 44 | transformers |
---
language: pcm
datasets:
---
# xlm-roberta-base-finetuned-naija
## Model description
**xlm-roberta-base-finetuned-naija** is a **Nigerian Pidgin RoBERTa** model obtained by fine-tuning **xlm-roberta-base** model on Nigerian Pidgin language texts. It provides **better performance** than the XLM-RoBERTa on named entity recognition datasets.
Specifically, this model is a *xlm-roberta-base* model that was fine-tuned on Nigerian Pidgin corpus.
## Intended uses & limitations
#### How to use
You can use this model with Transformers *pipeline* for masked token prediction.
```python
>>> from transformers import pipeline
>>> unmasker = pipeline('fill-mask', model='Davlan/xlm-roberta-base-finetuned-naija')
>>> unmasker("Another attack on ambulance happen for Koforidua in March <mask> year where robbers kill Ambulance driver")
```
#### Limitations and bias
This model is limited by its training dataset of entity-annotated news articles from a specific span of time. This may not generalize well for all use cases in different domains.
## Training data
This model was fine-tuned on JW300 + [BBC Pidgin](https://www.bbc.com/pidgin)
## Training procedure
This model was trained on a single NVIDIA V100 GPU
## Eval results on Test set (F-score, average over 5 runs)
Dataset| XLM-R F1 | pcm_roberta F1
-|-|-
[MasakhaNER](https://github.com/masakhane-io/masakhane-ner) | 87.26 | 90.00
### BibTeX entry and citation info
By David Adelani
```
```
|
Davlan/xlm-roberta-base-finetuned-swahili | 2021-05-28T14:12:32.000Z | [
"pytorch",
"xlm-roberta",
"masked-lm",
"sw",
"transformers",
"fill-mask"
] | fill-mask | [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"sentencepiece.bpe.model",
"special_tokens_map.json",
"tokenizer_config.json",
"training_args.bin"
] | Davlan | 15 | transformers |
---
language: sw
datasets:
---
# xlm-roberta-base-finetuned-swahili
## Model description
**xlm-roberta-base-finetuned-swahili** is a **Swahili RoBERTa** model obtained by fine-tuning **xlm-roberta-base** model on Swahili language texts. It provides **better performance** than the XLM-RoBERTa on text classification and named entity recognition datasets.
Specifically, this model is a *xlm-roberta-base* model that was fine-tuned on Swahili corpus.
## Intended uses & limitations
#### How to use
You can use this model with Transformers *pipeline* for masked token prediction.
```python
>>> from transformers import pipeline
>>> unmasker = pipeline('fill-mask', model='Davlan/xlm-roberta-base-finetuned-swahili')
>>> unmasker("Jumatatu, Bwana Kagame alielezea shirika la France24 huko <mask> kwamba hakuna uhalifu ulitendwa")
[{'sequence': 'Jumatatu, Bwana Kagame alielezea shirika la France24 huko Ufaransa kwamba hakuna uhalifu ulitendwa',
'score': 0.5077782273292542,
'token': 190096,
'token_str': 'Ufaransa'},
{'sequence': 'Jumatatu, Bwana Kagame alielezea shirika la France24 huko Paris kwamba hakuna uhalifu ulitendwa',
'score': 0.3657738268375397,
'token': 7270,
'token_str': 'Paris'},
{'sequence': 'Jumatatu, Bwana Kagame alielezea shirika la France24 huko Gabon kwamba hakuna uhalifu ulitendwa',
'score': 0.01592041552066803,
'token': 176392,
'token_str': 'Gabon'},
{'sequence': 'Jumatatu, Bwana Kagame alielezea shirika la France24 huko France kwamba hakuna uhalifu ulitendwa',
'score': 0.010881908237934113,
'token': 9942,
'token_str': 'France'},
{'sequence': 'Jumatatu, Bwana Kagame alielezea shirika la France24 huko Marseille kwamba hakuna uhalifu ulitendwa',
'score': 0.009554869495332241,
'token': 185918,
'token_str': 'Marseille'}]
```
#### Limitations and bias
This model is limited by its training dataset of entity-annotated news articles from a specific span of time. This may not generalize well for all use cases in different domains.
## Training data
This model was fine-tuned on [Swahili CC-100](http://data.statmt.org/cc-100/)
## Training procedure
This model was trained on a single NVIDIA V100 GPU
## Eval results on Test set (F-score, average over 5 runs)
Dataset| XLM-R F1 | sw_roberta F1
-|-|-
[MasakhaNER](https://github.com/masakhane-io/masakhane-ner) | 87.55 | 89.46
### BibTeX entry and citation info
By David Adelani
```
```
|
Davlan/xlm-roberta-base-finetuned-yoruba | 2021-05-28T13:53:56.000Z | [
"pytorch",
"xlm-roberta",
"masked-lm",
"yo",
"transformers",
"fill-mask"
] | fill-mask | [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"sentencepiece.bpe.model",
"special_tokens_map.json",
"tokenizer_config.json",
"training_args.bin"
] | Davlan | 9 | transformers |
---
language: yo
datasets:
---
# xlm-roberta-base-finetuned-yoruba
## Model description
**xlm-roberta-base-finetuned-yoruba** is a **Yoruba RoBERTa** model obtained by fine-tuning **xlm-roberta-base** model on Yorùbá language texts. It provides **better performance** than the XLM-RoBERTa on text classification and named entity recognition datasets.
Specifically, this model is a *xlm-roberta-base* model that was fine-tuned on Yorùbá corpus.
## Intended uses & limitations
#### How to use
You can use this model with Transformers *pipeline* for masked token prediction.
```python
>>> from transformers import pipeline
>>> unmasker = pipeline('fill-mask', model='Davlan/xlm-roberta-base-finetuned-yoruba')
>>> unmasker("Arẹmọ Phillip to jẹ ọkọ <mask> Elizabeth to ti wa lori aisan ti dagbere faye lẹni ọdun mọkandilọgọrun")
[{'sequence': '<s> Arẹmọ Phillip to jẹ ọkọ Queen Elizabeth to ti wa lori aisan ti dagbere faye lẹni ọdun mọkandilọgọrun</s>', 'score': 0.24844281375408173,
'token': 44109,
'token_str': '▁Queen'},
{'sequence': '<s> Arẹmọ Phillip to jẹ ọkọ ile Elizabeth to ti wa lori aisan ti dagbere faye lẹni ọdun mọkandilọgọrun</s>', 'score': 0.1665010154247284,
'token': 1350,
'token_str': '▁ile'},
{'sequence': '<s> Arẹmọ Phillip to jẹ ọkọ ti Elizabeth to ti wa lori aisan ti dagbere faye lẹni ọdun mọkandilọgọrun</s>', 'score': 0.07604238390922546,
'token': 1053,
'token_str': '▁ti'},
{'sequence': '<s> Arẹmọ Phillip to jẹ ọkọ baba Elizabeth to ti wa lori aisan ti dagbere faye lẹni ọdun mọkandilọgọrun</s>', 'score': 0.06353845447301865,
'token': 12878,
'token_str': '▁baba'},
{'sequence': '<s> Arẹmọ Phillip to jẹ ọkọ Oba Elizabeth to ti wa lori aisan ti dagbere faye lẹni ọdun mọkandilọgọrun</s>', 'score': 0.03836742788553238,
'token': 82879,
'token_str': '▁Oba'}]
```
#### Limitations and bias
This model is limited by its training dataset of entity-annotated news articles from a specific span of time. This may not generalize well for all use cases in different domains.
## Training data
This model was fine-tuned on Bible, JW300, [Menyo-20k](https://huggingface.co/datasets/menyo20k_mt), [Yoruba Embedding corpus](https://huggingface.co/datasets/yoruba_text_c3) and [CC-Aligned](https://opus.nlpl.eu/), Wikipedia, news corpora (BBC Yoruba, VON Yoruba, Asejere, Alaroye), and other small datasets curated from friends.
## Training procedure
This model was trained on a single NVIDIA V100 GPU
## Eval results on Test set (F-score, average over 5 runs)
Dataset| XLM-R F1 | yo_roberta F1
-|-|-
[MasakhaNER](https://github.com/masakhane-io/masakhane-ner) | 77.58 | 83.66
[BBC Yorùbá Textclass](https://huggingface.co/datasets/yoruba_bbc_topics) | |
### BibTeX entry and citation info
By David Adelani
```
```
|
Davlan/xlm-roberta-large-masakhaner | 2021-04-05T17:43:25.000Z | [
"pytorch",
"xlm-roberta",
"token-classification",
"amh",
"hau",
"ibo",
"kin",
"lug",
"luo",
"pcm",
"swa",
"wol",
"yor",
"multilingual",
"dataset:masakhaner",
"arxiv:2103.11811",
"transformers"
] | token-classification | [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"sentencepiece.bpe.model",
"special_tokens_map.json",
"tokenizer_config.json",
"training_args.bin"
] | Davlan | 250 | transformers |
---
language:
- amh
- hau
- ibo
- kin
- lug
- luo
- pcm
- swa
- wol
- yor
- multilingual
datasets:
- masakhaner
---
# xlm-roberta-large-masakhaner
## Model description
**xlm-roberta-large-masakhaner** is the first **Named Entity Recognition** model for 10 African languages (Amharic, Hausa, Igbo, Kinyarwanda, Luganda, Luo, Nigerian Pidgin, Swahili, Wolof, and Yorùbá), based on a fine-tuned XLM-RoBERTa large model. It achieves **state-of-the-art performance** for the NER task. It has been trained to recognize four types of entities: dates & times (DATE), location (LOC), organizations (ORG), and person (PER).
Specifically, this model is a *xlm-roberta-large* model that was fine-tuned on an aggregation of African language datasets obtained from Masakhane [MasakhaNER](https://github.com/masakhane-io/masakhane-ner) dataset.
## Intended uses & limitations
#### How to use
You can use this model with Transformers *pipeline* for NER.
```python
from transformers import AutoTokenizer, AutoModelForTokenClassification
from transformers import pipeline
tokenizer = AutoTokenizer.from_pretrained("Davlan/xlm-roberta-large-masakhaner")
model = AutoModelForTokenClassification.from_pretrained("Davlan/xlm-roberta-large-masakhaner")
nlp = pipeline("ner", model=model, tokenizer=tokenizer)
example = "Emir of Kano turban Zhang wey don spend 18 years for Nigeria"
ner_results = nlp(example)
print(ner_results)
```
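Depending on the installed `transformers` version, the sub-token predictions can also be merged into whole entity spans; the `aggregation_strategy` flag below exists in v4+ pipelines (treat this as an assumption about your environment):
```python
# Merge B-/I- word-piece predictions into whole entities (transformers v4+)
nlp = pipeline("ner", model=model, tokenizer=tokenizer, aggregation_strategy="simple")
print(nlp(example))
```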
#### Limitations and bias
This model is limited by its training dataset of entity-annotated news articles from a specific span of time. This may not generalize well for all use cases in different domains.
## Training data
This model was fine-tuned on 10 African NER datasets (Amharic, Hausa, Igbo, Kinyarwanda, Luganda, Luo, Nigerian Pidgin, Swahili, Wolof, and Yorùbá) from the Masakhane [MasakhaNER](https://github.com/masakhane-io/masakhane-ner) dataset
The training dataset distinguishes between the beginning and continuation of an entity so that if there are back-to-back entities of the same type, the model can output where the second entity begins. As in the dataset, each token will be classified as one of the following classes:
Abbreviation|Description
-|-
O|Outside of a named entity
B-DATE |Beginning of a DATE entity right after another DATE entity
I-DATE |DATE entity
B-PER |Beginning of a person’s name right after another person’s name
I-PER |Person’s name
B-ORG |Beginning of an organisation right after another organisation
I-ORG |Organisation
B-LOC |Beginning of a location right after another location
I-LOC |Location
## Training procedure
This model was trained on a single NVIDIA V100 GPU with recommended hyperparameters from the [original MasakhaNER paper](https://arxiv.org/abs/2103.11811) which trained & evaluated the model on MasakhaNER corpus.
## Eval results on Test set (F-score)
language|F1-score
-|-
amh |75.76
hau |91.75
ibo |86.26
kin |76.38
lug |84.64
luo |80.65
pcm |89.55
swa |89.48
wol |70.70
yor |82.05
### BibTeX entry and citation info
```
@misc{adelani2021masakhaner,
title={MasakhaNER: Named Entity Recognition for African Languages},
author={David Ifeoluwa Adelani and Jade Abbott and Graham Neubig and Daniel D'souza and Julia Kreutzer and Constantine Lignos and Chester Palen-Michel and Happy Buzaaba and Shruti Rijhwani and Sebastian Ruder and Stephen Mayhew and Israel Abebe Azime and Shamsuddeen Muhammad and Chris Chinenye Emezue and Joyce Nakatumba-Nabende and Perez Ogayo and Anuoluwapo Aremu and Catherine Gitau and Derguene Mbaye and Jesujoba Alabi and Seid Muhie Yimam and Tajuddeen Gwadabe and Ignatius Ezeani and Rubungo Andre Niyongabo and Jonathan Mukiibi and Verrah Otiende and Iroro Orife and Davis David and Samba Ngom and Tosin Adewumi and Paul Rayson and Mofetoluwa Adeyemi and Gerald Muriuki and Emmanuel Anebi and Chiamaka Chukwuneke and Nkiruka Odu and Eric Peter Wairagala and Samuel Oyerinde and Clemencia Siro and Tobius Saul Bateesa and Temilola Oloyede and Yvonne Wambui and Victor Akinode and Deborah Nabagereka and Maurice Katusiime and Ayodele Awokoya and Mouhamadane MBOUP and Dibora Gebreyohannes and Henok Tilaye and Kelechi Nwaike and Degaga Wolde and Abdoulaye Faye and Blessing Sibanda and Orevaoghene Ahia and Bonaventure F. P. Dossou and Kelechi Ogueji and Thierno Ibrahima DIOP and Abdoulaye Diallo and Adewale Akinfaderin and Tendai Marengereke and Salomey Osei},
year={2021},
eprint={2103.11811},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
DeBERTa/deberta-v2-xxlarge | 2021-02-11T00:07:10.000Z | [] | [
".gitattributes"
] | DeBERTa | 0 | |||
DeepChem/SmilesTokenizer_PubChem_1M | 2021-05-31T20:54:05.000Z | [
"pytorch",
"roberta",
"transformers"
] | [
".gitattributes",
"README.md",
"added_tokens.json",
"config.json",
"merges.txt",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer.json",
"tokenizer_config.json",
"vocab.json"
] | DeepChem | 3 | transformers | RoBERTa model trained on 1M SMILES strings from the PubChem 77M set in MoleculeNet. Uses the SMILES tokenizer.
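A minimal loading sketch with the transformers library (the feature-extraction use and the example SMILES string are illustrative assumptions, not from the original card):
```python
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("DeepChem/SmilesTokenizer_PubChem_1M")
model = AutoModel.from_pretrained("DeepChem/SmilesTokenizer_PubChem_1M")

# Encode a molecule (aspirin, as SMILES) and extract token-level features
inputs = tokenizer("CC(=O)Oc1ccccc1C(=O)O", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)
print(outputs.last_hidden_state.shape)
```
|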
|
DeepESP/gpt2-spanish | 2021-06-08T13:59:22.000Z | [
"pytorch",
"tf",
"jax",
"gpt2",
"lm-head",
"causal-lm",
"transformers",
"text-generation"
] | text-generation | [
".gitattributes",
"README.md",
"config.json",
"flax_model.msgpack",
"merges.txt",
"pytorch_model.bin",
"special_tokens_map.json",
"tf_model.h5",
"tokenizer_config.json",
"vocab.json"
] | DeepESP | 219 | transformers | # GPT2-Spanish
GPT2-Spanish is a language generation model trained from scratch with 11.5GB of Spanish texts and with a Byte Pair Encoding (BPE) tokenizer that was trained for this purpose. The parameters used are the same as the small version of the original OpenAI GPT2 model.
## Corpus
This model was trained with a corpus of 11.5GB of texts corresponding to 3.5GB of Wikipedia articles and 8GB of books (narrative, short stories, theater, poetry, essays, and popularization).
## Tokenizer
The texts are tokenized using a byte-level version of Byte Pair Encoding (BPE) (for Unicode characters) and a vocabulary size of 50257. The inputs are sequences of 1024 consecutive tokens.
This tokenizer was trained from scratch on the Spanish corpus, since it was found that the tokenizer of the English models had limitations in capturing the semantic relations of Spanish, due to the morphosyntactic differences between the two languages.
Apart from the special token "<|endoftext|>" used for text endings in the OpenAI GPT-2 models, the tokens "<|talk|>", "<|ax1|>", "<|ax2|>", …, "<|ax9|>" were included so that they can serve as prompts in future training.
## Training
The model and tokenizer were trained using the Hugging Face libraries with an Nvidia Tesla V100 GPU with 16GB memory on Google Colab servers.
## Authors
The authors of this model have been anonymized because the work is currently under review for publication at INLG 2021.
Thanks to the members of the community who collaborated with funding for the initial tests.
## Cautions
The model generates texts according to the patterns learned in the training corpus. These data were not filtered, therefore, the model could generate offensive or discriminatory content.
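A minimal generation sketch with the Transformers pipeline (the Spanish prompt and generation settings are illustrative):
```python
from transformers import pipeline

generator = pipeline('text-generation', model='DeepESP/gpt2-spanish')
# Generate a short continuation of a Spanish prompt ("Once upon a time")
print(generator('Había una vez', max_length=50, num_return_sequences=1))
```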
|
DeepPavlov/bert-base-bg-cs-pl-ru-cased | 2021-05-18T18:14:05.000Z | [
"pytorch",
"jax",
"bert",
"bg",
"cs",
"pl",
"ru",
"transformers"
] | [
".gitattributes",
"README.md",
"config.json",
"flax_model.msgpack",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"vocab.txt"
] | DeepPavlov | 580 | transformers | ---
language:
- bg
- cs
- pl
- ru
---
# bert-base-bg-cs-pl-ru-cased
SlavicBERT\[1\] \(Slavic \(bg, cs, pl, ru\), cased, 12‑layer, 768‑hidden, 12‑heads, 180M parameters\) was trained on Russian News and four Wikipedias: Bulgarian, Czech, Polish, and Russian. Subtoken vocabulary was built using this data. Multilingual BERT was used as an initialization for SlavicBERT.
\[1\]: Arkhipov M., Trofimova M., Kuratov Y., Sorokin A. \(2019\). [Tuning Multilingual Transformers for Language-Specific Named Entity Recognition](https://www.aclweb.org/anthology/W19-3712/). ACL anthology W19-3712.
|
|
DeepPavlov/bert-base-cased-conversational | 2021-05-18T18:15:12.000Z | [
"pytorch",
"jax",
"bert",
"en",
"transformers"
] | [
".gitattributes",
"README.md",
"config.json",
"flax_model.msgpack",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"vocab.txt"
] | DeepPavlov | 1,183 | transformers | ---
language: en
---
# bert-base-cased-conversational
Conversational BERT \(English, cased, 12‑layer, 768‑hidden, 12‑heads, 110M parameters\) was trained on the English part of Twitter, Reddit, DailyDialogues\[1\], OpenSubtitles\[2\], Debates\[3\], Blogs\[4\], Facebook News Comments. We used this training data to build the vocabulary of English subtokens and took English cased version of BERT‑base as an initialization for English Conversational BERT.
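A minimal fill-mask sketch with this checkpoint (the example sentence is illustrative):
```python
from transformers import pipeline

unmasker = pipeline("fill-mask", model="DeepPavlov/bert-base-cased-conversational")
print(unmasker("I really [MASK] this movie!"))
```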
\[1\]: Yanran Li, Hui Su, Xiaoyu Shen, Wenjie Li, Ziqiang Cao, and Shuzi Niu. DailyDialog: A Manually Labelled Multi-turn Dialogue Dataset. IJCNLP 2017.
\[2\]: P. Lison and J. Tiedemann, 2016, OpenSubtitles2016: Extracting Large Parallel Corpora from Movie and TV Subtitles. In Proceedings of the 10th International Conference on Language Resources and Evaluation \(LREC 2016\)
\[3\]: Justine Zhang, Ravi Kumar, Sujith Ravi, Cristian Danescu-Niculescu-Mizil. Proceedings of NAACL, 2016.
\[4\]: J. Schler, M. Koppel, S. Argamon and J. Pennebaker \(2006\). Effects of Age and Gender on Blogging in Proceedings of 2006 AAAI Spring Symposium on Computational Approaches for Analyzing Weblogs.
|
|
DeepPavlov/bert-base-multilingual-cased-sentence | 2021-05-18T18:16:12.000Z | [
"pytorch",
"jax",
"bert",
"multilingual",
"arxiv:1704.05426",
"arxiv:1809.05053",
"arxiv:1908.10084",
"transformers"
] | [
".gitattributes",
"README.md",
"config.json",
"flax_model.msgpack",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"vocab.txt"
] | DeepPavlov | 219 | transformers | ---
language:
- multilingual
---
# bert-base-multilingual-cased-sentence
Sentence Multilingual BERT \(101 languages, cased, 12‑layer, 768‑hidden, 12‑heads, 180M parameters\) is a representation‑based sentence encoder for 101 languages of Multilingual BERT. It is initialized with Multilingual BERT and then fine‑tuned on English MultiNLI\[1\] and on the dev set of multilingual XNLI\[2\]. Sentence representations are mean-pooled token embeddings, in the same manner as in Sentence‑BERT\[3\].
\[1\]: Williams A., Nangia N. & Bowman S. \(2017\) A Broad-Coverage Challenge Corpus for Sentence Understanding through Inference. arXiv preprint [arXiv:1704.05426](https://arxiv.org/abs/1704.05426)
\[2\]: Williams A., Bowman S. \(2018\) XNLI: Evaluating Cross-lingual Sentence Representations. arXiv preprint [arXiv:1809.05053](https://arxiv.org/abs/1809.05053)
\[3\]: N. Reimers, I. Gurevych \(2019\) Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks. arXiv preprint [arXiv:1908.10084](https://arxiv.org/abs/1908.10084)
|
|
DeepPavlov/rubert-base-cased-conversational | 2021-05-18T18:17:23.000Z | [
"pytorch",
"jax",
"bert",
"ru",
"transformers"
] | [
".gitattributes",
"README.md",
"config.json",
"flax_model.msgpack",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"vocab.txt"
] | DeepPavlov | 4,145 | transformers | ---
language:
- ru
---
# rubert-base-cased-conversational
Conversational RuBERT \(Russian, cased, 12‑layer, 768‑hidden, 12‑heads, 180M parameters\) was trained on OpenSubtitles\[1\], [Dirty](https://d3.ru/), [Pikabu](https://pikabu.ru/), and the social media segment of the Taiga corpus\[2\]. We assembled a new vocabulary for the Conversational RuBERT model on this data and initialized the model with [RuBERT](../rubert-base-cased).
\[1\]: P. Lison and J. Tiedemann, 2016, OpenSubtitles2016: Extracting Large Parallel Corpora from Movie and TV Subtitles. In Proceedings of the 10th International Conference on Language Resources and Evaluation \(LREC 2016\)
\[2\]: Shavrina T., Shapovalova O. \(2017\) TO THE METHODOLOGY OF CORPUS CONSTRUCTION FOR MACHINE LEARNING: «TAIGA» SYNTAX TREE CORPUS AND PARSER. in proc. of “CORPORA2017”, international conference , Saint-Petersbourg, 2017.
|
|
DeepPavlov/rubert-base-cased-sentence | 2021-05-18T18:18:43.000Z | [
"pytorch",
"jax",
"bert",
"ru",
"arxiv:1508.05326",
"arxiv:1809.05053",
"arxiv:1908.10084",
"transformers"
] | [
".gitattributes",
"README.md",
"config.json",
"flax_model.msgpack",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"vocab.txt"
] | DeepPavlov | 208,288 | transformers | ---
language:
- ru
---
# rubert-base-cased-sentence
Sentence RuBERT \(Russian, cased, 12-layer, 768-hidden, 12-heads, 180M parameters\) is a representation‑based sentence encoder for Russian. It is initialized with RuBERT and fine‑tuned on SNLI\[1\] machine-translated into Russian \(via Google Translate\) and on the Russian part of the XNLI dev set\[2\]. Sentence representations are mean-pooled token embeddings, in the same manner as in Sentence‑BERT\[3\].
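A minimal sketch of computing mean-pooled sentence embeddings, following the description above (exact equivalence to the authors' pipeline is an assumption; the Russian input is illustrative):
```python
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("DeepPavlov/rubert-base-cased-sentence")
model = AutoModel.from_pretrained("DeepPavlov/rubert-base-cased-sentence")

inputs = tokenizer(["Привет, мир!"], padding=True, return_tensors="pt")  # "Hello, world!"
with torch.no_grad():
    output = model(**inputs)
# Mean pooling over tokens, ignoring padding positions
mask = inputs["attention_mask"].unsqueeze(-1).float()
embeddings = (output.last_hidden_state * mask).sum(dim=1) / mask.sum(dim=1)
print(embeddings.shape)  # (1, 768)
```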
\[1\]: S. R. Bowman, G. Angeli, C. Potts, and C. D. Manning. \(2015\) A large annotated corpus for learning natural language inference. arXiv preprint [arXiv:1508.05326](https://arxiv.org/abs/1508.05326)
\[2\]: Williams A., Bowman S. \(2018\) XNLI: Evaluating Cross-lingual Sentence Representations. arXiv preprint [arXiv:1809.05053](https://arxiv.org/abs/1809.05053)
\[3\]: N. Reimers, I. Gurevych \(2019\) Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks. arXiv preprint [arXiv:1908.10084](https://arxiv.org/abs/1908.10084)
|
|
DeepPavlov/rubert-base-cased | 2021-05-18T18:19:58.000Z | [
"pytorch",
"jax",
"bert",
"ru",
"arxiv:1905.07213",
"transformers"
] | [
".gitattributes",
"README.md",
"config.json",
"flax_model.msgpack",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"vocab.txt"
] | DeepPavlov | 18,611 | transformers | ---
language:
- ru
---
# rubert-base-cased
RuBERT \(Russian, cased, 12‑layer, 768‑hidden, 12‑heads, 180M parameters\) was trained on the Russian part of Wikipedia and news data. We used this training data to build a vocabulary of Russian subtokens and took a multilingual version of BERT‑base as an initialization for RuBERT\[1\].
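A minimal fill-mask sketch (the example sentence, "Moscow is the [MASK] of Russia.", is illustrative):
```python
from transformers import pipeline

unmasker = pipeline("fill-mask", model="DeepPavlov/rubert-base-cased")
print(unmasker("Москва - [MASK] России."))
```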
\[1\]: Kuratov, Y., Arkhipov, M. \(2019\). Adaptation of Deep Bidirectional Multilingual Transformers for Russian Language. arXiv preprint [arXiv:1905.07213](https://arxiv.org/abs/1905.07213).
|
|
DeividasM/wav2vec2-large-xlsr-53-lithuanian | 2021-03-29T18:04:15.000Z | [
"pytorch",
"wav2vec2",
"lt",
"dataset:common_voice",
"transformers",
"audio",
"automatic-speech-recognition",
"speech",
"xlsr-fine-tuning-week",
"license:apache-2.0"
] | automatic-speech-recognition | [
".gitattributes",
"README.md",
"config.json",
"preprocessor_config.json",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"vocab.json"
] | DeividasM | 101 | transformers | ---
language: lt
datasets:
- common_voice
tags:
- audio
- automatic-speech-recognition
- speech
- xlsr-fine-tuning-week
license: apache-2.0
model-index:
- name: XLSR Wav2Vec2 Lithuanian by Deividas Mataciunas
results:
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice lt
type: common_voice
args: lt
metrics:
- name: Test WER
type: wer
value: 56.55
---
# Wav2Vec2-Large-XLSR-53-Lithuanian
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Lithuanian using the [Common Voice](https://huggingface.co/datasets/common_voice) dataset.
When using this model, make sure that your speech input is sampled at 16kHz.
## Usage
The model can be used directly (without a language model) as follows:
```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
test_dataset = load_dataset("common_voice", "lt", split="test[:2%]")
processor = Wav2Vec2Processor.from_pretrained("DeividasM/wav2vec2-large-xlsr-53-lithuanian")
model = Wav2Vec2ForCTC.from_pretrained("DeividasM/wav2vec2-large-xlsr-53-lithuanian")
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
    speech_array, sampling_rate = torchaudio.load(batch["path"])
    batch["speech"] = resampler(speech_array).squeeze().numpy()
    return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
    logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset["sentence"][:2])
```
## Evaluation
The model can be evaluated as follows on the Lithuanian test data of Common Voice.
```python
import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import re
test_dataset = load_dataset("common_voice", "lt", split="test")
wer = load_metric("wer")
processor = Wav2Vec2Processor.from_pretrained("DeividasM/wav2vec2-large-xlsr-53-lithuanian")
model = Wav2Vec2ForCTC.from_pretrained("DeividasM/wav2vec2-large-xlsr-53-lithuanian")
model.to("cuda")
chars_to_ignore_regex = '[\\,\\?\\.\\!\\-\\;\\:\\"\\“]'
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
\\tbatch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower()
\\tspeech_array, sampling_rate = torchaudio.load(batch["path"])
\\tbatch["speech"] = resampler(speech_array).squeeze().numpy()
\\treturn batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def evaluate(batch):
    inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
    with torch.no_grad():
        logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits
    pred_ids = torch.argmax(logits, dim=-1)
    batch["pred_strings"] = processor.batch_decode(pred_ids)
    return batch
result = test_dataset.map(evaluate, batched=True, batch_size=8)
print("WER: {:2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"])))
```
**Test Result**: 56.55 %
## Training
The Common Voice `train` and `validation` splits were used for training.
|
DemangeJeremy/4-sentiments-with-flaubert | 2021-03-29T00:03:14.000Z | [
"pytorch",
"flaubert",
"text-classification",
"fr",
"transformers",
"sentiments",
"french",
"flaubert-large"
] | text-classification | [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin"
] | DemangeJeremy | 382 | transformers | ---
language: fr
tags:
- sentiments
- text-classification
- flaubert
- french
- flaubert-large
---
# 4-sentiment detection model with FlauBERT (mixed, negative, objective, positive)
Work is currently in progress. I will modify the model in the coming days.
### How to use it?
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
from transformers import pipeline
loaded_tokenizer = AutoTokenizer.from_pretrained('flaubert/flaubert_large_cased')
loaded_model = AutoModelForSequenceClassification.from_pretrained("DemangeJeremy/4-sentiments-with-flaubert")
nlp = pipeline('sentiment-analysis', model=loaded_model, tokenizer=loaded_tokenizer)
print(nlp("Je suis plutôt confiant."))
```
```
[{'label': 'OBJECTIVE', 'score': 0.3320835530757904}]
```
## Model evaluation results
| Epoch | Validation Loss | Samples Per Second |
|:------:|:--------------:|:------------------:|
| 1 | 2.219246 | 49.476000 |
| 2 | 1.883753 | 47.259000 |
| 3 | 1.747969 | 44.957000 |
| 4 | 1.695606 | 43.872000 |
| 5 | 1.641470 | 45.726000 |
## Citation
For any use of this model, please use this citation:
> Jérémy Demange, Four sentiments with FlauBERT, (2021), Hugging Face repository, <https://huggingface.co/DemangeJeremy/4-sentiments-with-flaubert>
|
Deniskin/emailer_medium_300 | 2021-06-12T14:29:52.000Z | [
"pytorch",
"gpt2",
"lm-head",
"causal-lm",
"transformers",
"text-generation"
] | text-generation | [
".gitattributes",
"config.json",
"merges.txt",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"vocab.json"
] | Deniskin | 22 | transformers | |
Deniskin/essays_small_2000 | 2021-04-21T17:01:46.000Z | [] | [
".gitattributes"
] | Deniskin | 0 | |||
Deniskin/essays_small_2000i | 2021-04-21T18:16:43.000Z | [] | [
".gitattributes"
] | Deniskin | 0 | |||
Deniskin/gpt3_medium | 2021-05-21T09:41:39.000Z | [
"pytorch",
"gpt2",
"lm-head",
"causal-lm",
"transformers",
"text-generation"
] | text-generation | [
".gitattributes",
"config.json",
"merges.txt",
"pytorch_model.bin",
"vocab.json"
] | Deniskin | 132 | transformers | |
Dev-DGT/food-dbert-multiling | 2021-06-18T21:55:58.000Z | [
"pytorch",
"distilbert",
"token-classification",
"transformers"
] | token-classification | [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"training_args.bin",
"vocab.txt"
] | Dev-DGT | 0 | transformers | |
Devmapall/paraphrase-quora | 2021-02-16T21:05:11.000Z | [
"pytorch",
"t5",
"seq2seq",
"transformers",
"text2text-generation"
] | text2text-generation | [
".gitattributes",
"config.json",
"pytorch_model.bin",
"special_tokens_map.json",
"spiece.model",
"tokenizer_config.json"
] | Devmapall | 23 | transformers | |
DewiBrynJones/wav2vec2-large-xlsr-welsh | 2021-04-08T20:36:38.000Z | [
"pytorch",
"wav2vec2",
"cy",
"dataset:common_voice",
"transformers",
"audio",
"automatic-speech-recognition",
"speech",
"xlsr-fine-tuning-week",
"license:apache-2.0"
] | automatic-speech-recognition | [
".gitattributes",
"README.md",
"config.json",
"preprocessor_config.json",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"vocab.json"
] | DewiBrynJones | 47 | transformers | ---
language: cy
datasets:
- common_voice
metrics:
- wer
tags:
- audio
- automatic-speech-recognition
- speech
- xlsr-fine-tuning-week
license: apache-2.0
model-index:
- name: Wav2Vec2-XLSR-53-Welsh (Bangor University)
results:
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice cy
type: common_voice
args: cy
metrics:
- name: Test WER
type: wer
value: 25.31
---
# Wav2Vec2-Large-XLSR-Welsh
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on the [Welsh Common Voice dataset](https://huggingface.co/datasets/common_voice).
When using this model, make sure that your speech input is sampled at 16kHz.
## Usage
The model can be used directly (without a language model) as follows:
```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
test_dataset = load_dataset("common_voice", "cy", split="test[:2%]")
processor = Wav2Vec2Processor.from_pretrained("DewiBrynJones/wav2vec2-large-xlsr-welsh")
model = Wav2Vec2ForCTC.from_pretrained("DewiBrynJones/wav2vec2-large-xlsr-welsh")
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
    speech_array, sampling_rate = torchaudio.load(batch["path"])
    batch["speech"] = resampler(speech_array).squeeze().numpy()
    return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
    logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset["sentence"][:2])
```
## Evaluation
The model can be evaluated as follows on the Welsh test data of Common Voice.
```python
import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import re
test_dataset = load_dataset("common_voice", "cy", split="test")
wer = load_metric("wer")
processor = Wav2Vec2Processor.from_pretrained("DewiBrynJones/wav2vec2-large-xlsr-welsh")
model = Wav2Vec2ForCTC.from_pretrained("DewiBrynJones/wav2vec2-large-xlsr-welsh")
model.to("cuda")
chars_to_ignore_regex = '[\\,\\?\\.\\!\\-\\u2013\\u2014\\;\\:\\"\\%\\\\]'
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower()
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def evaluate(batch):
    inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
    with torch.no_grad():
        logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits
    pred_ids = torch.argmax(logits, dim=-1)
    batch["pred_strings"] = processor.batch_decode(pred_ids)
    return batch
result = test_dataset.map(evaluate, batched=True, batch_size=8)
print("WER: {:2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"])))
```
**Test Result**: 25.31%
# Training
A Docker based setup for training and evaluating this model can be found at GitHub: https://github.com/techiaith/xlsr-fine-tuning-week
# Example Predictions
| Prediction | Reference |
|---|---|
| rhedais i ffwrdd heb ddweud dim wrthi ym beth digwyddodd | Rhedais i ffwrdd heb ddweud dim wrthi am beth ddigwyddodd. |
| ac yr oedd y ferch yn ofnus d | Ac yr oedd y ferch yn ofnus. |
|
Dhruva/Interstellar | 2021-05-12T19:55:29.000Z | [] | [
".gitattributes"
] | Dhruva | 0 | |||
DimaOrekhov/cubert-method-name | 2020-12-28T00:30:11.000Z | [
"pytorch",
"encoder-decoder",
"seq2seq",
"transformers",
"text2text-generation"
] | text2text-generation | [
".gitattributes",
"config.json",
"pytorch_model.bin"
] | DimaOrekhov | 17 | transformers | |
DimaOrekhov/transformer-method-name | 2020-12-28T00:39:31.000Z | [
"pytorch",
"encoder-decoder",
"seq2seq",
"transformers",
"text2text-generation"
] | text2text-generation | [
".gitattributes",
"config.json",
"pytorch_model.bin"
] | DimaOrekhov | 11 | transformers | |
Donghyun/L2_BERT | 2021-06-16T07:06:01.000Z | [] | [
".gitattributes"
] | Donghyun | 0 | |||
Dongjae/mrc2reader | 2021-05-21T13:25:57.000Z | [
"pytorch",
"xlm-roberta",
"question-answering",
"transformers"
] | question-answering | [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"sentencepiece.bpe.model",
"special_tokens_map.json",
"tokenizer_config.json"
] | Dongjae | 153 | transformers | The Reader model is for Korean Question Answering
The backbone model is deepset/xlm-roberta-large-squad2.
It is a finetuned model with KorQuAD-v1 dataset.
As a result of verification using KorQuAD evaluation dataset, it showed approximately 87% and 92% respectively for the EM score and F1 score.
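A minimal sketch with the Transformers question-answering pipeline (the Korean question/context pair is illustrative):
```python
from transformers import pipeline

qa = pipeline("question-answering", model="Dongjae/mrc2reader", tokenizer="Dongjae/mrc2reader")
# "What is the capital of South Korea?" / "The capital of South Korea is Seoul."
print(qa(question="대한민국의 수도는 어디인가?", context="대한민국의 수도는 서울이다."))
```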
Thank you |
DrMatters/rubert_cased | 2021-05-19T11:14:32.000Z | [
"pytorch",
"jax",
"bert",
"transformers"
] | [
".gitattributes",
"config.json",
"flax_model.msgpack",
"pytorch_model.bin",
"vocab.txt"
] | DrMatters | 62 | transformers | ||
EMBEDDIA/crosloengual-bert | 2021-05-18T18:21:38.000Z | [
"pytorch",
"jax",
"bert",
"masked-lm",
"hr",
"sl",
"en",
"multilingual",
"arxiv:2006.07890",
"transformers",
"license:cc-by-4.0",
"fill-mask"
] | fill-mask | [
".gitattributes",
"README.md",
"config.json",
"flax_model.msgpack",
"pytorch_model.bin",
"tokenizer_config.json",
"vocab.txt"
] | EMBEDDIA | 477 | transformers | ---
language:
- hr
- sl
- en
- multilingual
license: cc-by-4.0
---
# CroSloEngual BERT
CroSloEngual BERT is a trilingual model, using bert-base architecture, trained on Croatian, Slovenian, and English corpora. Focusing on three languages, the model performs better than [multilingual BERT](https://huggingface.co/bert-base-multilingual-cased), while still offering an option for cross-lingual knowledge transfer, which a monolingual model wouldn't.
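A minimal fill-mask sketch (the Slovenian example sentence, "Ljubljana is the capital [MASK] of Slovenia.", is illustrative):
```python
from transformers import pipeline

unmasker = pipeline("fill-mask", model="EMBEDDIA/crosloengual-bert")
print(unmasker("Ljubljana je glavno [MASK] Slovenije."))
```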
Evaluation is presented in our article:
```
@Inproceedings{ulcar-robnik2020finest,
author = "Ulčar, M. and Robnik-Šikonja, M.",
year = 2020,
title = "{FinEst BERT} and {CroSloEngual BERT}: less is more in multilingual models",
editor = "Sojka, P and Kopeček, I and Pala, K and Horák, A",
booktitle = "Text, Speech, and Dialogue {TSD 2020}",
series = "Lecture Notes in Computer Science",
volume = 12284,
publisher = "Springer",
url = "https://doi.org/10.1007/978-3-030-58323-1_11",
}
```
The preprint is available at [arxiv.org/abs/2006.07890](https://arxiv.org/abs/2006.07890). |
EMBEDDIA/est-roberta | 2021-03-30T12:34:53.000Z | [
"pytorch",
"camembert",
"masked-lm",
"et",
"transformers",
"license:cc-by-sa-4.0",
"fill-mask"
] | fill-mask | [
".gitattributes",
"README.md",
"config.json",
"dict.txt",
"pytorch_model.bin",
"sentencepiece.bpe.model"
] | EMBEDDIA | 32 | transformers | ---
language:
- et
license: cc-by-sa-4.0
---
# Usage
Load in transformers library with:
```
from transformers import AutoTokenizer, AutoModelForMaskedLM
tokenizer = AutoTokenizer.from_pretrained("EMBEDDIA/est-roberta", use_fast=False)
model = AutoModelForMaskedLM.from_pretrained("EMBEDDIA/est-roberta")
```
**NOTE**: it is currently *critically important* to add the `use_fast=False` parameter to the tokenizer if using transformers version 4+ (prior versions have `use_fast=False` as the default). By default it attempts to load a fast tokenizer, which might work (i.e. not result in an error) but not correctly, as there is currently no fast-tokenizer support for Camembert-based models.
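Building on the snippet above, a minimal fill-mask sketch (the Estonian example sentence is illustrative):
```python
from transformers import pipeline

fill_mask = pipeline("fill-mask", model=model, tokenizer=tokenizer)
# Hypothetical example: "Tartu is Estonia's <mask> city."
print(fill_mask("Tartu on Eesti <mask> linn."))
```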
# Est-RoBERTa
Est-RoBERTa model is a monolingual Estonian BERT-like model. It is closely related to French Camembert model https://camembert-model.fr/. The Estonian corpora used for training the model have 2.51 billion tokens in total. The subword vocabulary contains 40,000 tokens.
Est-RoBERTa was trained for 40 epochs.
|
EMBEDDIA/finest-bert | 2021-05-18T18:22:50.000Z | [
"pytorch",
"jax",
"bert",
"masked-lm",
"fi",
"et",
"en",
"multilingual",
"arxiv:2006.07890",
"transformers",
"license:cc-by-4.0",
"fill-mask"
] | fill-mask | [
".gitattributes",
"README.md",
"config.json",
"flax_model.msgpack",
"pytorch_model.bin",
"tokenizer_config.json",
"vocab.txt"
] | EMBEDDIA | 122 | transformers | ---
language:
- fi
- et
- en
- multilingual
license: cc-by-4.0
---
# FinEst BERT
FinEst BERT is a trilingual model, using bert-base architecture, trained on Finnish, Estonian, and English corpora. Focusing on three languages, the model performs better than [multilingual BERT](https://huggingface.co/bert-base-multilingual-cased), while still offering an option for cross-lingual knowledge transfer, which a monolingual model wouldn't.
Evaluation is presented in our article:
```
@Inproceedings{ulcar-robnik2020finest,
author = "Ulčar, M. and Robnik-Šikonja, M.",
year = 2020,
title = "{FinEst BERT} and {CroSloEngual BERT}: less is more in multilingual models",
editor = "Sojka, P and Kopeček, I and Pala, K and Horák, A",
booktitle = "Text, Speech, and Dialogue {TSD 2020}",
series = "Lecture Notes in Computer Science",
volume = 12284,
publisher = "Springer",
url = "https://doi.org/10.1007/978-3-030-58323-1_11",
}
```
The preprint is available at [arxiv.org/abs/2006.07890](https://arxiv.org/abs/2006.07890). |
EMBEDDIA/litlat-bert | 2021-03-05T14:11:15.000Z | [
"pytorch",
"xlm-roberta",
"masked-lm",
"lt",
"lv",
"en",
"multilingual",
"transformers",
"license:cc-by-sa-4.0",
"fill-mask"
] | fill-mask | [
".gitattributes",
"README.md",
"config.json",
"dict.txt",
"pytorch_model.bin",
"sentencepiece.bpe.model"
] | EMBEDDIA | 13 | transformers | ---
language:
- lt
- lv
- en
- multilingual
license: cc-by-sa-4.0
---
# LitLat BERT
LitLat BERT is a trilingual model, using xlm-roberta-base architecture, trained on Lithuanian, Latvian, and English corpora. Focusing on three languages, the model performs better than [multilingual BERT](https://huggingface.co/bert-base-multilingual-cased), while still offering an option for cross-lingual knowledge transfer, which a monolingual model wouldn't.
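A minimal loading sketch with the transformers library (whether a slow tokenizer is required, as for other EMBEDDIA models, is an assumption):
```python
from transformers import AutoTokenizer, AutoModelForMaskedLM

tokenizer = AutoTokenizer.from_pretrained("EMBEDDIA/litlat-bert", use_fast=False)
model = AutoModelForMaskedLM.from_pretrained("EMBEDDIA/litlat-bert")
```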
### Named entity recognition evaluation
We compare LitLat BERT with multilingual BERT (mBERT), XLM-RoBERTa (XLM-R) and the monolingual Latvian BERT (LVBERT) (Znotins and Barzdins, 2020). We report the results as the macro-averaged F1 score over the 3 named entity classes shared by all three datasets: person, location, organization.
Language | mBERT | XLM-R | LVBERT | LitLat
---|---|---|---|---
Latvian | 0.830 | 0.865 | 0.797 | **0.881**
Lithuanian | 0.797 | 0.817 | / | **0.850**
English | 0.939 | 0.937 | / | **0.943**
|
EMBEDDIA/sloberta | 2021-03-30T12:24:45.000Z | [
"pytorch",
"camembert",
"masked-lm",
"sl",
"transformers",
"license:cc-by-sa-4.0",
"fill-mask"
] | fill-mask | [
".gitattributes",
"README.md",
"config.json",
"dict.txt",
"pytorch_model.bin",
"sentencepiece.bpe.model"
] | EMBEDDIA | 608 | transformers | ---
language:
- sl
license: cc-by-sa-4.0
---
# Usage
Load in transformers library with:
```
from transformers import AutoTokenizer, AutoModelForMaskedLM
tokenizer = AutoTokenizer.from_pretrained("EMBEDDIA/sloberta", use_fast=False)
model = AutoModelForMaskedLM.from_pretrained("EMBEDDIA/sloberta")
```
**NOTE**: it is currently *critically important* to add the `use_fast=False` parameter to the tokenizer if using transformers version 4+ (prior versions have `use_fast=False` as the default). By default it attempts to load a fast tokenizer, which will work (i.e. not result in an error), but it will not correctly map tokens to their IDs, and the performance on any task will be extremely bad.
# SloBERTa
SloBERTa is a monolingual Slovene BERT-like model. It is closely related to the French CamemBERT model (https://camembert-model.fr/). The corpora used for training the model contain 3.47 billion tokens in total. The subword vocabulary contains 32,000 tokens. The scripts and programs used for data preparation and training are available at https://github.com/clarinsi/Slovene-BERT-Tool
SloBERTa was trained for 200,000 iterations or about 98 epochs.
## Corpora
The following corpora were used for training the model:
* Gigafida 2.0
* Kas 1.0
* Janes 1.0 (only Janes-news, Janes-forum, Janes-blog, Janes-wiki subcorpora)
* Slovenian parliamentary corpus siParl 2.0
* slWaC
|
EMBO/bio-lm | 2021-05-20T11:43:05.000Z | [
"pytorch",
"jax",
"roberta",
"masked-lm",
"english",
"dataset:EMBO/biolang",
"transformers",
"language model",
"fill-mask"
] | fill-mask | [
".gitattributes",
"README.md",
"config.json",
"flax_model.msgpack",
"pytorch_model.bin",
"training_args.bin"
] | EMBO | 12 | transformers | ---
language:
- english
thumbnail:
tags:
- language model
license:
datasets:
- EMBO/biolang
metrics:
-
---
# bio-lm
## Model description
This model is a [RoBERTa base pre-trained model](https://huggingface.co/roberta-base) that was further trained using a masked language modeling task on a compendium of English scientific textual examples from the life sciences using the [BioLang dataset](https://huggingface.co/datasets/EMBO/biolang).
## Intended uses & limitations
#### How to use
The intended use of this model is to be fine-tuned for downstream tasks, token classification in particular.
To have a quick check of the model as-is in a fill-mask task:
```python
from transformers import pipeline, RobertaTokenizerFast
tokenizer = RobertaTokenizerFast.from_pretrained('roberta-base', max_len=512)
text = "Let us try this model to see if it <mask>."
fill_mask = pipeline(
"fill-mask",
model='EMBO/bio-lm',
tokenizer=tokenizer
)
fill_mask(text)
```
#### Limitations and bias
This model should be fine-tuned on a specific task such as token classification.
The model must be used with the `roberta-base` tokenizer.
## Training data
The model was trained with a masked language modeling task on the [BioLang dataset](https://huggingface.co/datasets/EMBO/biolang), which includes 12 million examples from abstracts and figure legends extracted from papers published in the life sciences.
## Training procedure
The training was run on an NVIDIA DGX Station with 4x Tesla V100 GPUs.
Training code is available at https://github.com/source-data/soda-roberta
- Command: `python -m lm.train /data/json/oapmc_abstracts_figs/ MLM`
- Tokenizer vocab size: 50265
- Training data: EMBO/biolang MLM
- Training with: 12005390 examples
- Evaluating on: 36713 examples
- Epochs: 3.0
- `per_device_train_batch_size`: 16
- `per_device_eval_batch_size`: 16
- `learning_rate`: 5e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- tensorboard run: lm-MLM-2021-01-27T15-17-43.113766
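For orientation only, a sketch of how the reported hyperparameters above would map onto `transformers.TrainingArguments` (the actual training script lives in the soda-roberta repository; the `output_dir` here is a placeholder):
```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="lm-MLM",            # placeholder, not the original path
    num_train_epochs=3.0,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    learning_rate=5e-05,
    weight_decay=0.0,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-08,
    max_grad_norm=1.0,
)
```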
End of training:
```
trainset: 'loss': 0.8653350830078125
validation set: 'eval_loss': 0.8192330598831177, 'eval_recall': 0.8154601116513597
```
## Eval results
Eval on test set:
```
recall: 0.814471959728645
```
|
EMBO/sd-ner | 2021-05-20T11:44:10.000Z | [
"pytorch",
"jax",
"roberta",
"token-classification",
"english",
"dataset:EMBO/sd-nlp",
"transformers",
"token classification"
] | token-classification | [
".gitattributes",
"README.md",
"config.json",
"flax_model.msgpack",
"pytorch_model.bin",
"training_args.bin"
] | EMBO | 17 | transformers | ---
language:
- english
thumbnail:
tags:
- token classification
license:
datasets:
- EMBO/sd-nlp
metrics:
-
---
# sd-ner
## Model description
This model is a [RoBERTa base model](https://huggingface.co/roberta-base) that was further trained using a masked language modeling task on a compendium of English scientific textual examples from the life sciences using the [BioLang dataset](https://huggingface.co/datasets/EMBO/biolang). It was then fine-tuned for token classification on the SourceData [sd-nlp](https://huggingface.co/datasets/EMBO/sd-nlp) dataset with the `NER` task to perform Named Entity Recognition of bioentities.
## Intended uses & limitations
#### How to use
The intended use of this model is for Named Entity Recognition of biological entities used in SourceData annotations (https://sourcedata.embo.org), including small molecules, gene products (genes and proteins), subcellular components, cell lines and cell types, organs and tissues, and species, as well as experimental methods.
To have a quick check of the model:
```python
from transformers import pipeline, RobertaTokenizerFast, RobertaForTokenClassification
example = """<s> F. Western blot of input and eluates of Upf1 domains purification in a Nmd4-HA strain. The band with the # might corresponds to a dimer of Upf1-CH, bands marked with a star correspond to residual signal with the anti-HA antibodies (Nmd4). Fragments in the eluate have a smaller size because the protein A part of the tag was removed by digestion with the TEV protease. G6PDH served as a loading control in the input samples </s>"""
tokenizer = RobertaTokenizerFast.from_pretrained('roberta-base', max_len=512)
model = RobertaForTokenClassification.from_pretrained('EMBO/sd-ner')
ner = pipeline('ner', model, tokenizer=tokenizer)
res = ner(example)
for r in res:
print(r['word'], r['entity'])
```
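To merge word pieces into entity spans, the pipeline's grouping option can be used (a sketch; in newer transformers versions the argument is `aggregation_strategy="simple"` instead of `grouped_entities=True`):
```python
# Reuses `model`, `tokenizer` and `example` from the snippet above.
ner_grouped = pipeline('ner', model=model, tokenizer=tokenizer, grouped_entities=True)
for ent in ner_grouped(example):
    print(ent['word'], ent['entity_group'], round(ent['score'], 3))
```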
#### Limitations and bias
The model must be used with the `roberta-base` tokenizer.
## Training data
The model was trained for token classification using the [EMBO/sd-nlp `NER`](https://huggingface.co/datasets/EMBO/sd-nlp) dataset, which includes manually annotated examples.
## Training procedure
The training was run on an NVIDIA DGX Station with 4x Tesla V100 GPUs.
Training code is available at https://github.com/source-data/soda-roberta
- Command: `python -m tokcl.train NER --num_train_epochs=3.5`
- Tokenizer vocab size: 50265
- Training data: EMBO/sd-nlp NER
- Training with 31410 examples.
- Evaluating on 8861 examples.
- Training on 15 features: O, I-SMALL_MOLECULE, B-SMALL_MOLECULE, I-GENEPROD, B-GENEPROD, I-SUBCELLULAR, B-SUBCELLULAR, I-CELL, B-CELL, I-TISSUE, B-TISSUE, I-ORGANISM, B-ORGANISM, I-EXP_ASSAY, B-EXP_ASSAY (see the label-mapping sketch after this list)
- Epochs: 3.5
- `per_device_train_batch_size`: 32
- `per_device_eval_batch_size`: 32
- `learning_rate`: 0.0001
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
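The 15 labels above follow the IOB2 tagging scheme. A sketch of the corresponding `id2label` mapping (the index ordering is an assumption, taken from the list above; the authoritative mapping is in the model's `config.json`):
```python
labels = [
    "O",
    "I-SMALL_MOLECULE", "B-SMALL_MOLECULE",
    "I-GENEPROD", "B-GENEPROD",
    "I-SUBCELLULAR", "B-SUBCELLULAR",
    "I-CELL", "B-CELL",
    "I-TISSUE", "B-TISSUE",
    "I-ORGANISM", "B-ORGANISM",
    "I-EXP_ASSAY", "B-EXP_ASSAY",
]
id2label = dict(enumerate(labels))  # {0: "O", 1: "I-SMALL_MOLECULE", ...}
```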
## Eval results
On test set with `sklearn.metrics`:
```
precision recall f1-score support
CELL 0.77 0.81 0.79 3477
EXP_ASSAY 0.71 0.70 0.71 7049
GENEPROD 0.86 0.90 0.88 16140
ORGANISM 0.80 0.82 0.81 2759
SMALL_MOLECULE 0.78 0.82 0.80 4446
SUBCELLULAR 0.71 0.75 0.73 2125
TISSUE 0.70 0.75 0.73 1971
micro avg 0.79 0.82 0.81 37967
macro avg 0.76 0.79 0.78 37967
weighted avg 0.79 0.82 0.81 37967
```
|
EMBO/sd-panels | 2021-05-20T11:45:27.000Z | [
"pytorch",
"jax",
"roberta",
"token-classification",
"transformers"
] | token-classification | [
".gitattributes",
"config.json",
"flax_model.msgpack",
"pytorch_model.bin",
"training_args.bin"
] | EMBO | 8 | transformers | |
EMBO/sd-roles | 2021-05-20T11:47:02.000Z | [
"pytorch",
"jax",
"roberta",
"token-classification",
"english",
"dataset:EMBO/sd-panels",
"transformers",
"token classification"
] | token-classification | [
".gitattributes",
"README.md",
"config.json",
"flax_model.msgpack",
"pytorch_model.bin",
"training_args.bin"
] | EMBO | 8 | transformers | ---
language:
- english
thumbnail:
tags:
- token classification
license:
datasets:
- EMBO/sd-panels
metrics:
-
---
# sd-roles
## Model description
This model is a [RoBERTa base model](https://huggingface.co/roberta-base) that was further trained using a masked language modeling task on a compendium of English scientific textual examples from the life sciences using the [BioLang dataset](https://huggingface.co/datasets/EMBO/biolang). It was then fine-tuned for token classification on the SourceData [sd-nlp](https://huggingface.co/datasets/EMBO/sd-nlp) dataset with the `ROLES` task to perform pure context-dependent semantic role classification of bioentities.
## Intended uses & limitations
#### How to use
The intended use of this model is to infer the semantic role of gene products (genes and proteins) with regard to the causal hypotheses tested in experiments reported in scientific papers.
To have a quick check of the model:
```python
from transformers import pipeline, RobertaTokenizerFast, RobertaForTokenClassification
example = """<s>The <mask> overexpression in cells caused an increase in <mask> expression.</s>"""
tokenizer = RobertaTokenizerFast.from_pretrained('roberta-base', max_len=512)
model = RobertaForTokenClassification.from_pretrained('EMBO/sd-roles')
ner = pipeline('ner', model, tokenizer=tokenizer)
res = ner(example)
for r in res:
print(r['word'], r['entity'])
```
#### Limitations and bias
The model must be used with the `roberta-base` tokenizer.
## Training data
The model was trained for token classification using the [EMBO/sd-panels dataset](https://huggingface.co/datasets/EMBO/sd-panels), which includes manually annotated examples.
## Training procedure
The training was run on an NVIDIA DGX Station with 4x Tesla V100 GPUs.
Training code is available at https://github.com/source-data/soda-roberta
- Command: `python -m tokcl.train /data/json/sd_panels NER --num_train_epochs=3.5`
- Tokenizer vocab size: 50265
- Training data: EMBO/sd-panels
- Training with 31410 examples.
- Evaluating on 8861 examples.
- Training on 15 features: O, I-SMALL_MOLECULE, B-SMALL_MOLECULE, I-GENEPROD, B-GENEPROD, I-SUBCELLULAR, B-SUBCELLULAR, I-CELL, B-CELL, I-TISSUE, B-TISSUE, I-ORGANISM, B-ORGANISM, I-EXP_ASSAY, B-EXP_ASSAY
- Epochs: 3.5
- `per_device_train_batch_size`: 32
- `per_device_eval_batch_size`: 32
- `learning_rate`: 0.0001
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
## Eval results
On test set with `sklearn.metrics`:
```
precision recall f1-score support
CELL 0.77 0.81 0.79 3477
EXP_ASSAY 0.71 0.70 0.71 7049
GENEPROD 0.86 0.90 0.88 16140
ORGANISM 0.80 0.82 0.81 2759
SMALL_MOLECULE 0.78 0.82 0.80 4446
SUBCELLULAR 0.71 0.75 0.73 2125
TISSUE 0.70 0.75 0.73 1971
micro avg 0.79 0.82 0.81 37967
macro avg 0.76 0.79 0.78 37967
weighted avg 0.79 0.82 0.81 37967
```
|
Easton/w2v-ctc_callhome | 2021-03-21T12:11:28.000Z | [] | [
".gitattributes",
"w2v-ctc-small_callhome_ma.pt"
] | Easton | 0 | |||
Ebtihal/AraBERTo | 2021-05-11T23:26:57.000Z | [
"pytorch",
"tf",
"bert",
"masked-lm",
"transformers",
"fill-mask"
] | fill-mask | [
".gitattributes",
"README.md",
"config.json",
"merges.txt",
"pytorch_model.bin",
"tf_model.h5",
"tokenizer_config.json",
"training_args.bin",
"vocab.json"
] | Ebtihal | 7 | transformers | Arabic Language Model |
Ed/Test | 2021-06-12T01:04:59.000Z | [] | [
".gitattributes",
"README.md"
] | Ed | 0 | |||
EhsanAghazadeh/bert-large-uncased-CoLA_A | 2021-05-18T18:26:14.000Z | [
"pytorch",
"jax",
"bert",
"text-classification",
"transformers"
] | text-classification | [
".gitattributes",
"config.json",
"flax_model.msgpack",
"optimizer.pt",
"pytorch_model.bin",
"scheduler.pt",
"special_tokens_map.json",
"tokenizer_config.json",
"trainer_state.json",
"training_args.bin",
"vocab.txt"
] | EhsanAghazadeh | 8 | transformers | |
EhsanAghazadeh/bert-large-uncased-CoLA_B | 2021-05-18T18:29:53.000Z | [
"pytorch",
"jax",
"bert",
"text-classification",
"transformers"
] | text-classification | [
".gitattributes",
"config.json",
"flax_model.msgpack",
"optimizer.pt",
"pytorch_model.bin",
"scheduler.pt",
"special_tokens_map.json",
"tokenizer_config.json",
"trainer_state.json",
"training_args.bin",
"vocab.txt"
] | EhsanAghazadeh | 7 | transformers | |
EhsanAghazadeh/xlnet-large-cased-CoLA_A | 2021-04-19T10:05:16.000Z | [
"pytorch",
"xlnet",
"text-classification",
"transformers"
] | text-classification | [
".gitattributes",
"config.json",
"optimizer.pt",
"pytorch_model.bin",
"scheduler.pt",
"special_tokens_map.json",
"spiece.model",
"tokenizer_config.json",
"trainer_state.json",
"training_args.bin"
] | EhsanAghazadeh | 15 | transformers | |
EhsanAghazadeh/xlnet-large-cased-CoLA_B | 2021-04-19T10:59:46.000Z | [
"pytorch",
"xlnet",
"text-classification",
"transformers"
] | text-classification | [
".gitattributes",
"config.json",
"optimizer.pt",
"pytorch_model.bin",
"scheduler.pt",
"special_tokens_map.json",
"spiece.model",
"tokenizer_config.json",
"trainer_state.json",
"training_args.bin"
] | EhsanAghazadeh | 8 | transformers | |
EhsanAghazadeh/xlnet-large-cased-CoLA_C | 2021-04-18T18:42:36.000Z | [
"pytorch",
"xlnet",
"text-classification",
"transformers"
] | text-classification | [
".gitattributes",
"config.json",
"optimizer.pt",
"pytorch_model.bin",
"scheduler.pt",
"special_tokens_map.json",
"spiece.model",
"tokenizer_config.json",
"trainer_state.json",
"training_args.bin"
] | EhsanAghazadeh | 12 | transformers |