modelId | lastModified | tags | pipeline_tag | files | publishedBy | downloads_last_month | library | modelCard
---|---|---|---|---|---|---|---|---
Eirca/add_vocab_fin | 2021-03-13T22:53:35.000Z | [] | [
".gitattributes"
] | Eirca | 0 | |||
Eirca/vocab_add_fin | 2021-03-14T04:42:43.000Z | [] | [
".gitattributes"
] | Eirca | 0 | |||
Elbe/RoBERTaforIns | 2021-05-20T11:47:50.000Z | [
"pytorch",
"jax",
"roberta",
"masked-lm",
"transformers",
"fill-mask"
] | fill-mask | [
".gitattributes",
"config.json",
"flax_model.msgpack",
"merges.txt",
"pytorch_model.bin",
"training_args.bin",
"vocab.json"
] | Elbe | 12 | transformers | |
Elbe/RoBERTaforIns_2 | 2020-11-15T22:08:30.000Z | [] | [
".gitattributes"
] | Elbe | 0 | |||
Elbe/RoBERTaforIns_full | 2020-11-15T22:14:28.000Z | [] | [
".gitattributes"
] | Elbe | 0 | |||
EleutherAI/gpt-neo-1.3B | 2021-05-20T23:59:56.000Z | [
"pytorch",
"rust",
"gpt_neo",
"causal-lm",
"en",
"dataset:the Pile",
"transformers",
"text generation",
"the Pile",
"license:apache-2.0",
"text-generation"
] | text-generation | [
".gitattributes",
"README.md",
"config.json",
"merges.txt",
"pytorch_model.bin",
"rust_model.ot",
"special_tokens_map.json",
"tokenizer_config.json",
"vocab.json"
] | EleutherAI | 43,938 | transformers | ---
language:
- en
tags:
- text generation
- pytorch
- the Pile
- causal-lm
license: apache-2.0
datasets:
- the Pile
---
# GPT-Neo 1.3B
## Model Description
GPT-Neo 1.3B is a transformer model designed using EleutherAI's replication of the GPT-3 architecture. GPT-Neo refers to the class of models, while 1.3B represents the number of parameters of this particular pre-trained model.
## Training data
GPT-Neo 1.3B was trained on the Pile, a large scale curated dataset created by EleutherAI for the purpose of training this model.
## Training procedure
This model was trained on the Pile for 380 billion tokens over 362,000 steps. It was trained as an autoregressive language model, using cross-entropy loss.
## Intended Use and Limitations
Through this training, the model learns an inner representation of the English language that can then be used to extract features useful for downstream tasks. The model is, however, best at what it was pretrained for, which is generating text from a prompt.
### How to use
You can use this model directly with a pipeline for text generation. This example generates a different sequence each time it's run:
```py
>>> from transformers import pipeline
>>> generator = pipeline('text-generation', model='EleutherAI/gpt-neo-1.3B')
>>> generator("EleutherAI has", do_sample=True, min_length=50)
[{'generated_text': 'EleutherAI has made a commitment to create new software packages for each of its major clients and has'}]
```
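As noted under Intended Use, the pretrained representations can also be used for feature extraction. The snippet below is a minimal sketch (not part of the original card); the mean pooling over tokens is an illustrative choice, not a prescribed method:
```py
>>> import torch
>>> from transformers import AutoTokenizer, AutoModel
>>> tokenizer = AutoTokenizer.from_pretrained('EleutherAI/gpt-neo-1.3B')
>>> model = AutoModel.from_pretrained('EleutherAI/gpt-neo-1.3B')
>>> inputs = tokenizer("EleutherAI trains open language models.", return_tensors='pt')
>>> with torch.no_grad():
...     hidden = model(**inputs).last_hidden_state  # (batch, seq_len, hidden_size)
>>> features = hidden.mean(dim=1)  # mean-pool token states into one feature vector
>>> features.shape
torch.Size([1, 2048])
```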
### Limitations and Biases
GPT-Neo was trained as an autoregressive language model. This means that its core functionality is taking a string of text and predicting the next token. While language models are widely used for tasks other than this, there are a lot of unknowns with this work.
GPT-Neo was trained on the Pile, a dataset known to contain profanity, lewd, and otherwise abrasive language. Depending on your use case, GPT-Neo may produce socially unacceptable text. See Sections 5 and 6 of the Pile paper for a more detailed analysis of the biases in the Pile.
As with all language models, it is hard to predict in advance how GPT-Neo will respond to particular prompts and offensive content may occur without warning. We recommend having a human curate or filter the outputs before releasing them, both to censor undesirable content and to improve the quality of the results.
## Eval results
### Linguistic Reasoning
| Model and Size | Pile BPB | Pile PPL | Wikitext PPL | Lambada PPL | Lambada Acc | Winogrande | Hellaswag |
| ---------------- | ---------- | ---------- | ------------- | ----------- | ----------- | ---------- | ----------- |
| **GPT-Neo 1.3B** | **0.7527** | **6.159** | **13.10** | **7.498** | **57.23%** | **55.01%** | **38.66%** |
| GPT-2 1.5B | 1.0468 | ----- | 17.48 | 10.634 | 51.21% | 59.40% | 40.03% |
| GPT-Neo 2.7B | 0.7165 | 5.646 | 11.39 | 5.626 | 62.22% | 56.50% | 42.73% |
| GPT-3 Ada | 0.9631 | ----- | ----- | 9.954 | 51.60% | 52.90% | 35.93% |
### Physical and Scientific Reasoning
| Model and Size | MathQA | PubMedQA | Piqa |
| ---------------- | ---------- | ---------- | ----------- |
| **GPT-Neo 1.3B** | **24.05%** | **54.40%** | **71.11%** |
| GPT-2 1.5B | 23.64% | 58.33% | 70.78% |
| GPT-Neo 2.7B | 24.72% | 57.54% | 72.14% |
| GPT-3 Ada | 24.29% | 52.80% | 68.88% |
### Down-Stream Applications
TBD
### BibTeX entry and citation info
To cite this model, use
```bibtex
@article{gao2020pile,
title={The Pile: An 800GB Dataset of Diverse Text for Language Modeling},
author={Gao, Leo and Biderman, Stella and Black, Sid and Golding, Laurence and Hoppe, Travis and Foster, Charles and Phang, Jason and He, Horace and Thite, Anish and Nabeshima, Noa and others},
journal={arXiv preprint arXiv:2101.00027},
year={2020}
}
```
To cite the codebase that this model was trained with, use
```bibtex
@software{gpt-neo,
author = {Black, Sid and Gao, Leo and Wang, Phil and Leahy, Connor and Biderman, Stella},
title = {{GPT-Neo}: Large Scale Autoregressive Language Modeling with Mesh-Tensorflow},
url = {http://github.com/eleutherai/gpt-neo},
version = {1.0},
year = {2021},
}
```
|
EleutherAI/gpt-neo-125M | 2021-05-20T23:57:27.000Z | [
"pytorch",
"rust",
"gpt_neo",
"causal-lm",
"en",
"dataset:the Pile",
"transformers",
"text generation",
"the Pile",
"license:apache-2.0",
"text-generation"
] | text-generation | [
".gitattributes",
"README.md",
"config.json",
"merges.txt",
"pytorch_model.bin",
"rust_model.ot",
"special_tokens_map.json",
"tokenizer_config.json",
"vocab.json"
] | EleutherAI | 22,648 | transformers | ---
language:
- en
tags:
- text generation
- pytorch
- the Pile
- causal-lm
license: apache-2.0
datasets:
- the Pile
---
# GPT-Neo 125M
## Model Description
GPT-Neo 125M is a transformer model designed using EleutherAI's replication of the GPT-3 architecture. GPT-Neo refers to the class of models, while 125M represents the number of parameters of this particular pre-trained model.
## Training data
GPT-Neo 125M was trained on the Pile, a large scale curated dataset created by EleutherAI for the purpose of training this model.
## Training procedure
This model was trained on the Pile for 300 billion tokens over 572,300 steps. It was trained as an autoregressive language model, using cross-entropy loss.
## Intended Use and Limitations
Through this training, the model learns an inner representation of the English language that can then be used to extract features useful for downstream tasks. The model is, however, best at what it was pretrained for, which is generating text from a prompt.
### How to use
You can use this model directly with a pipeline for text generation. This example generates a different sequence each time it's run:
```py
>>> from transformers import pipeline
>>> generator = pipeline('text-generation', model='EleutherAI/gpt-neo-125M')
>>> generator("EleutherAI has", do_sample=True, min_length=50)
[{'generated_text': 'EleutherAI has made a commitment to create new software packages for each of its major clients and has'}]
```
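For finer control over decoding than the pipeline exposes, the model can also be loaded explicitly and sampled with `generate`. This is a minimal sketch; the sampling settings below are illustrative assumptions, not values from the original card:
```py
>>> from transformers import AutoTokenizer, AutoModelForCausalLM
>>> tokenizer = AutoTokenizer.from_pretrained('EleutherAI/gpt-neo-125M')
>>> model = AutoModelForCausalLM.from_pretrained('EleutherAI/gpt-neo-125M')
>>> inputs = tokenizer("EleutherAI has", return_tensors='pt')
>>> output_ids = model.generate(**inputs, do_sample=True, max_length=50, top_k=50, temperature=0.9)
>>> print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```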
### Limitations and Biases
GPT-Neo was trained as an autoregressive language model. This means that its core functionality is taking a string of text and predicting the next token. While language models are widely used for tasks other than this, there are a lot of unknowns with this work.
GPT-Neo was trained on the Pile, a dataset known to contain profanity, lewd, and otherwise abrasive language. Depending on your use case, GPT-Neo may produce socially unacceptable text. See Sections 5 and 6 of the Pile paper for a more detailed analysis of the biases in the Pile.
As with all language models, it is hard to predict in advance how GPT-Neo will respond to particular prompts and offensive content may occur without warning. We recommend having a human curate or filter the outputs before releasing them, both to censor undesirable content and to improve the quality of the results.
## Eval results
TBD
### Down-Stream Applications
TBD
### BibTeX entry and citation info
To cite this model, use
```bibtex
@article{gao2020pile,
title={The Pile: An 800GB Dataset of Diverse Text for Language Modeling},
author={Gao, Leo and Biderman, Stella and Black, Sid and Golding, Laurence and Hoppe, Travis and Foster, Charles and Phang, Jason and He, Horace and Thite, Anish and Nabeshima, Noa and others},
journal={arXiv preprint arXiv:2101.00027},
year={2020}
}
```
To cite the codebase that this model was trained with, use
```bibtex
@software{gpt-neo,
author = {Black, Sid and Gao, Leo and Wang, Phil and Leahy, Connor and Biderman, Stella},
title = {{GPT-Neo}: Large Scale Autoregressive Language Modeling with Mesh-Tensorflow},
url = {http://github.com/eleutherai/gpt-neo},
version = {1.0},
year = {2021},
}
```
|
EleutherAI/gpt-neo-2.7B | 2021-05-21T00:00:44.000Z | [
"pytorch",
"rust",
"gpt_neo",
"causal-lm",
"en",
"dataset:the Pile",
"transformers",
"text generation",
"the Pile",
"license:apache-2.0",
"text-generation"
] | text-generation | [
".gitattributes",
"README.md",
"config.json",
"merges.txt",
"pytorch_model.bin",
"rust_model.ot",
"special_tokens_map.json",
"tokenizer_config.json",
"vocab.json"
] | EleutherAI | 99,979 | transformers | ---
language:
- en
tags:
- text generation
- pytorch
- the Pile
- causal-lm
license: apache-2.0
datasets:
- the Pile
---
# GPT-Neo 2.7B
## Model Description
GPT-Neo 2.7B is a transformer model designed using EleutherAI's replication of the GPT-3 architecture. GPT-Neo refers to the class of models, while 2.7B represents the number of parameters of this particular pre-trained model.
## Training data
GPT-Neo 2.7B was trained on the Pile, a large scale curated dataset created by EleutherAI for the purpose of training this model.
## Training procedure
This model was trained for 420 billion tokens over 400,000 steps. It was trained as an autoregressive language model, using cross-entropy loss.
## Intended Use and Limitations
Through this training, the model learns an inner representation of the English language that can then be used to extract features useful for downstream tasks. The model is, however, best at what it was pretrained for, which is generating text from a prompt.
### How to use
You can use this model directly with a pipeline for text generation. This example generates a different sequence each time it's run:
```py
>>> from transformers import pipeline
>>> generator = pipeline('text-generation', model='EleutherAI/gpt-neo-2.7B')
>>> generator("EleutherAI has", do_sample=True, min_length=50)
[{'generated_text': 'EleutherAI has made a commitment to create new software packages for each of its major clients and has'}]
```
### Limitations and Biases
GPT-Neo was trained as an autoregressive language model. This means that its core functionality is taking a string of text and predicting the next token. While language models are widely used for tasks other than this, there are a lot of unknowns with this work.
GPT-Neo was trained on the Pile, a dataset known to contain profanity, lewd, and otherwise abrasive language. Depending on your use case, GPT-Neo may produce socially unacceptable text. See Sections 5 and 6 of the Pile paper for a more detailed analysis of the biases in the Pile.
As with all language models, it is hard to predict in advance how GPT-Neo will respond to particular prompts and offensive content may occur without warning. We recommend having a human curate or filter the outputs before releasing them, both to censor undesirable content and to improve the quality of the results.
## Eval results
All evaluations were done using our [evaluation harness](https://github.com/EleutherAI/lm-evaluation-harness). Some results for GPT-2 and GPT-3 are inconsistent with the values reported in the respective papers. We are currently looking into why, and would greatly appreciate feedback and further testing of our eval harness. If you would like to contribute evaluations you have done, please reach out on our [Discord](https://discord.gg/vtRgjbM).
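The Pile BPB and perplexity numbers below come from the eval harness. As a rough illustration only (not the harness itself, and not expected to reproduce the table exactly), the perplexity of a text under the model can be sketched with plain `transformers`:
```py
>>> import torch
>>> from transformers import AutoTokenizer, AutoModelForCausalLM
>>> tokenizer = AutoTokenizer.from_pretrained('EleutherAI/gpt-neo-2.7B')
>>> model = AutoModelForCausalLM.from_pretrained('EleutherAI/gpt-neo-2.7B')
>>> text = "EleutherAI trained GPT-Neo on the Pile."  # substitute a longer evaluation text
>>> enc = tokenizer(text, return_tensors='pt')
>>> with torch.no_grad():
...     loss = model(enc.input_ids, labels=enc.input_ids).loss  # mean per-token negative log-likelihood
>>> print(torch.exp(loss))  # perplexity of the text under the model
```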
### Linguistic Reasoning
| Model and Size | Pile BPB | Pile PPL | Wikitext PPL | Lambada PPL | Lambada Acc | Winogrande | Hellaswag |
| ---------------- | ---------- | ---------- | ------------- | ----------- | ----------- | ---------- | ----------- |
| GPT-Neo 1.3B | 0.7527 | 6.159 | 13.10 | 7.498 | 57.23% | 55.01% | 38.66% |
| GPT-2 1.5B | 1.0468 | ----- | 17.48 | 10.634 | 51.21% | 59.40% | 40.03% |
| **GPT-Neo 2.7B** | **0.7165** | **5.646** | **11.39** | **5.626** | **62.22%** | **56.50%** | **42.73%** |
| GPT-3 Ada | 0.9631 | ----- | ----- | 9.954 | 51.60% | 52.90% | 35.93% |
### Physical and Scientific Reasoning
| Model and Size | MathQA | PubMedQA | Piqa |
| ---------------- | ---------- | ---------- | ----------- |
| GPT-Neo 1.3B | 24.05% | 54.40% | 71.11% |
| GPT-2 1.5B | 23.64% | 58.33% | 70.78% |
| **GPT-Neo 2.7B** | **24.72%** | **57.54%** | **72.14%** |
| GPT-3 Ada | 24.29% | 52.80% | 68.88% |
### Down-Stream Applications
TBD
### BibTeX entry and citation info
To cite this model, use
```bibtex
@article{gao2020pile,
title={The Pile: An 800GB Dataset of Diverse Text for Language Modeling},
author={Gao, Leo and Biderman, Stella and Black, Sid and Golding, Laurence and Hoppe, Travis and Foster, Charles and Phang, Jason and He, Horace and Thite, Anish and Nabeshima, Noa and others},
journal={arXiv preprint arXiv:2101.00027},
year={2020}
}
```
To cite the codebase that this model was trained with, use
```bibtex
@software{gpt-neo,
author = {Black, Sid and Gao, Leo and Wang, Phil and Leahy, Connor and Biderman, Stella},
title = {{GPT-Neo}: Large Scale Autoregressive Language Modeling with Mesh-Tensorflow},
url = {http://github.com/eleutherai/gpt-neo},
version = {1.0},
year = {2021},
}
```
|
Elliejone/Ellie | 2021-06-02T22:17:14.000Z | [] | [
".gitattributes"
] | Elliejone | 0 | |||
Elluran/Hate_speech_detector | 2021-05-20T11:49:13.000Z | [
"pytorch",
"jax",
"roberta",
"text-classification",
"transformers"
] | text-classification | [
".gitattributes",
"config.json",
"flax_model.msgpack",
"merges.txt",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"vocab.json"
] | Elluran | 13 | transformers | |
Emi2160/DialoGPT-small-Neku | 2021-06-03T14:04:12.000Z | [
"pytorch",
"gpt2",
"lm-head",
"causal-lm",
"transformers",
"conversational",
"text-generation"
] | conversational | [
".gitattributes",
"README.md",
"config.json",
"merges.txt",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer.json",
"tokenizer_config.json",
"vocab.json"
] | Emi2160 | 48 | transformers | ---
tags:
- conversational
---
# My Awesome Model |
Emirhan/51k-finetuned-bert-model | 2021-06-04T17:35:07.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
] | text-classification | [
".gitattributes",
"config.json",
"pytorch_model.bin"
] | Emirhan | 12 | transformers | |
Enutodu/QnA | 2021-04-21T04:23:52.000Z | [] | [
".gitattributes"
] | Enutodu | 0 | |||
ErykWdowiak/GPTalian | 2021-05-21T09:42:05.000Z | [
"pytorch",
"jax",
"gpt2",
"lm-head",
"causal-lm",
"en",
"it",
"scn",
"nap",
"transformers",
"exbert",
"license:apache-2.0",
"text-generation"
] | text-generation | [
".gitattributes",
"README.md",
"config.json",
"flax_model.msgpack",
"merges.txt",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"vocab.json",
"scripts/01_prepare-bonaffini.pl",
"scripts/02_initialize-gptalian.py",
"scripts/03_pretrain-gptalian.sh",
"scripts/04_play-w-gptalian.ipynb",
"scripts/LICENSE",
"scripts/run_clm.py"
] | ErykWdowiak | 36 | transformers | ---
language:
- en
- it
- scn
- nap
tags:
- exbert
- gpt2
license: apache-2.0
---
# GPTalian
This is a GPT2 model of Italian regional languages trained on [collections of Italian "dialect poetry"](http://dialectpoetry.com) by Luigi Bonaffini.
This is a multilingual model. Italians use the word "dialect" to describe their regional languages, but they are separate languages. And there's a lot of English in this dataset too.
The challenge of this project is to train a model to write the languages of Italy.
For those who do not know Italian, here's some (lowercase) text that you can type into the API box:
- oggi si parla il dialetto
- la sua poesia viene di
- ma non sempre trova
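These prompts can also be fed to the model programmatically. A minimal sketch (the sampling arguments are illustrative assumptions, not part of the original card):
```python
from transformers import pipeline

# Load the GPTalian checkpoint for text generation
generator = pipeline('text-generation', model='ErykWdowiak/GPTalian')

# Try one of the suggested prompts (lowercase, as noted above)
print(generator("oggi si parla il dialetto", do_sample=True, max_length=60)[0]['generated_text'])
```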
|
EthonLee/Lethon202103test001 | 2021-01-03T11:42:46.000Z | [] | [
".gitattributes"
] | EthonLee | 0 | |||
Eunji/kant | 2021-03-29T15:07:45.000Z | [
"tensorboard"
] | [
".gitattributes",
"checkpoint/run1/checkpoint",
"checkpoint/run1/counter",
"checkpoint/run1/encoder.json",
"checkpoint/run1/events.out.tfevents.1615780798.ce057e2444ba",
"checkpoint/run1/hparams.json",
"checkpoint/run1/model-1000.data-00000-of-00001",
"checkpoint/run1/model-1000.index",
"checkpoint/run1/model-1000.meta",
"checkpoint/run1/vocab.bpe"
] | Eunji | 0 | |||
Eunku/KorLangModel | 2021-03-22T15:32:52.000Z | [] | [
".gitattributes"
] | Eunku | 0 | |||
FFZG-cleopatra/bert-emoji-latvian-twitter | 2021-05-18T18:33:26.000Z | [
"pytorch",
"jax",
"bert",
"masked-lm",
"transformers",
"fill-mask"
] | fill-mask | [
".gitattributes",
"config.json",
"flax_model.msgpack",
"optimizer.pt",
"pytorch_model.bin",
"scheduler.pt",
"special_tokens_map.json",
"tokenizer_config.json",
"training_args.bin",
"vocab.txt"
] | FFZG-cleopatra | 43 | transformers | |
FPTAI/velectra-base-discriminator-cased | 2020-09-30T03:52:16.000Z | [
"pytorch",
"electra",
"pretraining",
"transformers"
] | [
".gitattributes",
"config.json",
"pytorch_model.bin",
"vocab.txt"
] | FPTAI | 319 | transformers | ||
FPTAI/vibert-base-cased | 2021-05-19T11:15:49.000Z | [
"pytorch",
"jax",
"bert",
"transformers"
] | [
".gitattributes",
"config.json",
"flax_model.msgpack",
"pytorch_model.bin",
"vocab.txt"
] | FPTAI | 1,247 | transformers | ||
Fabby/gpt2-english-light-novel-titles | 2021-03-08T15:30:42.000Z | [] | [
".gitattributes",
"README.md",
"model/added_tokens.json",
"model/config.json",
"model/merges.txt",
"model/pytorch_model.bin",
"model/special_tokens_map.json",
"model/tokenizer_config.json",
"model/vocab.json"
] | Fabby | 0 | |||
Fang/Titania | 2021-03-25T09:59:13.000Z | [] | [
".gitattributes"
] | Fang | 0 | |||
Fatemah/salamBERT | 2021-01-24T09:10:02.000Z | [] | [
".gitattributes"
] | Fatemah | 0 | |||
FelipeV/bert-base-spanish-uncased-sentiment | 2021-01-21T16:49:27.000Z | [] | [
".gitattributes"
] | FelipeV | 0 | |||
Ferch423/gpt2-small-portuguese-wikipediabio | 2021-05-21T09:42:53.000Z | [
"pytorch",
"jax",
"gpt2",
"lm-head",
"causal-lm",
"pt",
"dataset:wikipedia",
"transformers",
"wikipedia",
"finetuning",
"text-generation"
] | text-generation | [
".gitattributes",
"README.md",
"config.json",
"eval_results_clm.txt",
"flax_model.msgpack",
"merges.txt",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"training_args.bin",
"vocab.json"
] | Ferch423 | 52 | transformers | ---
language: "pt"
tags:
- pt
- wikipedia
- gpt2
- finetuning
datasets:
- wikipedia
widget:
- "André Um"
- "Maria do Santos"
- "Roberto Carlos"
license: "mit"
---
# GPT2-SMALL-PORTUGUESE-WIKIPEDIABIO
This is a fine-tuned version of [gpt2-small-portuguese](https://huggingface.co/pierreguillou/gpt2-small-portuguese) by pierreguillou.
It was trained on a dataset of person abstracts extracted from DBpedia (over 100,000 people's abstracts). The model is intended as a simple and fun experiment for generating text abstracts based on ordinary people's names.
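A minimal usage sketch with one of the widget names above (the generation settings are illustrative assumptions, not part of the original card):
```python
from transformers import pipeline

# Load the fine-tuned Portuguese GPT-2 for biography-style generation
generator = pipeline('text-generation', model='Ferch423/gpt2-small-portuguese-wikipediabio')

# Prompt with a person's name, as in the widget examples
print(generator("Roberto Carlos", do_sample=True, max_length=80)[0]['generated_text'])
``` |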
Fidlobabovic/beta-kvantorium-simple-small | 2021-05-20T11:50:06.000Z | [
"pytorch",
"jax",
"roberta",
"masked-lm",
"transformers",
"fill-mask"
] | fill-mask | [
".gitattributes",
"README.md",
"config (2).json",
"config.json",
"flax_model.msgpack",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer.json",
"tokenizer_config.json",
"training_args (1).bin",
"vocab.txt"
] | Fidlobabovic | 15 | transformers | Beta-kvantorium-simple-small is a RoBERTa transformer model pretrained on a large corpus of Russian Kvantorium data in a self-supervised fashion. This means it was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of publicly available data), using an automatic process to generate inputs and labels from those texts. More precisely, it was pretrained with the following objective:
Automate communication with the Kvantorium community and mentors. |
Fidlobabovic/beta-kvantorium-small | 2021-05-20T11:50:54.000Z | [
"pytorch",
"jax",
"roberta",
"masked-lm",
"transformers",
"fill-mask"
] | fill-mask | [
".gitattributes",
"README.md",
"config.json",
"flax_model.msgpack",
"merges.txt",
"pytorch_model.bin",
"training_args.bin",
"vocab.json"
] | Fidlobabovic | 10 | transformers | Beta-kvantorium-simple-small is a RoBERTa transformer model pretrained on a large corpus of Russian Kvantorium data in a self-supervised fashion. This means it was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of publicly available data), using an automatic process to generate inputs and labels from those texts. More precisely, it was pretrained with the following objective:
Automate communication with the Kvantorium community and mentors.
https://sun9-49.userapi.com/impg/CIJZKA_r9xoLYd47Lvjv_8jyu6epadPyergP3Q/zw3J_E6IlJo.jpg?size=546x385&quality=96&sign=139fa29b864d36958feab4731cc684dc&type=album |
FirmanBr/FirmanBrilianBert | 2021-05-18T18:35:52.000Z | [
"pytorch",
"jax",
"bert",
"masked-lm",
"transformers",
"fill-mask"
] | fill-mask | [
".gitattributes",
"config.json",
"flax_model.msgpack",
"pytorch_model.bin",
"training_args.bin",
"vocab.txt",
"checkpoint-130000/config.json",
"checkpoint-130000/optimizer.pt",
"checkpoint-130000/pytorch_model.bin",
"checkpoint-130000/scheduler.pt",
"checkpoint-130000/trainer_state.json",
"checkpoint-130000/training_args.bin",
"checkpoint-140000/config.json",
"checkpoint-140000/optimizer.pt",
"checkpoint-140000/pytorch_model.bin",
"checkpoint-140000/scheduler.pt",
"checkpoint-140000/trainer_state.json",
"checkpoint-140000/training_args.bin",
"hasil/config.json",
"hasil/pytorch_model.bin",
"hasil/training_args.bin"
] | FirmanBr | 12 | transformers | |
FirmanBr/FirmanIndoLanguageModel | 2021-05-18T18:37:51.000Z | [
"pytorch",
"jax",
"bert",
"question-answering",
"transformers"
] | question-answering | [
".gitattributes",
"config.json",
"eval_results.txt",
"flax_model.msgpack",
"model_args.json",
"optimizer.pt",
"pytorch_model.bin",
"scheduler.pt",
"special_tokens_map.json",
"tokenizer_config.json",
"training_args.bin",
"vocab.txt"
] | FirmanBr | 8 | transformers | |
FirmanBr/chibibot | 2021-05-18T18:39:06.000Z | [
"pytorch",
"jax",
"bert",
"question-answering",
"transformers"
] | question-answering | [
".gitattributes",
"config.json",
"eval_results.txt",
"flax_model.msgpack",
"model_args.json",
"optimizer.pt",
"pytorch_model.bin",
"scheduler.pt",
"special_tokens_map.json",
"tokenizer_config.json",
"training_args.bin",
"vocab.txt"
] | FirmanBr | 9 | transformers | |
For/sheldonbot | 2021-06-02T15:54:07.000Z | [
"pytorch",
"gpt2",
"lm-head",
"causal-lm",
"transformers",
"conversational",
"text-generation"
] | conversational | [
".gitattributes",
"README.md",
"config.json",
"merges.txt",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer.json",
"tokenizer_config.json",
"vocab.json"
] | For | 39 | transformers | ---
tags:
- conversational
---
#
|
Forest/gpt2-fanfic | 2021-05-21T09:44:04.000Z | [
"pytorch",
"jax",
"gpt2",
"lm-head",
"causal-lm",
"transformers",
"text-generation"
] | text-generation | [
".gitattributes",
"config.json",
"flax_model.msgpack",
"merges.txt",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"training_args.bin",
"vocab.json"
] | Forest | 14 | transformers | |
Francesco/dummy | 2021-04-09T14:53:50.000Z | [
"pytorch",
"transformers"
] | [
".gitattributes",
"config.json",
"pytorch_model.bin"
] | Francesco | 8 | transformers | ||
Francesco/resnet18 | 2021-04-09T16:00:44.000Z | [
"pytorch",
"transformers"
] | [
".gitattributes",
"config.json",
"pytorch_model.bin"
] | Francesco | 8 | transformers | ||
Frankie2666/twitter-roberta-base-sentiment | 2021-03-05T21:37:33.000Z | [] | [
".gitattributes"
] | Frankie2666 | 0 | |||
Fred/Cows | 2021-02-23T09:29:17.000Z | [] | [
".gitattributes"
] | Fred | 0 | |||
Froggie/just-testing | 2021-03-09T08:04:21.000Z | [] | [
".gitattributes"
] | Froggie | 0 | |||
Fujitsu/AugCode | 2021-05-20T11:51:49.000Z | [
"pytorch",
"tf",
"jax",
"roberta",
"text-classification",
"en",
"dataset:augmented_codesearchnet",
"transformers",
"license:mit"
] | text-classification | [
".gitattributes",
"README.md",
"config.json",
"flax_model.msgpack",
"merges.txt",
"pytorch_model.bin",
"special_tokens_map.json",
"tf_model.h5",
"tokenizer_config.json",
"vocab.json"
] | Fujitsu | 31 | transformers | ---
inference: false
license: mit
widget:
language:
- en
metrics:
- mrr
datasets:
- augmented_codesearchnet
---
# 🔥 Augmented Code Model 🔥
This is the Augmented Code Model, a fine-tuned version of [CodeBERT](https://huggingface.co/microsoft/codebert-base) for scoring the similarity between a given docstring and code. The model is fine-tuned on the Augmented Code Corpus with ACS=4.
## How to use the model ?
As with other Hugging Face models, you can load the model as follows.
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
tokenizer = AutoTokenizer.from_pretrained("Fujitsu/AugCode")
model = AutoModelForSequenceClassification.from_pretrained("Fujitsu/AugCode")
```
You can then use `model` to infer the similarity between a given docstring and code, as sketched below.
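This is a minimal inference sketch (not from the original card): pairing the docstring and code as a single two-segment input and reading the last-class probability as the similarity score are assumptions about how the classifier is meant to be queried.
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("Fujitsu/AugCode")
model = AutoModelForSequenceClassification.from_pretrained("Fujitsu/AugCode")

docstring = "Return the sum of two numbers."
code = "def add(a, b):\n    return a + b"

# Encode the docstring/code pair as one sequence-pair input
inputs = tokenizer(docstring, code, return_tensors="pt", truncation=True)
with torch.no_grad():
    logits = model(**inputs).logits

# Treat the softmax probability of the last class as a relevance score (assumption)
score = torch.softmax(logits, dim=-1)[0, -1].item()
print(f"similarity score: {score:.3f}")
```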
### Citation
```bibtex
@misc{bahrami2021augcode,
title={AugmentedCode: Examining the Effects of Natural Language Resources in Code Retrieval Models},
author={Mehdi Bahrami, N. C. Shrikanth, Yuji Mizobuchi, Lei Liu, Masahiro Fukuyori, Wei-Peng Chen, Kazuki Munakata},
year={2021},
eprint={TBA},
archivePrefix={TBA},
primaryClass={cs.CL}
}
``` |
Fujitsu/pytorrent | 2021-05-20T11:52:40.000Z | [
"pytorch",
"jax",
"roberta",
"en",
"dataset:pytorrent",
"transformers",
"license:mit"
] | [
".gitattributes",
"README.md",
"config.json",
"flax_model.msgpack",
"merges.txt",
"pytorch_model.bin",
"vocab.json"
] | Fujitsu | 14 | transformers | ---
license: mit
widget:
language:
- en
datasets:
- pytorrent
---
# 🔥 RoBERTa-MLM-based PyTorrent 1M 🔥
Pretrained weights based on the [PyTorrent Dataset](https://github.com/fla-sil/PyTorrent), a curated dataset built from a large collection of official Python packages.
We use the PyTorrent dataset to train a preliminary DistilBERT Masked Language Modeling (MLM) model from scratch. The trained model, along with the dataset, aims to help researchers work easily and efficiently with a large dataset of Python packages, using only five lines of code to load the transformer-based model. We use 1M raw Python scripts from PyTorrent, comprising 12,350,000 lines of code (LOC), to train the model. We also train a byte-level Byte-Pair Encoding (BPE) tokenizer with a vocabulary of 56,000 tokens, truncating lines of code to a length of 50 to save computational resources.
### Training Objective
This model is trained with a Masked Language Model (MLM) objective.
## How to use the model?
```python
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("Fujitsu/pytorrent")
model = AutoModel.from_pretrained("Fujitsu/pytorrent")
``` |
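Once loaded as above, the model can be used, for example, to embed and compare Python snippets. This is a minimal sketch; the mean pooling and cosine-similarity comparison are illustrative choices, not part of the original card.
```python
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("Fujitsu/pytorrent")
model = AutoModel.from_pretrained("Fujitsu/pytorrent")

def embed(snippet: str) -> torch.Tensor:
    """Mean-pool the last hidden states of a Python snippet into one vector."""
    inputs = tokenizer(snippet, return_tensors="pt", truncation=True)
    with torch.no_grad():
        hidden = model(**inputs).last_hidden_state
    return hidden.mean(dim=1).squeeze(0)

a = embed("def add(a, b): return a + b")
b = embed("def total(x, y): return x + y")
print(torch.cosine_similarity(a, b, dim=0).item())
```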
|
Furkan/Furkan | 2021-02-12T09:49:08.000Z | [] | [
".gitattributes"
] | Furkan | 0 | |||
GD/bert-base-uncased-gh | 2020-12-27T15:28:27.000Z | [] | [
".gitattributes"
] | GD | 0 | |||
GD/cq-bert-model-repo | 2021-05-18T18:40:31.000Z | [
"pytorch",
"jax",
"bert",
"text-classification",
"transformers"
] | text-classification | [
".gitattributes",
"config.json",
"flax_model.msgpack",
"optimizer.pt",
"pytorch_model.bin",
"scheduler.pt",
"special_tokens_map.json",
"tokenizer_config.json",
"trainer_state.json",
"training_args.bin",
"vocab.txt",
"checkpoint-3042/config.json",
"checkpoint-3042/optimizer.pt",
"checkpoint-3042/pytorch_model.bin",
"checkpoint-3042/scheduler.pt",
"checkpoint-3042/special_tokens_map.json",
"checkpoint-3042/tokenizer_config.json",
"checkpoint-3042/trainer_state.json",
"checkpoint-3042/training_args.bin",
"checkpoint-3042/vocab.txt"
] | GD | 14 | transformers | |
GD/qqp-bert-model-repo | 2021-03-28T20:28:23.000Z | [] | [
".gitattributes"
] | GD | 0 | |||
GD/qqp_1_glue_cased_ | 2021-03-17T01:25:49.000Z | [] | [
".gitattributes",
"README.md",
"qqp_#1_glue_cased/checkpoint-11371/config.json",
"qqp_#1_glue_cased/checkpoint-11371/optimizer.pt",
"qqp_#1_glue_cased/checkpoint-11371/pytorch_model.bin",
"qqp_#1_glue_cased/checkpoint-11371/scheduler.pt",
"qqp_#1_glue_cased/checkpoint-11371/special_tokens_map.json",
"qqp_#1_glue_cased/checkpoint-11371/tokenizer_config.json",
"qqp_#1_glue_cased/checkpoint-11371/trainer_state.json",
"qqp_#1_glue_cased/checkpoint-11371/training_args.bin",
"qqp_#1_glue_cased/checkpoint-11371/vocab.txt",
"qqp_#1_glue_cased/checkpoint-22742/config.json",
"qqp_#1_glue_cased/checkpoint-22742/optimizer.pt",
"qqp_#1_glue_cased/checkpoint-22742/pytorch_model.bin",
"qqp_#1_glue_cased/checkpoint-22742/scheduler.pt",
"qqp_#1_glue_cased/checkpoint-22742/special_tokens_map.json",
"qqp_#1_glue_cased/checkpoint-22742/tokenizer_config.json",
"qqp_#1_glue_cased/checkpoint-22742/trainer_state.json",
"qqp_#1_glue_cased/checkpoint-22742/training_args.bin",
"qqp_#1_glue_cased/checkpoint-22742/vocab.txt",
"qqp_#1_glue_cased/checkpoint-34113/config.json",
"qqp_#1_glue_cased/checkpoint-34113/optimizer.pt",
"qqp_#1_glue_cased/checkpoint-34113/pytorch_model.bin",
"qqp_#1_glue_cased/checkpoint-34113/scheduler.pt",
"qqp_#1_glue_cased/checkpoint-34113/special_tokens_map.json",
"qqp_#1_glue_cased/checkpoint-34113/tokenizer_config.json",
"qqp_#1_glue_cased/checkpoint-34113/trainer_state.json",
"qqp_#1_glue_cased/checkpoint-34113/training_args.bin",
"qqp_#1_glue_cased/checkpoint-34113/vocab.txt",
"qqp_#1_glue_cased/checkpoint-45484/config.json",
"qqp_#1_glue_cased/checkpoint-45484/optimizer.pt",
"qqp_#1_glue_cased/checkpoint-45484/pytorch_model.bin",
"qqp_#1_glue_cased/checkpoint-45484/scheduler.pt",
"qqp_#1_glue_cased/checkpoint-45484/special_tokens_map.json",
"qqp_#1_glue_cased/checkpoint-45484/tokenizer_config.json",
"qqp_#1_glue_cased/checkpoint-45484/trainer_state.json",
"qqp_#1_glue_cased/checkpoint-45484/training_args.bin",
"qqp_#1_glue_cased/checkpoint-45484/vocab.txt",
"qqp_#1_glue_cased/checkpoint-56855/config.json",
"qqp_#1_glue_cased/checkpoint-56855/optimizer.pt",
"qqp_#1_glue_cased/checkpoint-56855/pytorch_model.bin",
"qqp_#1_glue_cased/checkpoint-56855/scheduler.pt",
"qqp_#1_glue_cased/checkpoint-56855/special_tokens_map.json",
"qqp_#1_glue_cased/checkpoint-56855/tokenizer_config.json",
"qqp_#1_glue_cased/checkpoint-56855/trainer_state.json",
"qqp_#1_glue_cased/checkpoint-56855/training_args.bin",
"qqp_#1_glue_cased/checkpoint-56855/vocab.txt"
] | GD | 0 | |||
GD/qqp_multi_glue_cased_finish_5_epochs_only_qqp_repo | 2021-04-02T13:30:26.000Z | [] | [
".gitattributes"
] | GD | 0 | |||
GHP/Jocker | 2021-05-10T14:53:19.000Z | [] | [
".gitattributes"
] | GHP | 0 | |||
GHP/gpt2-fine-tuned | 2021-05-11T14:20:38.000Z | [] | [
".gitattributes"
] | GHP | 0 | |||
Galuh/wav2vec2-large-xlsr-indonesian | 2021-03-30T10:39:00.000Z | [
"pytorch",
"wav2vec2",
"id",
"dataset:common_voice",
"transformers",
"audio",
"automatic-speech-recognition",
"speech",
"xlsr-fine-tuning-week",
"license:apache-2.0"
] | automatic-speech-recognition | [
".gitattributes",
"README.md",
"config.json",
"preprocessor_config.json",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"training_args.bin",
"vocab.json"
] | Galuh | 10 | transformers | ---
language: id
datasets:
- common_voice
metrics:
- wer
tags:
- audio
- automatic-speech-recognition
- speech
- xlsr-fine-tuning-week
license: apache-2.0
model-index:
- name: XLSR Wav2Vec2 Indonesian by Galuh
results:
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice id
type: common_voice
args: id
metrics:
- name: Test WER
type: wer
value: 21.07
---
# Wav2Vec2-Large-XLSR-Indonesian
This is the model for Wav2Vec2-Large-XLSR-Indonesian, a fine-tuned
[facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53)
model on the [Indonesian Common Voice dataset](https://huggingface.co/datasets/common_voice).
When using this model, make sure that your speech input is sampled at 16kHz.
## Usage
The model can be used directly (without a language model) as follows:
```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
test_dataset = load_dataset("common_voice", "id", split="test[:2%]")
processor = Wav2Vec2Processor.from_pretrained("Galuh/wav2vec2-large-xlsr-indonesian")
model = Wav2Vec2ForCTC.from_pretrained("Galuh/wav2vec2-large-xlsr-indonesian")
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset["sentence"][:2])
```
## Evaluation
The model can be evaluated as follows on the Indonesian test data of Common Voice.
```python
import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import re
test_dataset = load_dataset("common_voice", "id", split="test")
wer = load_metric("wer")
processor = Wav2Vec2Processor.from_pretrained("Galuh/wav2vec2-large-xlsr-indonesian")
model = Wav2Vec2ForCTC.from_pretrained("Galuh/wav2vec2-large-xlsr-indonesian")
model.to("cuda")
chars_to_ignore_regex = '[\,\?\.\!\-\;\:\"\“\%\‘\'\”\�]'
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower()
speech_array, sampling_rate = torchaudio.load(batch["path"])
resampler = torchaudio.transforms.Resample(sampling_rate, 16_000)
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def evaluate(batch):
inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits
pred_ids = torch.argmax(logits, dim=-1)
batch["pred_strings"] = processor.batch_decode(pred_ids)
return batch
result = test_dataset.map(evaluate, batched=True, batch_size=8)
print("WER: {:2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"])))
```
**Test Result**: 18.32 %
## Training
The Common Voice `train`, `validation`, and ... datasets were used for training as well as ... and ... # TODO
The script used for training can be found [here](https://github.com/galuhsahid/wav2vec2-indonesian)
(will be available soon) |
Galuh/xlsr-indonesian | 2021-03-27T12:49:46.000Z | [
"pytorch",
"wav2vec2",
"transformers"
] | [
".DS_Store",
".gitattributes",
"config.json",
"preprocessor_config.json",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"training_args.bin",
"vocab.json"
] | Galuh | 8 | transformers | ||
GanjinZero/UMLSBert_ALL | 2021-05-19T11:16:13.000Z | [
"pytorch",
"bert",
"transformers"
] | [
".gitattributes",
"config.json",
"pytorch_model.bin",
"vocab.txt"
] | GanjinZero | 125 | transformers | ||
GanjinZero/UMLSBert_ENG | 2021-05-19T11:16:23.000Z | [
"pytorch",
"bert",
"transformers"
] | [
".gitattributes",
"config.json",
"pytorch_model.bin",
"vocab.txt"
] | GanjinZero | 890 | transformers | ||
Gantenbein/ADDI-CH-GPT2 | 2021-06-02T13:58:54.000Z | [
"pytorch",
"gpt2",
"lm-head",
"causal-lm",
"transformers",
"text-generation"
] | text-generation | [
".gitattributes",
"README.md",
"all_results.json",
"config.json",
"merges.txt",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"train_results.json",
"trainer_state.json",
"training_args.bin",
"vocab.json"
] | Gantenbein | 35 | transformers | |
Gantenbein/ADDI-CH-RoBERTa | 2021-06-01T13:54:05.000Z | [
"pytorch",
"roberta",
"question-answering",
"transformers"
] | question-answering | [
".gitattributes",
"all_results.json",
"config.json",
"merges.txt",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"train_results.json",
"trainer_state.json",
"training_args.bin",
"vocab.json"
] | Gantenbein | 44 | transformers | |
Gantenbein/ADDI-CH-XLM-R | 2021-06-01T13:55:25.000Z | [
"pytorch",
"xlm-roberta",
"question-answering",
"transformers"
] | question-answering | [
".gitattributes",
"all_results.json",
"config.json",
"pytorch_model.bin",
"sentencepiece.bpe.model",
"special_tokens_map.json",
"tokenizer_config.json",
"train_results.json",
"trainer_state.json",
"training_args.bin"
] | Gantenbein | 14 | transformers | |
Gantenbein/ADDI-DE-GPT2 | 2021-06-01T14:29:34.000Z | [
"pytorch",
"gpt2",
"lm-head",
"causal-lm",
"transformers",
"text-generation"
] | text-generation | [
".gitattributes",
"all_results.json",
"config.json",
"merges.txt",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"train_results.json",
"trainer_state.json",
"training_args.bin",
"vocab.json"
] | Gantenbein | 9 | transformers | |
Gantenbein/ADDI-DE-RoBERTa | 2021-06-01T14:30:17.000Z | [
"pytorch",
"roberta",
"question-answering",
"transformers"
] | question-answering | [
".gitattributes",
"all_results.json",
"config.json",
"merges.txt",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"train_results.json",
"trainer_state.json",
"training_args.bin",
"vocab.json"
] | Gantenbein | 23 | transformers | |
Gantenbein/ADDI-DE-XLM-R | 2021-06-01T14:31:33.000Z | [
"pytorch",
"xlm-roberta",
"question-answering",
"transformers"
] | question-answering | [
".gitattributes",
"all_results.json",
"config.json",
"pytorch_model.bin",
"sentencepiece.bpe.model",
"special_tokens_map.json",
"tokenizer_config.json",
"train_results.json",
"trainer_state.json",
"training_args.bin"
] | Gantenbein | 4 | transformers | |
Gantenbein/ADDI-FI-GPT2 | 2021-06-01T14:11:36.000Z | [
"pytorch",
"gpt2",
"lm-head",
"causal-lm",
"transformers",
"text-generation"
] | text-generation | [
".gitattributes",
"all_results.json",
"config.json",
"merges.txt",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"train_results.json",
"trainer_state.json",
"training_args.bin",
"vocab.json"
] | Gantenbein | 6 | transformers | |
Gantenbein/ADDI-FI-RoBERTa | 2021-06-01T14:12:02.000Z | [
"pytorch",
"roberta",
"question-answering",
"transformers"
] | question-answering | [
".gitattributes",
"all_results.json",
"config.json",
"merges.txt",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"train_results.json",
"trainer_state.json",
"training_args.bin",
"vocab.json"
] | Gantenbein | 7 | transformers | |
Gantenbein/ADDI-FI-XLM-R | 2021-06-01T14:12:53.000Z | [
"pytorch",
"xlm-roberta",
"question-answering",
"transformers"
] | question-answering | [
".gitattributes",
"all_results.json",
"config.json",
"pytorch_model.bin",
"sentencepiece.bpe.model",
"special_tokens_map.json",
"tokenizer_config.json",
"train_results.json",
"trainer_state.json",
"training_args.bin"
] | Gantenbein | 9 | transformers | |
Gantenbein/ADDI-FR-GPT2 | 2021-06-01T14:07:50.000Z | [
"pytorch",
"gpt2",
"lm-head",
"causal-lm",
"transformers",
"text-generation"
] | text-generation | [
".gitattributes",
"all_results.json",
"config.json",
"merges.txt",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"train_results.json",
"trainer_state.json",
"training_args.bin",
"vocab.json"
] | Gantenbein | 79 | transformers | |
Gantenbein/ADDI-FR-RoBERTa | 2021-06-01T14:07:22.000Z | [
"pytorch",
"roberta",
"question-answering",
"transformers"
] | question-answering | [
".gitattributes",
"all_results.json",
"config.json",
"merges.txt",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"train_results.json",
"trainer_state.json",
"training_args.bin",
"vocab.json"
] | Gantenbein | 7 | transformers | |
Gantenbein/ADDI-FR-XLM-R | 2021-06-01T14:06:53.000Z | [
"pytorch",
"roberta",
"question-answering",
"transformers"
] | question-answering | [
".gitattributes",
"all_results.json",
"config.json",
"merges.txt",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"train_results.json",
"trainer_state.json",
"training_args.bin",
"vocab.json"
] | Gantenbein | 5 | transformers | |
Gantenbein/ADDI-IT-GPT2 | 2021-06-01T14:25:36.000Z | [
"pytorch",
"gpt2",
"lm-head",
"causal-lm",
"transformers",
"text-generation"
] | text-generation | [
".gitattributes",
"all_results.json",
"config.json",
"merges.txt",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"train_results.json",
"trainer_state.json",
"training_args.bin",
"vocab.json"
] | Gantenbein | 4 | transformers | |
Gantenbein/ADDI-IT-RoBERTa | 2021-06-01T14:25:12.000Z | [
"pytorch",
"roberta",
"question-answering",
"transformers"
] | question-answering | [
".gitattributes",
"all_results.json",
"config.json",
"merges.txt",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"train_results.json",
"trainer_state.json",
"training_args.bin",
"vocab.json"
] | Gantenbein | 13 | transformers | |
Gantenbein/ADDI-IT-XLM-R | 2021-06-01T14:24:52.000Z | [
"pytorch",
"xlm-roberta",
"question-answering",
"transformers"
] | question-answering | [
".gitattributes",
"all_results.json",
"config.json",
"pytorch_model.bin",
"sentencepiece.bpe.model",
"special_tokens_map.json",
"tokenizer_config.json",
"train_results.json",
"trainer_state.json",
"training_args.bin"
] | Gantenbein | 9 | transformers | |
Gastron/asr-crdnn-librispeech | 2021-02-26T15:23:04.000Z | [
"en",
"dataset:librispeech",
"ASR",
"CTC",
"Attention",
"pytorch",
"license:apache-2.0"
] | [
".gitattributes",
"README.md",
"asr.ckpt",
"hyperparams.yaml",
"lm.ckpt",
"normalizer.ckpt",
"tokenizer.ckpt"
] | Gastron | 7 | ---
language: "en"
thumbnail:
tags:
- ASR
- CTC
- Attention
- pytorch
license: "apache-2.0"
datasets:
- librispeech
metrics:
- wer
- cer
---
# CRDNN with CTC/Attention and RNNLM trained on LibriSpeech
This repository provides all the necessary tools to perform automatic speech
recognition from an end-to-end system pretrained on LibriSpeech (EN) within
SpeechBrain. For a better experience, we encourage you to learn more about
[SpeechBrain](https://speechbrain.github.io). The performance of the given ASR model is:
| Release | hyperparams file | Test WER | Model link | GPUs |
|:-------------:|:---------------------------:| -----:| -----:| --------:|
| 20-05-22 | BPE_1000.yaml | 3.08 | Not Available | 1xV100 32GB |
| 20-05-22 | BPE_5000.yaml | 2.89 | Not Available | 1xV100 32GB |
## Pipeline description
This ASR system is composed of 3 different but linked blocks:
1. Tokenizer (unigram) that transforms words into subword units, trained on
the training transcriptions of LibriSpeech.
2. Neural language model (RNNLM) trained on the full 10M words dataset.
3. Acoustic model (CRDNN + CTC/Attention). The CRDNN architecture is made of
N blocks of convolutional neural networks with normalisation and pooling on the
frequency domain. Then, a bidirectional LSTM is connected to a final DNN to obtain
the final acoustic representation that is given to the CTC and attention decoders.
## Intended uses & limitations
This model has been primarily developed to be run within SpeechBrain as a pretrained ASR model
for the English language. Thanks to the flexibility of SpeechBrain, any of the 3 blocks
detailed above can be extracted and connected to your custom pipeline as long as SpeechBrain is
installed.
## Install SpeechBrain
First of all, please install SpeechBrain with the following command:
```
pip install \\we hide ! SpeechBrain is still private :p
```
Also, for this model, you need SentencePiece. Install with
```
pip install sentencepiece
```
Please notice that we encourage you to read our tutorials and learn more about
[SpeechBrain](https://speechbrain.github.io).
### Transcribing your own audio files
```python
from speechbrain.pretrained import EncoderDecoderASR
asr_model = EncoderDecoderASR.from_hparams(source="Gastron/asr-crdnn-librispeech")
asr_model.transcribe_file("path_to_your_file.wav")
```
### Obtaining encoded features
The SpeechBrain EncoderDecoderASR() class also provides an easy way to encode
the speech signal without running the decoding phase by calling
``EncoderDecoderASR.encode_batch()``
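A minimal sketch of that encoding path (not from the original card); loading a single file with `load_audio` and batching it as shown are assumptions about typical SpeechBrain usage:
```python
import torch
from speechbrain.pretrained import EncoderDecoderASR

asr_model = EncoderDecoderASR.from_hparams(source="Gastron/asr-crdnn-librispeech")

# Load one audio file and wrap it as a batch of size 1
signal = asr_model.load_audio("path_to_your_file.wav")
batch = signal.unsqueeze(0)
rel_lengths = torch.tensor([1.0])  # relative lengths within the batch

# Encoder output only, without running the CTC/attention decoders
encoded = asr_model.encode_batch(batch, rel_lengths)
print(encoded.shape)
```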
#### Referencing SpeechBrain
```
@misc{SB2021,
author = {Ravanelli, Mirco and Parcollet, Titouan and Rouhe, Aku and Plantinga, Peter and Rastorgueva, Elena and Lugosch, Loren and Dawalatabad, Nauman and Ju-Chieh, Chou and Heba, Abdel and Grondin, Francois and Aris, William and Liao, Chien-Feng and Cornell, Samuele and Yeh, Sung-Lin and Na, Hwidong and Gao, Yan and Fu, Szu-Wei and Subakan, Cem and De Mori, Renato and Bengio, Yoshua },
title = {SpeechBrain},
year = {2021},
publisher = {GitHub},
journal = {GitHub repository},
howpublished = {\url{https://github.com/speechbrain/speechbrain}},
}
```
|
||
Ge/model_name | 2021-05-08T04:33:20.000Z | [] | [
".gitattributes"
] | Ge | 0 | |||
Geotrend/bert-base-10lang-cased | 2021-05-18T18:42:35.000Z | [
"pytorch",
"tf",
"jax",
"bert",
"masked-lm",
"multilingual",
"dataset:wikipedia",
"transformers",
"license:apache-2.0",
"fill-mask"
] | fill-mask | [
".gitattributes",
"README.md",
"config.json",
"flax_model.msgpack",
"pytorch_model.bin",
"tf_model.h5",
"tokenizer_config.json",
"vocab.txt"
] | Geotrend | 13 | transformers | ---
language: multilingual
datasets: wikipedia
license: apache-2.0
widget:
- text: "Google generated 46 billion [MASK] in revenue."
- text: "Paris is the capital of [MASK]."
- text: "Algiers is the largest city in [MASK]."
- text: "Paris est la [MASK] de la France."
- text: "Paris est la capitale de la [MASK]."
- text: "L'élection américaine a eu [MASK] en novembre 2020."
- text: "تقع سويسرا في [MASK] أوروبا"
- text: "إسمي محمد وأسكن في [MASK]."
---
# bert-base-10lang-cased
We are sharing smaller versions of [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) that handle a custom number of languages.
Unlike [distilbert-base-multilingual-cased](https://huggingface.co/distilbert-base-multilingual-cased), our versions give exactly the same representations produced by the original model which preserves the original accuracy.
This model handles the following languages: english, french, spanish, german, chinese, arabic, russian, portuguese, italian, and urdu. It produces the same representations as [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) while being 22.5% smaller in size.
For more information please visit our paper: [Load What You Need: Smaller Versions of Multilingual BERT](https://www.aclweb.org/anthology/2020.sustainlp-1.16.pdf).
## How to use
```python
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("Geotrend/bert-base-10lang-cased")
model = AutoModel.from_pretrained("Geotrend/bert-base-10lang-cased")
```
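Because this is a masked-language model, it can also be queried through a fill-mask pipeline. A minimal sketch using one of the widget prompts above:
```python
from transformers import pipeline

# Fill-mask query against the 10-language checkpoint
fill_mask = pipeline("fill-mask", model="Geotrend/bert-base-10lang-cased")

for prediction in fill_mask("Paris is the capital of [MASK]."):
    print(prediction["token_str"], round(prediction["score"], 3))
```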
To generate other smaller versions of multilingual transformers please visit [our Github repo](https://github.com/Geotrend-research/smaller-transformers).
### How to cite
```bibtex
@inproceedings{smallermbert,
title={Load What You Need: Smaller Versions of Multilingual BERT},
author={Abdaoui, Amine and Pradel, Camille and Sigel, Grégoire},
booktitle={SustaiNLP / EMNLP},
year={2020}
}
```
## Contact
Please contact amine@geotrend.fr for any question, feedback or request. |
Geotrend/bert-base-15lang-cased | 2021-05-18T18:45:34.000Z | [
"pytorch",
"tf",
"jax",
"bert",
"masked-lm",
"multilingual",
"dataset:wikipedia",
"transformers",
"license:apache-2.0",
"fill-mask"
] | fill-mask | [
".gitattributes",
"README.md",
"config.json",
"flax_model.msgpack",
"pytorch_model.bin",
"tf_model.h5",
"tokenizer_config.json",
"vocab.txt"
] | Geotrend | 73 | transformers | ---
language: multilingual
datasets: wikipedia
license: apache-2.0
widget:
- text: "Google generated 46 billion [MASK] in revenue."
- text: "Paris is the capital of [MASK]."
- text: "Algiers is the largest city in [MASK]."
- text: "Paris est la [MASK] de la France."
- text: "Paris est la capitale de la [MASK]."
- text: "L'élection américaine a eu [MASK] en novembre 2020."
- text: "تقع سويسرا في [MASK] أوروبا"
- text: "إسمي محمد وأسكن في [MASK]."
---
# bert-base-15lang-cased
We are sharing smaller versions of [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) that handle a custom number of languages.
Unlike [distilbert-base-multilingual-cased](https://huggingface.co/distilbert-base-multilingual-cased), our versions give exactly the same representations produced by the original model which preserves the original accuracy.
The measurements below have been computed on a [Google Cloud n1-standard-1 machine (1 vCPU, 3.75 GB)](https://cloud.google.com/compute/docs/machine-types#n1_machine_type):
| Model | Num parameters | Size | Memory | Loading time |
| ------------------------------- | -------------- | -------- | -------- | ------------ |
| bert-base-multilingual-cased | 178 million | 714 MB | 1400 MB | 4.2 sec |
| Geotrend/bert-base-15lang-cased | 141 million | 564 MB | 1098 MB | 3.1 sec |
Handled languages: en, fr, es, de, zh, ar, ru, vi, el, bg, th, tr, hi, ur and sw.
For more information please visit our paper: [Load What You Need: Smaller Versions of Multilingual BERT](https://www.aclweb.org/anthology/2020.sustainlp-1.16.pdf).
## How to use
```python
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("Geotrend/bert-base-15lang-cased")
model = AutoModel.from_pretrained("Geotrend/bert-base-15lang-cased")
```
To generate other smaller versions of multilingual transformers please visit [our Github repo](https://github.com/Geotrend-research/smaller-transformers).
### How to cite
```bibtex
@inproceedings{smallermbert,
title={Load What You Need: Smaller Versions of Multilingual BERT},
author={Abdaoui, Amine and Pradel, Camille and Sigel, Grégoire},
booktitle={SustaiNLP / EMNLP},
year={2020}
}
```
## Contact
Please contact amine@geotrend.fr for any question, feedback or request.
|
Geotrend/bert-base-25lang-cased | 2021-05-18T18:46:59.000Z | [
"pytorch",
"tf",
"jax",
"bert",
"masked-lm",
"multilingual",
"dataset:wikipedia",
"transformers",
"license:apache-2.0",
"fill-mask"
] | fill-mask | [
".gitattributes",
"README.md",
"config.json",
"flax_model.msgpack",
"pytorch_model.bin",
"tf_model.h5",
"tokenizer_config.json",
"vocab.txt"
] | Geotrend | 11 | transformers | ---
language: multilingual
datasets: wikipedia
license: apache-2.0
widget:
- text: "Google generated 46 billion [MASK] in revenue."
- text: "Paris is the capital of [MASK]."
- text: "Algiers is the largest city in [MASK]."
- text: "Paris est la [MASK] de la France."
- text: "Paris est la capitale de la [MASK]."
- text: "L'élection américaine a eu [MASK] en novembre 2020."
- text: "تقع سويسرا في [MASK] أوروبا"
- text: "إسمي محمد وأسكن في [MASK]."
---
# bert-base-25lang-cased
We are sharing smaller versions of [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) that handle a custom number of languages.
Unlike [distilbert-base-multilingual-cased](https://huggingface.co/distilbert-base-multilingual-cased), our versions give exactly the same representations produced by the original model which preserves the original accuracy.
Handled languages: en, fr, es, de, zh, ar, ru, vi, el, bg, th, tr, hi, ur, sw, nl, uk, ro, pt, it, lt, no, pl, da and ja.
For more information please visit our paper: [Load What You Need: Smaller Versions of Multilingual BERT](https://www.aclweb.org/anthology/2020.sustainlp-1.16.pdf).
## How to use
```python
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("Geotrend/bert-base-25lang-cased")
model = AutoModel.from_pretrained("Geotrend/bert-base-25lang-cased")
```
To generate other smaller versions of multilingual transformers please visit [our Github repo](https://github.com/Geotrend-research/smaller-transformers).
### How to cite
```bibtex
@inproceedings{smallermbert,
title={Load What You Need: Smaller Versions of Multilingual BERT},
author={Abdaoui, Amine and Pradel, Camille and Sigel, Grégoire},
booktitle={SustaiNLP / EMNLP},
year={2020}
}
```
## Contact
Please contact amine@geotrend.fr for any question, feedback or request. |
Geotrend/bert-base-ar-cased | 2021-05-18T18:47:56.000Z | [
"pytorch",
"tf",
"jax",
"bert",
"masked-lm",
"ar",
"dataset:wikipedia",
"transformers",
"license:apache-2.0",
"fill-mask"
] | fill-mask | [
".gitattributes",
"README.md",
"config.json",
"flax_model.msgpack",
"pytorch_model.bin",
"tf_model.h5",
"tokenizer_config.json",
"vocab.txt"
] | Geotrend | 24 | transformers | ---
language: ar
datasets: wikipedia
license: apache-2.0
widget:
- text: "تقع سويسرا في [MASK] أوروبا"
- text: "إسمي محمد وأسكن في [MASK]."
---
# bert-base-ar-cased
We are sharing smaller versions of [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) that handle a custom number of languages.
Unlike [distilbert-base-multilingual-cased](https://huggingface.co/distilbert-base-multilingual-cased), our versions give exactly the same representations produced by the original model which preserves the original accuracy.
For more information please visit our paper: [Load What You Need: Smaller Versions of Multilingual BERT](https://www.aclweb.org/anthology/2020.sustainlp-1.16.pdf).
## How to use
```python
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("Geotrend/bert-base-ar-cased")
model = AutoModel.from_pretrained("Geotrend/bert-base-ar-cased")
```
To generate other smaller versions of multilingual transformers please visit [our Github repo](https://github.com/Geotrend-research/smaller-transformers).
### How to cite
```bibtex
@inproceedings{smallermbert,
title={Load What You Need: Smaller Versions of Multilingual BERT},
author={Abdaoui, Amine and Pradel, Camille and Sigel, Grégoire},
booktitle={SustaiNLP / EMNLP},
year={2020}
}
```
## Contact
Please contact amine@geotrend.fr for any question, feedback or request.
|
Geotrend/bert-base-bg-cased | 2021-05-18T18:48:47.000Z | [
"pytorch",
"tf",
"jax",
"bert",
"masked-lm",
"bg",
"dataset:wikipedia",
"transformers",
"license:apache-2.0",
"fill-mask"
] | fill-mask | [
".gitattributes",
"README.md",
"config.json",
"flax_model.msgpack",
"pytorch_model.bin",
"tf_model.h5",
"tokenizer_config.json",
"vocab.txt"
] | Geotrend | 26 | transformers | ---
language: bg
datasets: wikipedia
license: apache-2.0
---
# bert-base-bg-cased
We are sharing smaller versions of [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) that handle a custom number of languages.
Unlike [distilbert-base-multilingual-cased](https://huggingface.co/distilbert-base-multilingual-cased), our versions give exactly the same representations produced by the original model which preserves the original accuracy.
For more information please visit our paper: [Load What You Need: Smaller Versions of Multilingual BERT](https://www.aclweb.org/anthology/2020.sustainlp-1.16.pdf).
## How to use
```python
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("Geotrend/bert-base-bg-cased")
model = AutoModel.from_pretrained("Geotrend/bert-base-bg-cased")
```
To generate other smaller versions of multilingual transformers please visit [our Github repo](https://github.com/Geotrend-research/smaller-transformers).
### How to cite
```bibtex
@inproceedings{smallermbert,
title={Load What You Need: Smaller Versions of Multilingual BERT},
author={Abdaoui, Amine and Pradel, Camille and Sigel, Grégoire},
booktitle={SustaiNLP / EMNLP},
year={2020}
}
```
## Contact
Please contact amine@geotrend.fr for any question, feedback or request.
|
Geotrend/bert-base-da-cased | 2021-05-18T18:49:40.000Z | [
"pytorch",
"tf",
"jax",
"bert",
"masked-lm",
"da",
"dataset:wikipedia",
"transformers",
"license:apache-2.0",
"fill-mask"
] | fill-mask | [
".gitattributes",
"README.md",
"config.json",
"flax_model.msgpack",
"pytorch_model.bin",
"tf_model.h5",
"tokenizer_config.json",
"vocab.txt"
] | Geotrend | 12 | transformers | ---
language: da
datasets: wikipedia
license: apache-2.0
---
# bert-base-da-cased
We are sharing smaller versions of [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) that handle a custom number of languages.
Unlike [distilbert-base-multilingual-cased](https://huggingface.co/distilbert-base-multilingual-cased), our versions produce exactly the same representations as the original model, which preserves the original accuracy.
For more information please visit our paper: [Load What You Need: Smaller Versions of Multilingual BERT](https://www.aclweb.org/anthology/2020.sustainlp-1.16.pdf).
## How to use
```python
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("Geotrend/bert-base-da-cased")
model = AutoModel.from_pretrained("Geotrend/bert-base-da-cased")
```
To generate other smaller versions of multilingual transformers please visit [our Github repo](https://github.com/Geotrend-research/smaller-transformers).
### How to cite
```bibtex
@inproceedings{smallermbert,
title={Load What You Need: Smaller Versions of Multilingual BERT},
author={Abdaoui, Amine and Pradel, Camille and Sigel, Grégoire},
booktitle={SustaiNLP / EMNLP},
year={2020}
}
```
## Contact
Please contact amine@geotrend.fr for any question, feedback or request. |
Geotrend/bert-base-de-cased | 2021-05-18T18:58:49.000Z | [
"pytorch",
"tf",
"jax",
"bert",
"masked-lm",
"de",
"dataset:wikipedia",
"transformers",
"license:apache-2.0",
"fill-mask"
] | fill-mask | [
".gitattributes",
"README.md",
"config.json",
"flax_model.msgpack",
"pytorch_model.bin",
"tf_model.h5",
"tokenizer_config.json",
"vocab.txt"
] | Geotrend | 30 | transformers | ---
language: de
datasets: wikipedia
license: apache-2.0
---
# bert-base-de-cased
We are sharing smaller versions of [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) that handle a custom number of languages.
Unlike [distilbert-base-multilingual-cased](https://huggingface.co/distilbert-base-multilingual-cased), our versions produce exactly the same representations as the original model, which preserves the original accuracy.
For more information please visit our paper: [Load What You Need: Smaller Versions of Multilingual BERT](https://www.aclweb.org/anthology/2020.sustainlp-1.16.pdf).
## How to use
```python
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("Geotrend/bert-base-de-cased")
model = AutoModel.from_pretrained("Geotrend/bert-base-de-cased")
```
To generate other smaller versions of multilingual transformers please visit [our Github repo](https://github.com/Geotrend-research/smaller-transformers).
### How to cite
```bibtex
@inproceedings{smallermbert,
title={Load What You Need: Smaller Versions of Multilingual BERT},
author={Abdaoui, Amine and Pradel, Camille and Sigel, Grégoire},
booktitle={SustaiNLP / EMNLP},
year={2020}
}
```
## Contact
Please contact amine@geotrend.fr for any question, feedback or request.
|
Geotrend/bert-base-el-cased | 2021-05-18T19:00:19.000Z | [
"pytorch",
"tf",
"jax",
"bert",
"masked-lm",
"el",
"dataset:wikipedia",
"transformers",
"license:apache-2.0",
"fill-mask"
] | fill-mask | [
".gitattributes",
"README.md",
"config.json",
"flax_model.msgpack",
"pytorch_model.bin",
"tf_model.h5",
"tokenizer_config.json",
"vocab.txt"
] | Geotrend | 37 | transformers | ---
language: el
datasets: wikipedia
license: apache-2.0
---
# bert-base-el-cased
We are sharing smaller versions of [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) that handle a custom number of languages.
Unlike [distilbert-base-multilingual-cased](https://huggingface.co/distilbert-base-multilingual-cased), our versions produce exactly the same representations as the original model, which preserves the original accuracy.
For more information please visit our paper: [Load What You Need: Smaller Versions of Multilingual BERT](https://www.aclweb.org/anthology/2020.sustainlp-1.16.pdf).
## How to use
```python
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("Geotrend/bert-base-el-cased")
model = AutoModel.from_pretrained("Geotrend/bert-base-el-cased")
```
To generate other smaller versions of multilingual transformers please visit [our Github repo](https://github.com/Geotrend-research/smaller-transformers).
### How to cite
```bibtex
@inproceedings{smallermbert,
title={Load What You Need: Smaller Versions of Multilingual BERT},
author={Abdaoui, Amine and Pradel, Camille and Sigel, Grégoire},
booktitle={SustaiNLP / EMNLP},
year={2020}
}
```
## Contact
Please contact amine@geotrend.fr for any question, feedback or request.
|
Geotrend/bert-base-en-ar-cased | 2021-05-18T19:01:15.000Z | [
"pytorch",
"tf",
"jax",
"bert",
"masked-lm",
"multilingual",
"dataset:wikipedia",
"transformers",
"license:apache-2.0",
"fill-mask"
] | fill-mask | [
".gitattributes",
"README.md",
"config.json",
"flax_model.msgpack",
"pytorch_model.bin",
"tf_model.h5",
"tokenizer_config.json",
"vocab.txt"
] | Geotrend | 19 | transformers | ---
language: multilingual
datasets: wikipedia
license: apache-2.0
widget:
- text: "Google generated 46 billion [MASK] in revenue."
- text: "Paris is the capital of [MASK]."
- text: "Algiers is the largest city in [MASK]."
- text: "تقع سويسرا في [MASK] أوروبا"
- text: "إسمي محمد وأسكن في [MASK]."
---
# bert-base-en-ar-cased
We are sharing smaller versions of [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) that handle a custom number of languages.
Unlike [distilbert-base-multilingual-cased](https://huggingface.co/distilbert-base-multilingual-cased), our versions produce exactly the same representations as the original model, which preserves the original accuracy.
For more information please visit our paper: [Load What You Need: Smaller Versions of Multilingual BERT](https://www.aclweb.org/anthology/2020.sustainlp-1.16.pdf).
## How to use
```python
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("Geotrend/bert-base-en-ar-cased")
model = AutoModel.from_pretrained("Geotrend/bert-base-en-ar-cased")
```
To generate other smaller versions of multilingual transformers please visit [our Github repo](https://github.com/Geotrend-research/smaller-transformers).
### How to cite
```bibtex
@inproceedings{smallermbert,
title={Load What You Need: Smaller Versions of Multilingual BERT},
author={Abdaoui, Amine and Pradel, Camille and Sigel, Grégoire},
booktitle={SustaiNLP / EMNLP},
year={2020}
}
```
## Contact
Please contact amine@geotrend.fr for any question, feedback or request.
|
Geotrend/bert-base-en-bg-cased | 2021-05-18T19:02:31.000Z | [
"pytorch",
"tf",
"jax",
"bert",
"masked-lm",
"multilingual",
"dataset:wikipedia",
"transformers",
"license:apache-2.0",
"fill-mask"
] | fill-mask | [
".gitattributes",
"README.md",
"config.json",
"flax_model.msgpack",
"pytorch_model.bin",
"tf_model.h5",
"tokenizer_config.json",
"vocab.txt"
] | Geotrend | 22 | transformers | ---
language: multilingual
datasets: wikipedia
license: apache-2.0
widget:
- text: "Google generated 46 billion [MASK] in revenue."
- text: "Paris is the capital of [MASK]."
- text: "Algiers is the largest city in [MASK]."
---
# bert-base-en-bg-cased
We are sharing smaller versions of [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) that handle a custom number of languages.
Unlike [distilbert-base-multilingual-cased](https://huggingface.co/distilbert-base-multilingual-cased), our versions produce exactly the same representations as the original model, which preserves the original accuracy.
For more information please visit our paper: [Load What You Need: Smaller Versions of Multilingual BERT](https://www.aclweb.org/anthology/2020.sustainlp-1.16.pdf).
## How to use
```python
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("Geotrend/bert-base-en-bg-cased")
model = AutoModel.from_pretrained("Geotrend/bert-base-en-bg-cased")
```
To generate other smaller versions of multilingual transformers please visit [our Github repo](https://github.com/Geotrend-research/smaller-transformers).
### How to cite
```bibtex
@inproceedings{smallermbert,
title={Load What You Need: Smaller Versions of Multilingual BERT},
author={Abdaoui, Amine and Pradel, Camille and Sigel, Grégoire},
booktitle={SustaiNLP / EMNLP},
year={2020}
}
```
## Contact
Please contact amine@geotrend.fr for any question, feedback or request.
|
Geotrend/bert-base-en-cased | 2021-05-18T19:03:33.000Z | [
"pytorch",
"tf",
"jax",
"bert",
"masked-lm",
"en",
"dataset:wikipedia",
"transformers",
"license:apache-2.0",
"fill-mask"
] | fill-mask | [
".gitattributes",
"README.md",
"config.json",
"flax_model.msgpack",
"pytorch_model.bin",
"tf_model.h5",
"tokenizer_config.json",
"vocab.txt"
] | Geotrend | 51 | transformers | ---
language: en
datasets: wikipedia
license: apache-2.0
widget:
- text: "Google generated 46 billion [MASK] in revenue."
- text: "Paris is the capital of [MASK]."
- text: "Algiers is the largest city in [MASK]."
---
# bert-base-en-cased
We are sharing smaller versions of [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) that handle a custom number of languages.
Unlike [distilbert-base-multilingual-cased](https://huggingface.co/distilbert-base-multilingual-cased), our versions produce exactly the same representations as the original model, which preserves the original accuracy.
For more information please visit our paper: [Load What You Need: Smaller Versions of Multilingual BERT](https://www.aclweb.org/anthology/2020.sustainlp-1.16.pdf).
## How to use
```python
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("Geotrend/bert-base-en-cased")
model = AutoModel.from_pretrained("Geotrend/bert-base-en-cased")
```
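To make the claim about identical representations concrete, the following sketch encodes one English sentence with both this model and the original `bert-base-multilingual-cased` and compares the resulting [CLS] hidden states. The `sentence_embedding` helper is hypothetical, and exact agreement is only expected when the sentence tokenizes identically under both vocabularies, as the paper describes.
```python
import torch
from transformers import AutoTokenizer, AutoModel

def sentence_embedding(model_name: str, text: str) -> torch.Tensor:
# Hypothetical helper: return the [CLS] hidden state of `text` under `model_name`.
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModel.from_pretrained(model_name)
inputs = tokenizer(text, return_tensors="pt")
with torch.no_grad():
outputs = model(**inputs)
return outputs.last_hidden_state[:, 0]

text = "Paris is the capital of France."
small = sentence_embedding("Geotrend/bert-base-en-cased", text)
full = sentence_embedding("bert-base-multilingual-cased", text)
# Expected to print True when the tokenization matches, per the paper's claim.
print(torch.allclose(small, full, atol=1e-5))
```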
To generate other smaller versions of multilingual transformers please visit [our Github repo](https://github.com/Geotrend-research/smaller-transformers).
### How to cite
```bibtex
@inproceedings{smallermbert,
title={Load What You Need: Smaller Versions of Multilingual BERT},
author={Abdaoui, Amine and Pradel, Camille and Sigel, Grégoire},
booktitle={SustaiNLP / EMNLP},
year={2020}
}
```
## Contact
Please contact amine@geotrend.fr for any question, feedback or request.
|
Geotrend/bert-base-en-da-cased | 2021-05-18T19:04:34.000Z | [
"pytorch",
"tf",
"jax",
"bert",
"masked-lm",
"multilingual",
"dataset:wikipedia",
"transformers",
"license:apache-2.0",
"fill-mask"
] | fill-mask | [
".gitattributes",
"README.md",
"config.json",
"flax_model.msgpack",
"pytorch_model.bin",
"tf_model.h5",
"tokenizer_config.json",
"vocab.txt"
] | Geotrend | 11 | transformers | ---
language: multilingual
datasets: wikipedia
license: apache-2.0
---
# bert-base-en-da-cased
We are sharing smaller versions of [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) that handle a custom number of languages.
Unlike [distilbert-base-multilingual-cased](https://huggingface.co/distilbert-base-multilingual-cased), our versions produce exactly the same representations as the original model, which preserves the original accuracy.
For more information please visit our paper: [Load What You Need: Smaller Versions of Multilingual BERT](https://www.aclweb.org/anthology/2020.sustainlp-1.16.pdf).
## How to use
```python
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("Geotrend/bert-base-en-da-cased")
model = AutoModel.from_pretrained("Geotrend/bert-base-en-da-cased")
```
To generate other smaller versions of multilingual transformers please visit [our Github repo](https://github.com/Geotrend-research/smaller-transformers).
### How to cite
```bibtex
@inproceedings{smallermbert,
title={Load What You Need: Smaller Versions of Multilingual BERT},
author={Abdaoui, Amine and Pradel, Camille and Sigel, Grégoire},
booktitle={SustaiNLP / EMNLP},
year={2020}
}
```
## Contact
Please contact amine@geotrend.fr for any question, feedback or request. |
Geotrend/bert-base-en-de-cased | 2021-05-18T19:05:53.000Z | [
"pytorch",
"tf",
"jax",
"bert",
"masked-lm",
"multilingual",
"dataset:wikipedia",
"transformers",
"license:apache-2.0",
"fill-mask"
] | fill-mask | [
".gitattributes",
"README.md",
"config.json",
"flax_model.msgpack",
"pytorch_model.bin",
"tf_model.h5",
"tokenizer_config.json",
"vocab.txt"
] | Geotrend | 263 | transformers | ---
language: multilingual
datasets: wikipedia
license: apache-2.0
widget:
- text: "Google generated 46 billion [MASK] in revenue."
- text: "Paris is the capital of [MASK]."
- text: "Algiers is the largest city in [MASK]."
---
# bert-base-en-de-cased
We are sharing smaller versions of [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) that handle a custom number of languages.
Unlike [distilbert-base-multilingual-cased](https://huggingface.co/distilbert-base-multilingual-cased), our versions produce exactly the same representations as the original model, which preserves the original accuracy.
For more information please visit our paper: [Load What You Need: Smaller Versions of Multilingual BERT](https://www.aclweb.org/anthology/2020.sustainlp-1.16.pdf).
## How to use
```python
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("Geotrend/bert-base-en-de-cased")
model = AutoModel.from_pretrained("Geotrend/bert-base-en-de-cased")
```
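To see where the size reduction comes from, one can compare vocabulary sizes directly. The sketch below is only an illustration: it loads both tokenizers and prints their lengths, where the original mBERT vocabulary has roughly 119k entries and the reduced English–German vocabulary should be substantially smaller.
```python
from transformers import AutoTokenizer
# Illustrative check of the reduced vocabulary; exact numbers depend on the released vocab files.
small = AutoTokenizer.from_pretrained("Geotrend/bert-base-en-de-cased")
full = AutoTokenizer.from_pretrained("bert-base-multilingual-cased")
print(len(small), len(full))  # reduced vocab vs. the full mBERT vocab (~119k tokens)
```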
To generate other smaller versions of multilingual transformers please visit [our Github repo](https://github.com/Geotrend-research/smaller-transformers).
### How to cite
```bibtex
@inproceedings{smallermbert,
title={Load What You Need: Smaller Versions of Multilingual BERT},
author={Abdaoui, Amine and Pradel, Camille and Sigel, Grégoire},
booktitle={SustaiNLP / EMNLP},
year={2020}
}
```
## Contact
Please contact amine@geotrend.fr for any question, feedback or request.
|
Geotrend/bert-base-en-el-cased | 2021-05-18T19:06:56.000Z | [
"pytorch",
"tf",
"jax",
"bert",
"masked-lm",
"multilingual",
"dataset:wikipedia",
"transformers",
"license:apache-2.0",
"fill-mask"
] | fill-mask | [
".gitattributes",
"README.md",
"config.json",
"flax_model.msgpack",
"pytorch_model.bin",
"tf_model.h5",
"tokenizer_config.json",
"vocab.txt"
] | Geotrend | 24 | transformers | ---
language: multilingual
datasets: wikipedia
license: apache-2.0
widget:
- text: "Google generated 46 billion [MASK] in revenue."
- text: "Paris is the capital of [MASK]."
- text: "Algiers is the largest city in [MASK]."
---
# bert-base-en-el-cased
We are sharing smaller versions of [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) that handle a custom number of languages.
Unlike [distilbert-base-multilingual-cased](https://huggingface.co/distilbert-base-multilingual-cased), our versions produce exactly the same representations as the original model, which preserves the original accuracy.
For more information please visit our paper: [Load What You Need: Smaller Versions of Multilingual BERT](https://www.aclweb.org/anthology/2020.sustainlp-1.16.pdf).
## How to use
```python
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("Geotrend/bert-base-en-el-cased")
model = AutoModel.from_pretrained("Geotrend/bert-base-en-el-cased")
```
To generate other smaller versions of multilingual transformers please visit [our Github repo](https://github.com/Geotrend-research/smaller-transformers).
### How to cite
```bibtex
@inproceedings{smallermbert,
title={Load What You Need: Smaller Versions of Multilingual BERT},
author={Abdaoui, Amine and Pradel, Camille and Sigel, Grégoire},
booktitle={SustaiNLP / EMNLP},
year={2020}
}
```
## Contact
Please contact amine@geotrend.fr for any question, feedback or request.
|
Geotrend/bert-base-en-el-ru-cased | 2021-05-18T19:07:56.000Z | [
"pytorch",
"tf",
"jax",
"bert",
"masked-lm",
"multilingual",
"dataset:wikipedia",
"transformers",
"license:apache-2.0",
"fill-mask"
] | fill-mask | [
".gitattributes",
"README.md",
"config.json",
"flax_model.msgpack",
"pytorch_model.bin",
"tf_model.h5",
"tokenizer_config.json",
"vocab.txt"
] | Geotrend | 10 | transformers | ---
language: multilingual
datasets: wikipedia
license: apache-2.0
---
# bert-base-en-el-ru-cased
We are sharing smaller versions of [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) that handle a custom number of languages.
Unlike [distilbert-base-multilingual-cased](https://huggingface.co/distilbert-base-multilingual-cased), our versions produce exactly the same representations as the original model, which preserves the original accuracy.
For more information please visit our paper: [Load What You Need: Smaller Versions of Multilingual BERT](https://www.aclweb.org/anthology/2020.sustainlp-1.16.pdf).
## How to use
```python
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("Geotrend/bert-base-en-el-ru-cased")
model = AutoModel.from_pretrained("Geotrend/bert-base-en-el-ru-cased")
```
To generate other smaller versions of multilingual transformers please visit [our Github repo](https://github.com/Geotrend-research/smaller-transformers).
### How to cite
```bibtex
@inproceedings{smallermbert,
title={Load What You Need: Smaller Versions of Multilingual BERT},
author={Abdaoui, Amine and Pradel, Camille and Sigel, Grégoire},
booktitle={SustaiNLP / EMNLP},
year={2020}
}
```
## Contact
Please contact amine@geotrend.fr for any question, feedback or request. |
Geotrend/bert-base-en-es-cased | 2021-05-18T19:08:56.000Z | [
"pytorch",
"tf",
"jax",
"bert",
"masked-lm",
"multilingual",
"dataset:wikipedia",
"transformers",
"license:apache-2.0",
"fill-mask"
] | fill-mask | [
".gitattributes",
"README.md",
"config.json",
"flax_model.msgpack",
"pytorch_model.bin",
"tf_model.h5",
"tokenizer_config.json",
"vocab.txt"
] | Geotrend | 36 | transformers | ---
language: multilingual
datasets: wikipedia
license: apache-2.0
widget:
- text: "Google generated 46 billion [MASK] in revenue."
- text: "Paris is the capital of [MASK]."
- text: "Algiers is the largest city in [MASK]."
---
# bert-base-en-es-cased
We are sharing smaller versions of [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) that handle a custom number of languages.
Unlike [distilbert-base-multilingual-cased](https://huggingface.co/distilbert-base-multilingual-cased), our versions produce exactly the same representations as the original model, which preserves the original accuracy.
For more information please visit our paper: [Load What You Need: Smaller Versions of Multilingual BERT](https://www.aclweb.org/anthology/2020.sustainlp-1.16.pdf).
## How to use
```python
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("Geotrend/bert-base-en-es-cased")
model = AutoModel.from_pretrained("Geotrend/bert-base-en-es-cased")
```
To generate other smaller versions of multilingual transformers please visit [our Github repo](https://github.com/Geotrend-research/smaller-transformers).
### How to cite
```bibtex
@inproceedings{smallermbert,
title={Load What You Need: Smaller Versions of Multilingual BERT},
author={Abdaoui, Amine and Pradel, Camille and Sigel, Grégoire},
booktitle={SustaiNLP / EMNLP},
year={2020}
}
```
## Contact
Please contact amine@geotrend.fr for any question, feedback or request.
|
Geotrend/bert-base-en-es-it-cased | 2021-05-18T19:10:03.000Z | [
"pytorch",
"tf",
"jax",
"bert",
"masked-lm",
"multilingual",
"dataset:wikipedia",
"transformers",
"license:apache-2.0",
"fill-mask"
] | fill-mask | [
".gitattributes",
"README.md",
"config.json",
"flax_model.msgpack",
"pytorch_model.bin",
"tf_model.h5",
"tokenizer_config.json",
"vocab.txt"
] | Geotrend | 10 | transformers | ---
language: multilingual
datasets: wikipedia
license: apache-2.0
---
# bert-base-en-es-it-cased
We are sharing smaller versions of [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) that handle a custom number of languages.
Unlike [distilbert-base-multilingual-cased](https://huggingface.co/distilbert-base-multilingual-cased), our versions produce exactly the same representations as the original model, which preserves the original accuracy.
For more information please visit our paper: [Load What You Need: Smaller Versions of Multilingual BERT](https://www.aclweb.org/anthology/2020.sustainlp-1.16.pdf).
## How to use
```python
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("Geotrend/bert-base-en-es-it-cased")
model = AutoModel.from_pretrained("Geotrend/bert-base-en-es-it-cased")
```
To generate other smaller versions of multilingual transformers please visit [our Github repo](https://github.com/Geotrend-research/smaller-transformers).
### How to cite
```bibtex
@inproceedings{smallermbert,
title={Load What You Need: Smaller Versions of Multilingual BERT},
author={Abdaoui, Amine and Pradel, Camille and Sigel, Grégoire},
booktitle={SustaiNLP / EMNLP},
year={2020}
}
```
## Contact
Please contact amine@geotrend.fr for any question, feedback or request. |
Geotrend/bert-base-en-es-pt-cased | 2021-05-18T19:11:40.000Z | [
"pytorch",
"tf",
"jax",
"bert",
"masked-lm",
"multilingual",
"dataset:wikipedia",
"transformers",
"license:apache-2.0",
"fill-mask"
] | fill-mask | [
".gitattributes",
"README.md",
"config.json",
"flax_model.msgpack",
"pytorch_model.bin",
"tf_model.h5",
"tokenizer_config.json",
"vocab.txt"
] | Geotrend | 9 | transformers | ---
language: multilingual
datasets: wikipedia
license: apache-2.0
---
# bert-base-en-es-pt-cased
We are sharing smaller versions of [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) that handle a custom number of languages.
Unlike [distilbert-base-multilingual-cased](https://huggingface.co/distilbert-base-multilingual-cased), our versions produce exactly the same representations as the original model, which preserves the original accuracy.
For more information please visit our paper: [Load What You Need: Smaller Versions of Multilingual BERT](https://www.aclweb.org/anthology/2020.sustainlp-1.16.pdf).
## How to use
```python
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("Geotrend/bert-base-en-es-pt-cased")
model = AutoModel.from_pretrained("Geotrend/bert-base-en-es-pt-cased")
```
To generate other smaller versions of multilingual transformers please visit [our Github repo](https://github.com/Geotrend-research/smaller-transformers).
### How to cite
```bibtex
@inproceedings{smallermbert,
title={Load What You Need: Smaller Versions of Multilingual BERT},
author={Abdaoui, Amine and Pradel, Camille and Sigel, Grégoire},
booktitle={SustaiNLP / EMNLP},
year={2020}
}
```
## Contact
Please contact amine@geotrend.fr for any question, feedback or request. |
Geotrend/bert-base-en-es-zh-cased | 2021-05-18T19:13:08.000Z | [
"pytorch",
"tf",
"jax",
"bert",
"masked-lm",
"multilingual",
"dataset:wikipedia",
"transformers",
"license:apache-2.0",
"fill-mask"
] | fill-mask | [
".gitattributes",
"README.md",
"config.json",
"flax_model.msgpack",
"pytorch_model.bin",
"tf_model.h5",
"tokenizer_config.json",
"vocab.txt"
] | Geotrend | 10 | transformers | ---
language: multilingual
datasets: wikipedia
license: apache-2.0
---
# bert-base-en-es-zh-cased
We are sharing smaller versions of [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) that handle a custom number of languages.
Unlike [distilbert-base-multilingual-cased](https://huggingface.co/distilbert-base-multilingual-cased), our versions produce exactly the same representations as the original model, which preserves the original accuracy.
For more information please visit our paper: [Load What You Need: Smaller Versions of Multilingual BERT](https://www.aclweb.org/anthology/2020.sustainlp-1.16.pdf).
## How to use
```python
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("Geotrend/bert-base-en-es-zh-cased")
model = AutoModel.from_pretrained("Geotrend/bert-base-en-es-zh-cased")
```
To generate other smaller versions of multilingual transformers please visit [our Github repo](https://github.com/Geotrend-research/smaller-transformers).
### How to cite
```bibtex
@inproceedings{smallermbert,
title={Load What You Need: Smaller Versions of Multilingual BERT},
author={Abdaoui, Amine and Pradel, Camille and Sigel, Grégoire},
booktitle={SustaiNLP / EMNLP},
year={2020}
}
```
## Contact
Please contact amine@geotrend.fr for any question, feedback or request. |
Geotrend/bert-base-en-fr-ar-cased | 2021-05-18T19:14:08.000Z | [
"pytorch",
"tf",
"jax",
"bert",
"masked-lm",
"multilingual",
"dataset:wikipedia",
"transformers",
"license:apache-2.0",
"fill-mask"
] | fill-mask | [
".gitattributes",
"README.md",
"config.json",
"flax_model.msgpack",
"pytorch_model.bin",
"tf_model.h5",
"tokenizer_config.json",
"vocab.txt"
] | Geotrend | 55 | transformers | ---
language: multilingual
datasets: wikipedia
license: apache-2.0
---
# bert-base-en-fr-ar-cased
We are sharing smaller versions of [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) that handle a custom number of languages.
Unlike [distilbert-base-multilingual-cased](https://huggingface.co/distilbert-base-multilingual-cased), our versions produce exactly the same representations as the original model, which preserves the original accuracy.
For more information please visit our paper: [Load What You Need: Smaller Versions of Multilingual BERT](https://www.aclweb.org/anthology/2020.sustainlp-1.16.pdf).
## How to use
```python
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("Geotrend/bert-base-en-fr-ar-cased")
model = AutoModel.from_pretrained("Geotrend/bert-base-en-fr-ar-cased")
```
To generate other smaller versions of multilingual transformers please visit [our Github repo](https://github.com/Geotrend-research/smaller-transformers).
### How to cite
```bibtex
@inproceedings{smallermbert,
title={Load What You Need: Smaller Versions of Multilingual BERT},
author={Abdaoui, Amine and Pradel, Camille and Sigel, Grégoire},
booktitle={SustaiNLP / EMNLP},
year={2020}
}
```
## Contact
Please contact amine@geotrend.fr for any question, feedback or request. |
Geotrend/bert-base-en-fr-cased | 2021-05-18T19:15:20.000Z | [
"pytorch",
"tf",
"jax",
"bert",
"masked-lm",
"multilingual",
"dataset:wikipedia",
"transformers",
"license:apache-2.0",
"fill-mask"
] | fill-mask | [
".gitattributes",
"README.md",
"config.json",
"flax_model.msgpack",
"pytorch_model.bin",
"tf_model.h5",
"tokenizer_config.json",
"vocab.txt"
] | Geotrend | 382 | transformers | ---
language: multilingual
datasets: wikipedia
license: apache-2.0
widget:
- text: "Google generated 46 billion [MASK] in revenue."
- text: "Paris is the capital of [MASK]."
- text: "Algiers is the largest city in [MASK]."
- text: "Paris est la [MASK] de la France."
- text: "Paris est la capitale de la [MASK]."
- text: "L'élection américaine a eu [MASK] en novembre 2020."
---
# bert-base-en-fr-cased
We are sharing smaller versions of [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) that handle a custom number of languages.
Unlike [distilbert-base-multilingual-cased](https://huggingface.co/distilbert-base-multilingual-cased), our versions produce exactly the same representations as the original model, which preserves the original accuracy.
For more information please visit our paper: [Load What You Need: Smaller Versions of Multilingual BERT](https://www.aclweb.org/anthology/2020.sustainlp-1.16.pdf).
## How to use
```python
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("Geotrend/bert-base-en-fr-cased")
model = AutoModel.from_pretrained("Geotrend/bert-base-en-fr-cased")
```
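As a quick usage example, the sketch below runs two of the widget prompts from this card (one per language) through the `fill-mask` pipeline to check that both English and French are handled; the printed predictions are illustrative only.
```python
from transformers import pipeline
# A minimal sketch reusing the bilingual widget prompts above.
fill_mask = pipeline("fill-mask", model="Geotrend/bert-base-en-fr-cased")
for prompt in ["Paris is the capital of [MASK].", "Paris est la capitale de la [MASK]."]:
best = fill_mask(prompt)[0]  # highest-scoring completion
print(prompt, "->", best["token_str"])
```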
To generate other smaller versions of multilingual transformers please visit [our Github repo](https://github.com/Geotrend-research/smaller-transformers).
### How to cite
```bibtex
@inproceedings{smallermbert,
title={Load What You Need: Smaller Versions of Multilingual BERT},
author={Abdaoui, Amine and Pradel, Camille and Sigel, Grégoire},
booktitle={SustaiNLP / EMNLP},
year={2020}
}
```
## Contact
Please contact amine@geotrend.fr for any question, feedback or request.
|
Geotrend/bert-base-en-fr-da-ja-vi-cased | 2021-05-18T19:17:37.000Z | [
"pytorch",
"tf",
"jax",
"bert",
"masked-lm",
"multilingual",
"dataset:wikipedia",
"transformers",
"license:apache-2.0",
"fill-mask"
] | fill-mask | [
".gitattributes",
"README.md",
"config.json",
"flax_model.msgpack",
"pytorch_model.bin",
"tf_model.h5",
"tokenizer_config.json",
"vocab.txt"
] | Geotrend | 9 | transformers | ---
language: multilingual
datasets: wikipedia
license: apache-2.0
---
# bert-base-en-fr-da-ja-vi-cased
We are sharing smaller versions of [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) that handle a custom number of languages.
Unlike [distilbert-base-multilingual-cased](https://huggingface.co/distilbert-base-multilingual-cased), our versions produce exactly the same representations as the original model, which preserves the original accuracy.
For more information please visit our paper: [Load What You Need: Smaller Versions of Multilingual BERT](https://www.aclweb.org/anthology/2020.sustainlp-1.16.pdf).
## How to use
```python
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("Geotrend/bert-base-en-fr-da-ja-vi-cased")
model = AutoModel.from_pretrained("Geotrend/bert-base-en-fr-da-ja-vi-cased")
```
To generate other smaller versions of multilingual transformers please visit [our Github repo](https://github.com/Geotrend-research/smaller-transformers).
### How to cite
```bibtex
@inproceedings{smallermbert,
title={Load What You Need: Smaller Versions of Multilingual BERT},
author={Abdaoui, Amine and Pradel, Camille and Sigel, Grégoire},
booktitle={SustaiNLP / EMNLP},
year={2020}
}
```
## Contact
Please contact amine@geotrend.fr for any question, feedback or request. |
Geotrend/bert-base-en-fr-de-cased | 2021-05-18T19:18:39.000Z | [
"pytorch",
"tf",
"jax",
"bert",
"masked-lm",
"multilingual",
"dataset:wikipedia",
"transformers",
"license:apache-2.0",
"fill-mask"
] | fill-mask | [
".gitattributes",
"README.md",
"config.json",
"flax_model.msgpack",
"pytorch_model.bin",
"tf_model.h5",
"tokenizer_config.json",
"vocab.txt"
] | Geotrend | 10 | transformers | ---
language: multilingual
datasets: wikipedia
license: apache-2.0
---
# bert-base-en-fr-de-cased
We are sharing smaller versions of [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) that handle a custom number of languages.
Unlike [distilbert-base-multilingual-cased](https://huggingface.co/distilbert-base-multilingual-cased), our versions produce exactly the same representations as the original model, which preserves the original accuracy.
For more information please visit our paper: [Load What You Need: Smaller Versions of Multilingual BERT](https://www.aclweb.org/anthology/2020.sustainlp-1.16.pdf).
## How to use
```python
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("Geotrend/bert-base-en-fr-de-cased")
model = AutoModel.from_pretrained("Geotrend/bert-base-en-fr-de-cased")
```
To generate other smaller versions of multilingual transformers please visit [our Github repo](https://github.com/Geotrend-research/smaller-transformers).
### How to cite
```bibtex
@inproceedings{smallermbert,
title={Load What You Need: Smaller Versions of Multilingual BERT},
author={Abdaoui, Amine and Pradel, Camille and Sigel, Grégoire},
booktitle={SustaiNLP / EMNLP},
year={2020}
}
```
## Contact
Please contact amine@geotrend.fr for any question, feedback or request. |
Geotrend/bert-base-en-fr-de-no-da-cased | 2021-05-18T19:19:41.000Z | [
"pytorch",
"tf",
"jax",
"bert",
"masked-lm",
"multilingual",
"dataset:wikipedia",
"transformers",
"license:apache-2.0",
"fill-mask"
] | fill-mask | [
".gitattributes",
"README.md",
"config.json",
"flax_model.msgpack",
"pytorch_model.bin",
"tf_model.h5",
"tokenizer_config.json",
"vocab.txt"
] | Geotrend | 12 | transformers | ---
language: multilingual
datasets: wikipedia
license: apache-2.0
---
# bert-base-en-fr-de-no-da-cased
We are sharing smaller versions of [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) that handle a custom number of languages.
Unlike [distilbert-base-multilingual-cased](https://huggingface.co/distilbert-base-multilingual-cased), our versions produce exactly the same representations as the original model, which preserves the original accuracy.
For more information please visit our paper: [Load What You Need: Smaller Versions of Multilingual BERT](https://www.aclweb.org/anthology/2020.sustainlp-1.16.pdf).
## How to use
```python
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("Geotrend/bert-base-en-fr-de-no-da-cased")
model = AutoModel.from_pretrained("Geotrend/bert-base-en-fr-de-no-da-cased")
```
To generate other smaller versions of multilingual transformers please visit [our Github repo](https://github.com/Geotrend-research/smaller-transformers).
### How to cite
```bibtex
@inproceedings{smallermbert,
title={Load What You Need: Smaller Versions of Multilingual BERT},
author={Abdaoui, Amine and Pradel, Camille and Sigel, Grégoire},
booktitle={SustaiNLP / EMNLP},
year={2020}
}
```
## Contact
Please contact amine@geotrend.fr for any question, feedback or request. |
Geotrend/bert-base-en-fr-es-cased | 2021-05-18T19:21:01.000Z | [
"pytorch",
"tf",
"jax",
"bert",
"masked-lm",
"multilingual",
"dataset:wikipedia",
"transformers",
"license:apache-2.0",
"fill-mask"
] | fill-mask | [
".gitattributes",
"README.md",
"config.json",
"flax_model.msgpack",
"pytorch_model.bin",
"tf_model.h5",
"tokenizer_config.json",
"vocab.txt"
] | Geotrend | 55 | transformers | ---
language: multilingual
datasets: wikipedia
license: apache-2.0
---
# bert-base-en-fr-es-cased
We are sharing smaller versions of [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) that handle a custom number of languages.
Unlike [distilbert-base-multilingual-cased](https://huggingface.co/distilbert-base-multilingual-cased), our versions produce exactly the same representations as the original model, which preserves the original accuracy.
For more information please visit our paper: [Load What You Need: Smaller Versions of Multilingual BERT](https://www.aclweb.org/anthology/2020.sustainlp-1.16.pdf).
## How to use
```python
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("Geotrend/bert-base-en-fr-es-cased")
model = AutoModel.from_pretrained("Geotrend/bert-base-en-fr-es-cased")
```
To generate other smaller versions of multilingual transformers please visit [our Github repo](https://github.com/Geotrend-research/smaller-transformers).
### How to cite
```bibtex
@inproceedings{smallermbert,
title={Load What You Need: Smaller Versions of Multilingual BERT},
author={Abdaoui, Amine and Pradel, Camille and Sigel, Grégoire},
booktitle={SustaiNLP / EMNLP},
year={2020}
}
```
## Contact
Please contact amine@geotrend.fr for any question, feedback or request. |
Geotrend/bert-base-en-fr-es-de-zh-cased | 2021-05-18T19:22:08.000Z | [
"pytorch",
"tf",
"jax",
"bert",
"masked-lm",
"multilingual",
"dataset:wikipedia",
"transformers",
"license:apache-2.0",
"fill-mask"
] | fill-mask | [
".gitattributes",
"README.md",
"config.json",
"flax_model.msgpack",
"pytorch_model.bin",
"tf_model.h5",
"tokenizer_config.json",
"vocab.txt"
] | Geotrend | 9 | transformers | ---
language: multilingual
datasets: wikipedia
license: apache-2.0
---
# bert-base-en-fr-es-de-zh-cased
We are sharing smaller versions of [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) that handle a custom number of languages.
Unlike [distilbert-base-multilingual-cased](https://huggingface.co/distilbert-base-multilingual-cased), our versions produce exactly the same representations as the original model, which preserves the original accuracy.
For more information please visit our paper: [Load What You Need: Smaller Versions of Multilingual BERT](https://www.aclweb.org/anthology/2020.sustainlp-1.16.pdf).
## How to use
```python
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("Geotrend/bert-base-en-fr-es-de-zh-cased")
model = AutoModel.from_pretrained("Geotrend/bert-base-en-fr-es-de-zh-cased")
```
To generate other smaller versions of multilingual transformers please visit [our Github repo](https://github.com/Geotrend-research/smaller-transformers).
### How to cite
```bibtex
@inproceedings{smallermbert,
title={Load What You Need: Smaller Versions of Multilingual BERT},
author={Abdaoui, Amine and Pradel, Camille and Sigel, Grégoire},
booktitle={SustaiNLP / EMNLP},
year={2020}
}
```
## Contact
Please contact amine@geotrend.fr for any question, feedback or request. |
Geotrend/bert-base-en-fr-es-pt-it-cased | 2021-05-18T19:23:18.000Z | [
"pytorch",
"tf",
"jax",
"bert",
"masked-lm",
"multilingual",
"dataset:wikipedia",
"transformers",
"license:apache-2.0",
"fill-mask"
] | fill-mask | [
".gitattributes",
"README.md",
"config.json",
"flax_model.msgpack",
"pytorch_model.bin",
"tf_model.h5",
"tokenizer_config.json",
"vocab.txt"
] | Geotrend | 9 | transformers | ---
language: multilingual
datasets: wikipedia
license: apache-2.0
---
# bert-base-en-fr-es-pt-it-cased
We are sharing smaller versions of [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) that handle a custom number of languages.
Unlike [distilbert-base-multilingual-cased](https://huggingface.co/distilbert-base-multilingual-cased), our versions produce exactly the same representations as the original model, which preserves the original accuracy.
For more information please visit our paper: [Load What You Need: Smaller Versions of Multilingual BERT](https://www.aclweb.org/anthology/2020.sustainlp-1.16.pdf).
## How to use
```python
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("Geotrend/bert-base-en-fr-es-pt-it-cased")
model = AutoModel.from_pretrained("Geotrend/bert-base-en-fr-es-pt-it-cased")
```
To generate other smaller versions of multilingual transformers please visit [our Github repo](https://github.com/Geotrend-research/smaller-transformers).
### How to cite
```bibtex
@inproceedings{smallermbert,
title={Load What You Need: Smaller Versions of Multilingual BERT},
author={Abdaoui, Amine and Pradel, Camille and Sigel, Grégoire},
booktitle={SustaiNLP / EMNLP},
year={2020}
}
```
## Contact
Please contact amine@geotrend.fr for any question, feedback or request. |
Geotrend/bert-base-en-fr-it-cased | 2021-05-18T19:24:24.000Z | [
"pytorch",
"tf",
"jax",
"bert",
"masked-lm",
"multilingual",
"dataset:wikipedia",
"transformers",
"license:apache-2.0",
"fill-mask"
] | fill-mask | [
".gitattributes",
"README.md",
"config.json",
"flax_model.msgpack",
"pytorch_model.bin",
"tf_model.h5",
"tokenizer_config.json",
"vocab.txt"
] | Geotrend | 12 | transformers | ---
language: multilingual
datasets: wikipedia
license: apache-2.0
---
# bert-base-en-fr-it-cased
We are sharing smaller versions of [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) that handle a custom number of languages.
Unlike [distilbert-base-multilingual-cased](https://huggingface.co/distilbert-base-multilingual-cased), our versions produce exactly the same representations as the original model, which preserves the original accuracy.
For more information please visit our paper: [Load What You Need: Smaller Versions of Multilingual BERT](https://www.aclweb.org/anthology/2020.sustainlp-1.16.pdf).
## How to use
```python
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("Geotrend/bert-base-en-fr-it-cased")
model = AutoModel.from_pretrained("Geotrend/bert-base-en-fr-it-cased")
```
To generate other smaller versions of multilingual transformers please visit [our Github repo](https://github.com/Geotrend-research/smaller-transformers).
### How to cite
```bibtex
@inproceedings{smallermbert,
title={Load What You Need: Smaller Versions of Multilingual BERT},
author={Abdaoui, Amine and Pradel, Camille and Sigel, Grégoire},
booktitle={SustaiNLP / EMNLP},
year={2020}
}
```
## Contact
Please contact amine@geotrend.fr for any question, feedback or request. |
Geotrend/bert-base-en-fr-lt-no-pl-cased | 2021-05-18T19:25:38.000Z | [
"pytorch",
"tf",
"jax",
"bert",
"masked-lm",
"multilingual",
"dataset:wikipedia",
"transformers",
"license:apache-2.0",
"fill-mask"
] | fill-mask | [
".gitattributes",
"README.md",
"config.json",
"flax_model.msgpack",
"pytorch_model.bin",
"tf_model.h5",
"tokenizer_config.json",
"vocab.txt"
] | Geotrend | 10 | transformers | ---
language: multilingual
datasets: wikipedia
license: apache-2.0
---
# bert-base-en-fr-lt-no-pl-cased
We are sharing smaller versions of [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) that handle a custom number of languages.
Unlike [distilbert-base-multilingual-cased](https://huggingface.co/distilbert-base-multilingual-cased), our versions produce exactly the same representations as the original model, which preserves the original accuracy.
For more information please visit our paper: [Load What You Need: Smaller Versions of Multilingual BERT](https://www.aclweb.org/anthology/2020.sustainlp-1.16.pdf).
## How to use
```python
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("Geotrend/bert-base-en-fr-lt-no-pl-cased")
model = AutoModel.from_pretrained("Geotrend/bert-base-en-fr-lt-no-pl-cased")
```
To generate other smaller versions of multilingual transformers please visit [our Github repo](https://github.com/Geotrend-research/smaller-transformers).
### How to cite
```bibtex
@inproceedings{smallermbert,
title={Load What You Need: Smaller Versions of Multilingual BERT},
author={Abdaoui, Amine and Pradel, Camille and Sigel, Grégoire},
booktitle={SustaiNLP / EMNLP},
year={2020}
}
```
## Contact
Please contact amine@geotrend.fr for any question, feedback or request. |
Geotrend/bert-base-en-fr-nl-ru-ar-cased | 2021-05-18T19:26:42.000Z | [
"pytorch",
"tf",
"jax",
"bert",
"masked-lm",
"multilingual",
"dataset:wikipedia",
"transformers",
"license:apache-2.0",
"fill-mask"
] | fill-mask | [
".gitattributes",
"README.md",
"config.json",
"flax_model.msgpack",
"pytorch_model.bin",
"tf_model.h5",
"tokenizer_config.json",
"vocab.txt"
] | Geotrend | 10 | transformers | ---
language: multilingual
datasets: wikipedia
license: apache-2.0
---
# bert-base-en-fr-nl-ru-ar-cased
We are sharing smaller versions of [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) that handle a custom number of languages.
Unlike [distilbert-base-multilingual-cased](https://huggingface.co/distilbert-base-multilingual-cased), our versions produce exactly the same representations as the original model, which preserves the original accuracy.
For more information please visit our paper: [Load What You Need: Smaller Versions of Multilingual BERT](https://www.aclweb.org/anthology/2020.sustainlp-1.16.pdf).
## How to use
```python
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("Geotrend/bert-base-en-fr-nl-ru-ar-cased")
model = AutoModel.from_pretrained("Geotrend/bert-base-en-fr-nl-ru-ar-cased")
```
To generate other smaller versions of multilingual transformers please visit [our Github repo](https://github.com/Geotrend-research/smaller-transformers).
### How to cite
```bibtex
@inproceedings{smallermbert,
title={Load What You Need: Smaller Versions of Multilingual BERT},
author={Abdaoui, Amine and Pradel, Camille and Sigel, Grégoire},
booktitle={SustaiNLP / EMNLP},
year={2020}
}
```
## Contact
Please contact amine@geotrend.fr for any question, feedback or request. |
Geotrend/bert-base-en-fr-uk-el-ro-cased | 2021-05-18T19:27:52.000Z | [
"pytorch",
"tf",
"jax",
"bert",
"masked-lm",
"multilingual",
"dataset:wikipedia",
"transformers",
"license:apache-2.0",
"fill-mask"
] | fill-mask | [
".gitattributes",
"README.md",
"config.json",
"flax_model.msgpack",
"pytorch_model.bin",
"tf_model.h5",
"tokenizer_config.json",
"vocab.txt"
] | Geotrend | 11 | transformers | ---
language: multilingual
datasets: wikipedia
license: apache-2.0
---
# bert-base-en-fr-uk-el-ro-cased
We are sharing smaller versions of [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) that handle a custom number of languages.
Unlike [distilbert-base-multilingual-cased](https://huggingface.co/distilbert-base-multilingual-cased), our versions produce exactly the same representations as the original model, which preserves the original accuracy.
For more information please visit our paper: [Load What You Need: Smaller Versions of Multilingual BERT](https://www.aclweb.org/anthology/2020.sustainlp-1.16.pdf).
## How to use
```python
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("Geotrend/bert-base-en-fr-uk-el-ro-cased")
model = AutoModel.from_pretrained("Geotrend/bert-base-en-fr-uk-el-ro-cased")
```
To generate other smaller versions of multilingual transformers please visit [our Github repo](https://github.com/Geotrend-research/smaller-transformers).
### How to cite
```bibtex
@inproceedings{smallermbert,
title={Load What You Need: Smaller Versions of Multilingual BERT},
author={Abdaoui, Amine and Pradel, Camille and Sigel, Grégoire},
booktitle={SustaiNLP / EMNLP},
year={2020}
}
```
## Contact
Please contact amine@geotrend.fr for any question, feedback or request. |
Geotrend/bert-base-en-fr-zh-cased | 2021-05-18T19:29:01.000Z | [
"pytorch",
"tf",
"jax",
"bert",
"masked-lm",
"multilingual",
"dataset:wikipedia",
"transformers",
"license:apache-2.0",
"fill-mask"
] | fill-mask | [
".gitattributes",
"README.md",
"config.json",
"flax_model.msgpack",
"pytorch_model.bin",
"tf_model.h5",
"tokenizer_config.json",
"vocab.txt"
] | Geotrend | 12 | transformers | ---
language: multilingual
datasets: wikipedia
license: apache-2.0
---
# bert-base-en-fr-zh-cased
We are sharing smaller versions of [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) that handle a custom number of languages.
Unlike [distilbert-base-multilingual-cased](https://huggingface.co/distilbert-base-multilingual-cased), our versions produce exactly the same representations as the original model, which preserves the original accuracy.
For more information please visit our paper: [Load What You Need: Smaller Versions of Multilingual BERT](https://www.aclweb.org/anthology/2020.sustainlp-1.16.pdf).
## How to use
```python
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("Geotrend/bert-base-en-fr-zh-cased")
model = AutoModel.from_pretrained("Geotrend/bert-base-en-fr-zh-cased")
```
To generate other smaller versions of multilingual transformers please visit [our Github repo](https://github.com/Geotrend-research/smaller-transformers).
### How to cite
```bibtex
@inproceedings{smallermbert,
title={Load What You Need: Smaller Versions of Multilingual BERT},
author={Abdaoui, Amine and Pradel, Camille and Sigel, Grégoire},
booktitle={SustaiNLP / EMNLP},
year={2020}
}
```
## Contact
Please contact amine@geotrend.fr for any question, feedback or request. |