modelId (string, 4–112 chars) | lastModified (string, 24 chars) | tags (sequence) | pipeline_tag (string, 21 classes) | files (sequence) | publishedBy (string, 2–37 chars) | downloads_last_month (int32, 0–9.44M) | library (string, 15 classes) | modelCard (string, 0–100k chars)
---|---|---|---|---|---|---|---|---
Geotrend/bert-base-en-fr-zh-ja-vi-cased | 2021-05-18T19:30:16.000Z | [
"pytorch",
"tf",
"jax",
"bert",
"masked-lm",
"multilingual",
"dataset:wikipedia",
"transformers",
"license:apache-2.0",
"fill-mask"
] | fill-mask | [
".gitattributes",
"README.md",
"config.json",
"flax_model.msgpack",
"pytorch_model.bin",
"tf_model.h5",
"tokenizer_config.json",
"vocab.txt"
] | Geotrend | 11 | transformers | ---
language: multilingual
datasets: wikipedia
license: apache-2.0
---
# bert-base-en-fr-zh-ja-vi-cased
We are sharing smaller versions of [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) that handle a custom number of languages.
Unlike [distilbert-base-multilingual-cased](https://huggingface.co/distilbert-base-multilingual-cased), our versions produce exactly the same representations as the original model, which preserves the original accuracy.
For more information, please refer to our paper: [Load What You Need: Smaller Versions of Multilingual BERT](https://www.aclweb.org/anthology/2020.sustainlp-1.16.pdf).
## How to use
```python
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("Geotrend/bert-base-en-fr-zh-ja-vi-cased")
model = AutoModel.from_pretrained("Geotrend/bert-base-en-fr-zh-ja-vi-cased")
```
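If you only need masked-word predictions, the `fill-mask` pipeline wraps the tokenizer and model shown above. A minimal sketch; the example sentence is our own and purely illustrative (English is one of the languages this reduced model covers):
```python
from transformers import pipeline

# Fill-mask pipeline built on the same checkpoint as above.
fill_mask = pipeline("fill-mask", model="Geotrend/bert-base-en-fr-zh-ja-vi-cased")

# Returns the top candidate tokens for the [MASK] position with their scores.
print(fill_mask("Paris is the capital of [MASK]."))
```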
To generate other smaller versions of multilingual transformers please visit [our Github repo](https://github.com/Geotrend-research/smaller-transformers).
### How to cite
```bibtex
@inproceedings{smallermbert,
title={Load What You Need: Smaller Versions of Multilingual BERT},
author={Abdaoui, Amine and Pradel, Camille and Sigel, Grégoire},
booktitle={SustaiNLP / EMNLP},
year={2020}
}
```
## Contact
Please contact amine@geotrend.fr for any question, feedback or request. |
Geotrend/bert-base-en-hi-cased | 2021-05-18T19:31:14.000Z | [
"pytorch",
"tf",
"jax",
"bert",
"masked-lm",
"multilingual",
"dataset:wikipedia",
"transformers",
"license:apache-2.0",
"fill-mask"
] | fill-mask | [
".gitattributes",
"README.md",
"config.json",
"flax_model.msgpack",
"pytorch_model.bin",
"tf_model.h5",
"tokenizer_config.json",
"vocab.txt"
] | Geotrend | 94 | transformers | ---
language: multilingual
datasets: wikipedia
license: apache-2.0
widget:
- text: "Google generated 46 billion [MASK] in revenue."
- text: "Paris is the capital of [MASK]."
- text: "Algiers is the largest city in [MASK]."
---
# bert-base-en-hi-cased
We are sharing smaller versions of [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) that handle a custom number of languages.
Unlike [distilbert-base-multilingual-cased](https://huggingface.co/distilbert-base-multilingual-cased), our versions produce exactly the same representations as the original model, which preserves the original accuracy.
For more information, please refer to our paper: [Load What You Need: Smaller Versions of Multilingual BERT](https://www.aclweb.org/anthology/2020.sustainlp-1.16.pdf).
## How to use
```python
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("Geotrend/bert-base-en-hi-cased")
model = AutoModel.from_pretrained("Geotrend/bert-base-en-hi-cased")
```
To generate other smaller versions of multilingual transformers please visit [our Github repo](https://github.com/Geotrend-research/smaller-transformers).
### How to cite
```bibtex
@inproceedings{smallermbert,
title={Load What You Need: Smaller Versions of Multilingual BERT},
author={Abdaoui, Amine and Pradel, Camille and Sigel, Grégoire},
booktitle={SustaiNLP / EMNLP},
year={2020}
}
```
## Contact
Please contact amine@geotrend.fr for any question, feedback or request.
|
Geotrend/bert-base-en-it-cased | 2021-05-18T19:32:13.000Z | [
"pytorch",
"tf",
"jax",
"bert",
"masked-lm",
"multilingual",
"dataset:wikipedia",
"transformers",
"license:apache-2.0",
"fill-mask"
] | fill-mask | [
".gitattributes",
"README.md",
"config.json",
"flax_model.msgpack",
"pytorch_model.bin",
"tf_model.h5",
"tokenizer_config.json",
"vocab.txt"
] | Geotrend | 10 | transformers | ---
language: multilingual
datasets: wikipedia
license: apache-2.0
---
# bert-base-en-it-cased
We are sharing smaller versions of [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) that handle a custom number of languages.
Unlike [distilbert-base-multilingual-cased](https://huggingface.co/distilbert-base-multilingual-cased), our versions produce exactly the same representations as the original model, which preserves the original accuracy.
For more information, please refer to our paper: [Load What You Need: Smaller Versions of Multilingual BERT](https://www.aclweb.org/anthology/2020.sustainlp-1.16.pdf).
## How to use
```python
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("Geotrend/bert-base-en-it-cased")
model = AutoModel.from_pretrained("Geotrend/bert-base-en-it-cased")
```
To generate other smaller versions of multilingual transformers please visit [our Github repo](https://github.com/Geotrend-research/smaller-transformers).
### How to cite
```bibtex
@inproceedings{smallermbert,
title={Load What You Need: Smaller Versions of Multilingual BERT},
author={Abdaoui, Amine and Pradel, Camille and Sigel, Grégoire},
booktitle={SustaiNLP / EMNLP},
year={2020}
}
```
## Contact
Please contact amine@geotrend.fr for any question, feedback or request. |
Geotrend/bert-base-en-ja-cased | 2021-05-18T19:34:23.000Z | [
"pytorch",
"tf",
"jax",
"bert",
"masked-lm",
"multilingual",
"dataset:wikipedia",
"transformers",
"license:apache-2.0",
"fill-mask"
] | fill-mask | [
".gitattributes",
"README.md",
"config.json",
"flax_model.msgpack",
"pytorch_model.bin",
"tf_model.h5",
"tokenizer_config.json",
"vocab.txt"
] | Geotrend | 11 | transformers | ---
language: multilingual
datasets: wikipedia
license: apache-2.0
---
# bert-base-en-ja-cased
We are sharing smaller versions of [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) that handle a custom number of languages.
Unlike [distilbert-base-multilingual-cased](https://huggingface.co/distilbert-base-multilingual-cased), our versions produce exactly the same representations as the original model, which preserves the original accuracy.
For more information, please refer to our paper: [Load What You Need: Smaller Versions of Multilingual BERT](https://www.aclweb.org/anthology/2020.sustainlp-1.16.pdf).
## How to use
```python
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("Geotrend/bert-base-en-ja-cased")
model = AutoModel.from_pretrained("Geotrend/bert-base-en-ja-cased")
```
To generate other smaller versions of multilingual transformers please visit [our Github repo](https://github.com/Geotrend-research/smaller-transformers).
### How to cite
```bibtex
@inproceedings{smallermbert,
title={Load What You Need: Smaller Versions of Multilingual BERT},
author={Abdaoui, Amine and Pradel, Camille and Sigel, Grégoire},
booktitle={SustaiNLP / EMNLP},
year={2020}
}
```
## Contact
Please contact amine@geotrend.fr for any question, feedback or request. |
Geotrend/bert-base-en-lt-cased | 2021-05-18T19:38:31.000Z | [
"pytorch",
"tf",
"jax",
"bert",
"masked-lm",
"multilingual",
"dataset:wikipedia",
"transformers",
"license:apache-2.0",
"fill-mask"
] | fill-mask | [
".gitattributes",
"README.md",
"config.json",
"flax_model.msgpack",
"pytorch_model.bin",
"tf_model.h5",
"tokenizer_config.json",
"vocab.txt"
] | Geotrend | 10 | transformers | ---
language: multilingual
datasets: wikipedia
license: apache-2.0
---
# bert-base-en-lt-cased
We are sharing smaller versions of [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) that handle a custom number of languages.
Unlike [distilbert-base-multilingual-cased](https://huggingface.co/distilbert-base-multilingual-cased), our versions produce exactly the same representations as the original model, which preserves the original accuracy.
For more information, please refer to our paper: [Load What You Need: Smaller Versions of Multilingual BERT](https://www.aclweb.org/anthology/2020.sustainlp-1.16.pdf).
## How to use
```python
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("Geotrend/bert-base-en-lt-cased")
model = AutoModel.from_pretrained("Geotrend/bert-base-en-lt-cased")
```
To generate other smaller versions of multilingual transformers please visit [our Github repo](https://github.com/Geotrend-research/smaller-transformers).
### How to cite
```bibtex
@inproceedings{smallermbert,
title={Load What You Need: Smaller Versions of Multilingual BERT},
author={Abdaoui, Amine and Pradel, Camille and Sigel, Grégoire},
booktitle={SustaiNLP / EMNLP},
year={2020}
}
```
## Contact
Please contact amine@geotrend.fr for any question, feedback or request. |
Geotrend/bert-base-en-nl-cased | 2021-05-18T19:39:31.000Z | [
"pytorch",
"tf",
"jax",
"bert",
"masked-lm",
"multilingual",
"dataset:wikipedia",
"transformers",
"license:apache-2.0",
"fill-mask"
] | fill-mask | [
".gitattributes",
"README.md",
"config.json",
"flax_model.msgpack",
"pytorch_model.bin",
"tf_model.h5",
"tokenizer_config.json",
"vocab.txt"
] | Geotrend | 12 | transformers | ---
language: multilingual
datasets: wikipedia
license: apache-2.0
---
# bert-base-en-nl-cased
We are sharing smaller versions of [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) that handle a custom number of languages.
Unlike [distilbert-base-multilingual-cased](https://huggingface.co/distilbert-base-multilingual-cased), our versions produce exactly the same representations as the original model, which preserves the original accuracy.
For more information, please refer to our paper: [Load What You Need: Smaller Versions of Multilingual BERT](https://www.aclweb.org/anthology/2020.sustainlp-1.16.pdf).
## How to use
```python
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("Geotrend/bert-base-en-nl-cased")
model = AutoModel.from_pretrained("Geotrend/bert-base-en-nl-cased")
```
To generate other smaller versions of multilingual transformers please visit [our Github repo](https://github.com/Geotrend-research/smaller-transformers).
### How to cite
```bibtex
@inproceedings{smallermbert,
title={Load What You Need: Smaller Versions of Multilingual BERT},
author={Abdaoui, Amine and Pradel, Camille and Sigel, Grégoire},
booktitle={SustaiNLP / EMNLP},
year={2020}
}
```
## Contact
Please contact amine@geotrend.fr for any question, feedback or request. |
Geotrend/bert-base-en-no-cased | 2021-05-18T19:40:40.000Z | [
"pytorch",
"tf",
"jax",
"bert",
"masked-lm",
"multilingual",
"dataset:wikipedia",
"transformers",
"license:apache-2.0",
"fill-mask"
] | fill-mask | [
".gitattributes",
"README.md",
"config.json",
"flax_model.msgpack",
"pytorch_model.bin",
"tf_model.h5",
"tokenizer_config.json",
"vocab.txt"
] | Geotrend | 13 | transformers | ---
language: multilingual
datasets: wikipedia
license: apache-2.0
---
# bert-base-en-no-cased
We are sharing smaller versions of [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) that handle a custom number of languages.
Unlike [distilbert-base-multilingual-cased](https://huggingface.co/distilbert-base-multilingual-cased), our versions produce exactly the same representations as the original model, which preserves the original accuracy.
For more information, please refer to our paper: [Load What You Need: Smaller Versions of Multilingual BERT](https://www.aclweb.org/anthology/2020.sustainlp-1.16.pdf).
## How to use
```python
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("Geotrend/bert-base-en-no-cased")
model = AutoModel.from_pretrained("Geotrend/bert-base-en-no-cased")
```
To generate other smaller versions of multilingual transformers please visit [our Github repo](https://github.com/Geotrend-research/smaller-transformers).
### How to cite
```bibtex
@inproceedings{smallermbert,
title={Load What You Need: Smaller Versions of Multilingual BERT},
author={Abdaoui, Amine and Pradel, Camille and Sigel, Grégoire},
booktitle={SustaiNLP / EMNLP},
year={2020}
}
```
## Contact
Please contact amine@geotrend.fr for any question, feedback or request. |
Geotrend/bert-base-en-pl-cased | 2021-05-18T19:41:44.000Z | [
"pytorch",
"tf",
"jax",
"bert",
"masked-lm",
"multilingual",
"dataset:wikipedia",
"transformers",
"license:apache-2.0",
"fill-mask"
] | fill-mask | [
".gitattributes",
"README.md",
"config.json",
"flax_model.msgpack",
"pytorch_model.bin",
"tf_model.h5",
"tokenizer_config.json",
"vocab.txt"
] | Geotrend | 9 | transformers | ---
language: multilingual
datasets: wikipedia
license: apache-2.0
---
# bert-base-en-pl-cased
We are sharing smaller versions of [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) that handle a custom number of languages.
Unlike [distilbert-base-multilingual-cased](https://huggingface.co/distilbert-base-multilingual-cased), our versions produce exactly the same representations as the original model, which preserves the original accuracy.
For more information, please refer to our paper: [Load What You Need: Smaller Versions of Multilingual BERT](https://www.aclweb.org/anthology/2020.sustainlp-1.16.pdf).
## How to use
```python
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("Geotrend/bert-base-en-pl-cased")
model = AutoModel.from_pretrained("Geotrend/bert-base-en-pl-cased")
```
To generate other smaller versions of multilingual transformers please visit [our Github repo](https://github.com/Geotrend-research/smaller-transformers).
### How to cite
```bibtex
@inproceedings{smallermbert,
title={Load What You Need: Smaller Versions of Multilingual BERT},
author={Abdaoui, Amine and Pradel, Camille and Sigel, Grégoire},
booktitle={SustaiNLP / EMNLP},
year={2020}
}
```
## Contact
Please contact amine@geotrend.fr for any question, feedback or request. |
Geotrend/bert-base-en-pt-cased | 2021-05-18T19:42:48.000Z | [
"pytorch",
"tf",
"jax",
"bert",
"masked-lm",
"multilingual",
"dataset:wikipedia",
"transformers",
"license:apache-2.0",
"fill-mask"
] | fill-mask | [
".gitattributes",
"README.md",
"config.json",
"flax_model.msgpack",
"pytorch_model.bin",
"tf_model.h5",
"tokenizer_config.json",
"vocab.txt"
] | Geotrend | 12 | transformers | ---
language: multilingual
datasets: wikipedia
license: apache-2.0
---
# bert-base-en-pt-cased
We are sharing smaller versions of [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) that handle a custom number of languages.
Unlike [distilbert-base-multilingual-cased](https://huggingface.co/distilbert-base-multilingual-cased), our versions produce exactly the same representations as the original model, which preserves the original accuracy.
For more information, please refer to our paper: [Load What You Need: Smaller Versions of Multilingual BERT](https://www.aclweb.org/anthology/2020.sustainlp-1.16.pdf).
## How to use
```python
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("Geotrend/bert-base-en-pt-cased")
model = AutoModel.from_pretrained("Geotrend/bert-base-en-pt-cased")
```
To generate other smaller versions of multilingual transformers please visit [our Github repo](https://github.com/Geotrend-research/smaller-transformers).
### How to cite
```bibtex
@inproceedings{smallermbert,
title={Load What You Need: Smaller Versions of Multilingual BERT},
author={Abdaoui, Amine and Pradel, Camille and Sigel, Grégoire},
booktitle={SustaiNLP / EMNLP},
year={2020}
}
```
## Contact
Please contact amine@geotrend.fr for any question, feedback or request. |
Geotrend/bert-base-en-ro-cased | 2021-05-18T19:43:58.000Z | [
"pytorch",
"tf",
"jax",
"bert",
"masked-lm",
"multilingual",
"dataset:wikipedia",
"transformers",
"license:apache-2.0",
"fill-mask"
] | fill-mask | [
".gitattributes",
"README.md",
"config.json",
"flax_model.msgpack",
"pytorch_model.bin",
"tf_model.h5",
"tokenizer_config.json",
"vocab.txt"
] | Geotrend | 13 | transformers | ---
language: multilingual
datasets: wikipedia
license: apache-2.0
---
# bert-base-en-ro-cased
We are sharing smaller versions of [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) that handle a custom number of languages.
Unlike [distilbert-base-multilingual-cased](https://huggingface.co/distilbert-base-multilingual-cased), our versions produce exactly the same representations as the original model, which preserves the original accuracy.
For more information, please refer to our paper: [Load What You Need: Smaller Versions of Multilingual BERT](https://www.aclweb.org/anthology/2020.sustainlp-1.16.pdf).
## How to use
```python
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("Geotrend/bert-base-en-ro-cased")
model = AutoModel.from_pretrained("Geotrend/bert-base-en-ro-cased")
```
To generate other smaller versions of multilingual transformers please visit [our Github repo](https://github.com/Geotrend-research/smaller-transformers).
### How to cite
```bibtex
@inproceedings{smallermbert,
title={Load What You Need: Smaller Versions of Multilingual BERT},
author={Abdaoui, Amine and Pradel, Camille and Sigel, Grégoire},
booktitle={SustaiNLP / EMNLP},
year={2020}
}
```
## Contact
Please contact amine@geotrend.fr for any question, feedback or request. |
Geotrend/bert-base-en-ru-cased | 2021-05-18T19:45:10.000Z | [
"pytorch",
"tf",
"jax",
"bert",
"masked-lm",
"multilingual",
"dataset:wikipedia",
"transformers",
"license:apache-2.0",
"fill-mask"
] | fill-mask | [
".gitattributes",
"README.md",
"config.json",
"flax_model.msgpack",
"pytorch_model.bin",
"tf_model.h5",
"tokenizer_config.json",
"vocab.txt"
] | Geotrend | 35 | transformers | ---
language: multilingual
datasets: wikipedia
license: apache-2.0
widget:
- text: "Google generated 46 billion [MASK] in revenue."
- text: "Paris is the capital of [MASK]."
- text: "Algiers is the largest city in [MASK]."
---
# bert-base-en-ru-cased
We are sharing smaller versions of [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) that handle a custom number of languages.
Unlike [distilbert-base-multilingual-cased](https://huggingface.co/distilbert-base-multilingual-cased), our versions produce exactly the same representations as the original model, which preserves the original accuracy.
For more information, please refer to our paper: [Load What You Need: Smaller Versions of Multilingual BERT](https://www.aclweb.org/anthology/2020.sustainlp-1.16.pdf).
## How to use
```python
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("Geotrend/bert-base-en-ru-cased")
model = AutoModel.from_pretrained("Geotrend/bert-base-en-ru-cased")
```
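The widget sentences above can also be reproduced manually, without the pipeline helper. A minimal sketch; only the model id comes from this card, the rest is illustrative:
```python
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

tokenizer = AutoTokenizer.from_pretrained("Geotrend/bert-base-en-ru-cased")
model = AutoModelForMaskedLM.from_pretrained("Geotrend/bert-base-en-ru-cased")

# One of the widget sentences from this card.
inputs = tokenizer("Paris is the capital of [MASK].", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# Find the [MASK] position and decode the highest-scoring vocabulary entry.
mask_pos = (inputs["input_ids"][0] == tokenizer.mask_token_id).nonzero(as_tuple=True)[0]
predicted_id = logits[0, mask_pos].argmax(dim=-1)
print(tokenizer.decode(predicted_id))
```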
To generate other smaller versions of multilingual transformers please visit [our Github repo](https://github.com/Geotrend-research/smaller-transformers).
### How to cite
```bibtex
@inproceedings{smallermbert,
title={Load What You Need: Smaller Versions of Multilingual BERT},
author={Abdaoui, Amine and Pradel, Camille and Sigel, Grégoire},
booktitle={SustaiNLP / EMNLP},
year={2020}
}
```
## Contact
Please contact amine@geotrend.fr for any question, feedback or request.
|
Geotrend/bert-base-en-sw-cased | 2021-05-18T19:46:12.000Z | [
"pytorch",
"tf",
"jax",
"bert",
"masked-lm",
"multilingual",
"dataset:wikipedia",
"transformers",
"license:apache-2.0",
"fill-mask"
] | fill-mask | [
".gitattributes",
"README.md",
"config.json",
"flax_model.msgpack",
"pytorch_model.bin",
"tf_model.h5",
"tokenizer_config.json",
"vocab.txt"
] | Geotrend | 45 | transformers | ---
language: multilingual
datasets: wikipedia
license: apache-2.0
widget:
- text: "Google generated 46 billion [MASK] in revenue."
- text: "Paris is the capital of [MASK]."
- text: "Algiers is the largest city in [MASK]."
---
# bert-base-en-sw-cased
We are sharing smaller versions of [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) that handle a custom number of languages.
Unlike [distilbert-base-multilingual-cased](https://huggingface.co/distilbert-base-multilingual-cased), our versions produce exactly the same representations as the original model, which preserves the original accuracy.
For more information, please refer to our paper: [Load What You Need: Smaller Versions of Multilingual BERT](https://www.aclweb.org/anthology/2020.sustainlp-1.16.pdf).
## How to use
```python
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("Geotrend/bert-base-en-sw-cased")
model = AutoModel.from_pretrained("Geotrend/bert-base-en-sw-cased")
```
To generate other smaller versions of multilingual transformers please visit [our Github repo](https://github.com/Geotrend-research/smaller-transformers).
### How to cite
```bibtex
@inproceedings{smallermbert,
title={Load What You Need: Smaller Versions of Multilingual BERT},
author={Abdaoui, Amine and Pradel, Camille and Sigel, Grégoire},
booktitle={SustaiNLP / EMNLP},
year={2020}
}
```
## Contact
Please contact amine@geotrend.fr for any question, feedback or request.
|
Geotrend/bert-base-en-th-cased | 2021-05-18T19:47:11.000Z | [
"pytorch",
"tf",
"jax",
"bert",
"masked-lm",
"multilingual",
"dataset:wikipedia",
"transformers",
"license:apache-2.0",
"fill-mask"
] | fill-mask | [
".gitattributes",
"README.md",
"config.json",
"flax_model.msgpack",
"pytorch_model.bin",
"tf_model.h5",
"tokenizer_config.json",
"vocab.txt"
] | Geotrend | 54 | transformers | ---
language: multilingual
datasets: wikipedia
license: apache-2.0
widget:
- text: "Google generated 46 billion [MASK] in revenue."
- text: "Paris is the capital of [MASK]."
- text: "Algiers is the largest city in [MASK]."
---
# bert-base-en-th-cased
We are sharing smaller versions of [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) that handle a custom number of languages.
Unlike [distilbert-base-multilingual-cased](https://huggingface.co/distilbert-base-multilingual-cased), our versions produce exactly the same representations as the original model, which preserves the original accuracy.
For more information, please refer to our paper: [Load What You Need: Smaller Versions of Multilingual BERT](https://www.aclweb.org/anthology/2020.sustainlp-1.16.pdf).
## How to use
```python
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("Geotrend/bert-base-en-th-cased")
model = AutoModel.from_pretrained("Geotrend/bert-base-en-th-cased")
```
To generate other smaller versions of multilingual transformers please visit [our Github repo](https://github.com/Geotrend-research/smaller-transformers).
### How to cite
```bibtex
@inproceedings{smallermbert,
title={Load What You Need: Smaller Versions of Multilingual BERT},
author={Abdaoui, Amine and Pradel, Camille and Sigel, Grégoire},
booktitle={SustaiNLP / EMNLP},
year={2020}
}
```
## Contact
Please contact amine@geotrend.fr for any question, feedback or request.
|
Geotrend/bert-base-en-tr-cased | 2021-05-18T19:48:09.000Z | [
"pytorch",
"tf",
"jax",
"bert",
"masked-lm",
"multilingual",
"dataset:wikipedia",
"transformers",
"license:apache-2.0",
"fill-mask"
] | fill-mask | [
".gitattributes",
"README.md",
"config.json",
"flax_model.msgpack",
"pytorch_model.bin",
"tf_model.h5",
"tokenizer_config.json",
"vocab.txt"
] | Geotrend | 22 | transformers | ---
language: multilingual
datasets: wikipedia
license: apache-2.0
widget:
- text: "Google generated 46 billion [MASK] in revenue."
- text: "Paris is the capital of [MASK]."
- text: "Algiers is the largest city in [MASK]."
---
# bert-base-en-tr-cased
We are sharing smaller versions of [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) that handle a custom number of languages.
Unlike [distilbert-base-multilingual-cased](https://huggingface.co/distilbert-base-multilingual-cased), our versions produce exactly the same representations as the original model, which preserves the original accuracy.
For more information, please refer to our paper: [Load What You Need: Smaller Versions of Multilingual BERT](https://www.aclweb.org/anthology/2020.sustainlp-1.16.pdf).
## How to use
```python
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("Geotrend/bert-base-en-tr-cased")
model = AutoModel.from_pretrained("Geotrend/bert-base-en-tr-cased")
```
To generate other smaller versions of multilingual transformers please visit [our Github repo](https://github.com/Geotrend-research/smaller-transformers).
### How to cite
```bibtex
@inproceedings{smallermbert,
title={Load What You Need: Smaller Versions of Multilingual BERT},
author={Abdaoui, Amine and Pradel, Camille and Sigel, Grégoire},
booktitle={SustaiNLP / EMNLP},
year={2020}
}
```
## Contact
Please contact amine@geotrend.fr for any question, feedback or request.
|
Geotrend/bert-base-en-uk-cased | 2021-05-18T19:49:13.000Z | [
"pytorch",
"tf",
"jax",
"bert",
"masked-lm",
"multilingual",
"dataset:wikipedia",
"transformers",
"license:apache-2.0",
"fill-mask"
] | fill-mask | [
".gitattributes",
"README.md",
"config.json",
"flax_model.msgpack",
"pytorch_model.bin",
"tf_model.h5",
"tokenizer_config.json",
"vocab.txt"
] | Geotrend | 10 | transformers | ---
language: multilingual
datasets: wikipedia
license: apache-2.0
---
# bert-base-en-uk-cased
We are sharing smaller versions of [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) that handle a custom number of languages.
Unlike [distilbert-base-multilingual-cased](https://huggingface.co/distilbert-base-multilingual-cased), our versions produce exactly the same representations as the original model, which preserves the original accuracy.
For more information, please refer to our paper: [Load What You Need: Smaller Versions of Multilingual BERT](https://www.aclweb.org/anthology/2020.sustainlp-1.16.pdf).
## How to use
```python
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("Geotrend/bert-base-en-uk-cased")
model = AutoModel.from_pretrained("Geotrend/bert-base-en-uk-cased")
```
To generate other smaller versions of multilingual transformers please visit [our Github repo](https://github.com/Geotrend-research/smaller-transformers).
### How to cite
```bibtex
@inproceedings{smallermbert,
title={Load What You Need: Smaller Versions of Multilingual BERT},
author={Abdaoui, Amine and Pradel, Camille and Sigel, Grégoire},
booktitle={SustaiNLP / EMNLP},
year={2020}
}
```
## Contact
Please contact amine@geotrend.fr for any question, feedback or request. |
Geotrend/bert-base-en-ur-cased | 2021-05-18T19:50:18.000Z | [
"pytorch",
"tf",
"jax",
"bert",
"masked-lm",
"multilingual",
"dataset:wikipedia",
"transformers",
"license:apache-2.0",
"fill-mask"
] | fill-mask | [
".gitattributes",
"README.md",
"config.json",
"flax_model.msgpack",
"pytorch_model.bin",
"tf_model.h5",
"tokenizer_config.json",
"vocab.txt"
] | Geotrend | 29 | transformers | ---
language: multilingual
datasets: wikipedia
license: apache-2.0
widget:
- text: "Google generated 46 billion [MASK] in revenue."
- text: "Paris is the capital of [MASK]."
- text: "Algiers is the largest city in [MASK]."
---
# bert-base-en-ur-cased
We are sharing smaller versions of [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) that handle a custom number of languages.
Unlike [distilbert-base-multilingual-cased](https://huggingface.co/distilbert-base-multilingual-cased), our versions produce exactly the same representations as the original model, which preserves the original accuracy.
For more information, please refer to our paper: [Load What You Need: Smaller Versions of Multilingual BERT](https://www.aclweb.org/anthology/2020.sustainlp-1.16.pdf).
## How to use
```python
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("Geotrend/bert-base-en-ur-cased")
model = AutoModel.from_pretrained("Geotrend/bert-base-en-ur-cased")
```
To generate other smaller versions of multilingual transformers please visit [our Github repo](https://github.com/Geotrend-research/smaller-transformers).
### How to cite
```bibtex
@inproceedings{smallermbert,
title={Load What You Need: Smaller Versions of Multilingual BERT},
author={Abdaoui, Amine and Pradel, Camille and Sigel, Grégoire},
booktitle={SustaiNLP / EMNLP},
year={2020}
}
```
## Contact
Please contact amine@geotrend.fr for any question, feedback or request.
|
Geotrend/bert-base-en-vi-cased | 2021-05-18T19:51:36.000Z | [
"pytorch",
"tf",
"jax",
"bert",
"masked-lm",
"multilingual",
"dataset:wikipedia",
"transformers",
"license:apache-2.0",
"fill-mask"
] | fill-mask | [
".gitattributes",
"README.md",
"config.json",
"flax_model.msgpack",
"pytorch_model.bin",
"tf_model.h5",
"tokenizer_config.json",
"vocab.txt"
] | Geotrend | 29 | transformers | ---
language: multilingual
datasets: wikipedia
license: apache-2.0
widget:
- text: "Google generated 46 billion [MASK] in revenue."
- text: "Paris is the capital of [MASK]."
- text: "Algiers is the largest city in [MASK]."
---
# bert-base-en-vi-cased
We are sharing smaller versions of [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) that handle a custom number of languages.
Unlike [distilbert-base-multilingual-cased](https://huggingface.co/distilbert-base-multilingual-cased), our versions produce exactly the same representations as the original model, which preserves the original accuracy.
For more information, please refer to our paper: [Load What You Need: Smaller Versions of Multilingual BERT](https://www.aclweb.org/anthology/2020.sustainlp-1.16.pdf).
## How to use
```python
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("Geotrend/bert-base-en-vi-cased")
model = AutoModel.from_pretrained("Geotrend/bert-base-en-vi-cased")
```
To generate other smaller versions of multilingual transformers please visit [our Github repo](https://github.com/Geotrend-research/smaller-transformers).
### How to cite
```bibtex
@inproceedings{smallermbert,
title={Load What You Need: Smaller Versions of Multilingual BERT},
author={Abdaoui, Amine and Pradel, Camille and Sigel, Grégoire},
booktitle={SustaiNLP / EMNLP},
year={2020}
}
```
## Contact
Please contact amine@geotrend.fr for any question, feedback or request.
|
Geotrend/bert-base-en-zh-cased | 2021-05-18T19:52:44.000Z | [
"pytorch",
"tf",
"jax",
"bert",
"masked-lm",
"multilingual",
"dataset:wikipedia",
"transformers",
"license:apache-2.0",
"fill-mask"
] | fill-mask | [
".gitattributes",
"README.md",
"config.json",
"flax_model.msgpack",
"pytorch_model.bin",
"tf_model.h5",
"tokenizer_config.json",
"vocab.txt"
] | Geotrend | 144 | transformers | ---
language: multilingual
datasets: wikipedia
license: apache-2.0
widget:
- text: "Google generated 46 billion [MASK] in revenue."
- text: "Paris is the capital of [MASK]."
- text: "Algiers is the largest city in [MASK]."
---
# bert-base-en-zh-cased
We are sharing smaller versions of [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) that handle a custom number of languages.
Unlike [distilbert-base-multilingual-cased](https://huggingface.co/distilbert-base-multilingual-cased), our versions produce exactly the same representations as the original model, which preserves the original accuracy.
For more information, please refer to our paper: [Load What You Need: Smaller Versions of Multilingual BERT](https://www.aclweb.org/anthology/2020.sustainlp-1.16.pdf).
## How to use
```python
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("Geotrend/bert-base-en-zh-cased")
model = AutoModel.from_pretrained("Geotrend/bert-base-en-zh-cased")
```
To generate other smaller versions of multilingual transformers please visit [our Github repo](https://github.com/Geotrend-research/smaller-transformers).
### How to cite
```bibtex
@inproceedings{smallermbert,
title={Load What You Need: Smaller Versions of Multilingual BERT},
author={Abdaoui, Amine and Pradel, Camille and Sigel, Grégoire},
booktitle={SustaiNLP / EMNLP},
year={2020}
}
```
## Contact
Please contact amine@geotrend.fr for any question, feedback or request.
|
Geotrend/bert-base-en-zh-hi-cased | 2021-05-18T19:53:51.000Z | [
"pytorch",
"tf",
"jax",
"bert",
"masked-lm",
"multilingual",
"dataset:wikipedia",
"transformers",
"license:apache-2.0",
"fill-mask"
] | fill-mask | [
".gitattributes",
"README.md",
"config.json",
"flax_model.msgpack",
"pytorch_model.bin",
"tf_model.h5",
"tokenizer_config.json",
"vocab.txt"
] | Geotrend | 11 | transformers | ---
language: multilingual
datasets: wikipedia
license: apache-2.0
---
# bert-base-en-zh-hi-cased
We are sharing smaller versions of [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) that handle a custom number of languages.
Unlike [distilbert-base-multilingual-cased](https://huggingface.co/distilbert-base-multilingual-cased), our versions produce exactly the same representations as the original model, which preserves the original accuracy.
For more information, please refer to our paper: [Load What You Need: Smaller Versions of Multilingual BERT](https://www.aclweb.org/anthology/2020.sustainlp-1.16.pdf).
## How to use
```python
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("Geotrend/bert-base-en-zh-hi-cased")
model = AutoModel.from_pretrained("Geotrend/bert-base-en-zh-hi-cased")
```
To generate other smaller versions of multilingual transformers please visit [our Github repo](https://github.com/Geotrend-research/smaller-transformers).
### How to cite
```bibtex
@inproceedings{smallermbert,
title={Load What You Need: Smaller Versions of Multilingual BERT},
author={Abdaoui, Amine and Pradel, Camille and Sigel, Grégoire},
booktitle={SustaiNLP / EMNLP},
year={2020}
}
```
## Contact
Please contact amine@geotrend.fr for any question, feedback or request. |
Geotrend/bert-base-es-cased | 2021-05-18T19:55:05.000Z | [
"pytorch",
"tf",
"jax",
"bert",
"masked-lm",
"es",
"dataset:wikipedia",
"transformers",
"license:apache-2.0",
"fill-mask"
] | fill-mask | [
".gitattributes",
"README.md",
"config.json",
"flax_model.msgpack",
"pytorch_model.bin",
"tf_model.h5",
"tokenizer_config.json",
"vocab.txt"
] | Geotrend | 550 | transformers | ---
language: es
datasets: wikipedia
license: apache-2.0
---
# bert-base-es-cased
We are sharing smaller versions of [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) that handle a custom number of languages.
Unlike [distilbert-base-multilingual-cased](https://huggingface.co/distilbert-base-multilingual-cased), our versions produce exactly the same representations as the original model, which preserves the original accuracy.
For more information, please refer to our paper: [Load What You Need: Smaller Versions of Multilingual BERT](https://www.aclweb.org/anthology/2020.sustainlp-1.16.pdf).
## How to use
```python
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("Geotrend/bert-base-es-cased")
model = AutoModel.from_pretrained("Geotrend/bert-base-es-cased")
```
To generate other smaller versions of multilingual transformers please visit [our Github repo](https://github.com/Geotrend-research/smaller-transformers).
### How to cite
```bibtex
@inproceedings{smallermbert,
title={Load What You Need: Smaller Versions of Multilingual BERT},
author={Abdaoui, Amine and Pradel, Camille and Sigel, Grégoire},
booktitle={SustaiNLP / EMNLP},
year={2020}
}
```
## Contact
Please contact amine@geotrend.fr for any question, feedback or request.
|
Geotrend/bert-base-fr-cased | 2021-05-18T19:56:16.000Z | [
"pytorch",
"tf",
"jax",
"bert",
"masked-lm",
"fr",
"dataset:wikipedia",
"transformers",
"license:apache-2.0",
"fill-mask"
] | fill-mask | [
".gitattributes",
"README.md",
"config.json",
"flax_model.msgpack",
"pytorch_model.bin",
"tf_model.h5",
"tokenizer_config.json",
"vocab.txt"
] | Geotrend | 170 | transformers | ---
language: fr
datasets: wikipedia
license: apache-2.0
widget:
- text: "Paris est la [MASK] de la France."
- text: "Paris est la capitale de la [MASK]."
- text: "L'élection américaine a eu [MASK] en novembre 2020."
---
# bert-base-fr-cased
We are sharing smaller versions of [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) that handle a custom number of languages.
Unlike [distilbert-base-multilingual-cased](https://huggingface.co/distilbert-base-multilingual-cased), our versions produce exactly the same representations as the original model, which preserves the original accuracy.
For more information, please refer to our paper: [Load What You Need: Smaller Versions of Multilingual BERT](https://www.aclweb.org/anthology/2020.sustainlp-1.16.pdf).
## How to use
```python
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("Geotrend/bert-base-fr-cased")
model = AutoModel.from_pretrained("Geotrend/bert-base-fr-cased")
```
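As a sanity check of the claim above that the representations match the original model, one can encode the same French sentence with both checkpoints and compare the hidden states. A rough sketch, assuming every word-piece of the sentence is kept in the reduced vocabulary (the test sentence is our own):
```python
import torch
from transformers import AutoTokenizer, AutoModel

sentence = "Paris est la capitale de la France."

small_tok = AutoTokenizer.from_pretrained("Geotrend/bert-base-fr-cased")
small_model = AutoModel.from_pretrained("Geotrend/bert-base-fr-cased")
full_tok = AutoTokenizer.from_pretrained("bert-base-multilingual-cased")
full_model = AutoModel.from_pretrained("bert-base-multilingual-cased")

with torch.no_grad():
    small_h = small_model(**small_tok(sentence, return_tensors="pt")).last_hidden_state
    full_h = full_model(**full_tok(sentence, return_tensors="pt")).last_hidden_state

# If the two tokenizations agree, the hidden states should be (near-)identical.
print(torch.allclose(small_h, full_h, atol=1e-5))
```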
To generate other smaller versions of multilingual transformers please visit [our Github repo](https://github.com/Geotrend-research/smaller-transformers).
### How to cite
```bibtex
@inproceedings{smallermbert,
title={Load What You Need: Smaller Versions of Multilingual BERT},
author={Abdaoui, Amine and Pradel, Camille and Sigel, Grégoire},
booktitle={SustaiNLP / EMNLP},
year={2020}
}
```
## Contact
Please contact amine@geotrend.fr for any question, feedback or request.
|
Geotrend/bert-base-hi-cased | 2021-05-18T19:57:34.000Z | [
"pytorch",
"tf",
"jax",
"bert",
"masked-lm",
"hi",
"dataset:wikipedia",
"transformers",
"license:apache-2.0",
"fill-mask"
] | fill-mask | [
".gitattributes",
"README.md",
"config.json",
"flax_model.msgpack",
"pytorch_model.bin",
"tf_model.h5",
"tokenizer_config.json",
"vocab.txt"
] | Geotrend | 41 | transformers | ---
language: hi
datasets: wikipedia
license: apache-2.0
---
# bert-base-hi-cased
We are sharing smaller versions of [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) that handle a custom number of languages.
Unlike [distilbert-base-multilingual-cased](https://huggingface.co/distilbert-base-multilingual-cased), our versions produce exactly the same representations as the original model, which preserves the original accuracy.
For more information, please refer to our paper: [Load What You Need: Smaller Versions of Multilingual BERT](https://www.aclweb.org/anthology/2020.sustainlp-1.16.pdf).
## How to use
```python
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("Geotrend/bert-base-hi-cased")
model = AutoModel.from_pretrained("Geotrend/bert-base-hi-cased")
```
To generate other smaller versions of multilingual transformers please visit [our Github repo](https://github.com/Geotrend-research/smaller-transformers).
### How to cite
```bibtex
@inproceedings{smallermbert,
title={Load What You Need: Smaller Versions of Multilingual BERT},
author={Abdaoui, Amine and Pradel, Camille and Sigel, Grégoire},
booktitle={SustaiNLP / EMNLP},
year={2020}
}
```
## Contact
Please contact amine@geotrend.fr for any question, feedback or request.
|
Geotrend/bert-base-it-cased | 2021-05-18T19:58:28.000Z | [
"pytorch",
"tf",
"jax",
"bert",
"masked-lm",
"it",
"dataset:wikipedia",
"transformers",
"license:apache-2.0",
"fill-mask"
] | fill-mask | [
".gitattributes",
"README.md",
"config.json",
"flax_model.msgpack",
"pytorch_model.bin",
"tf_model.h5",
"tokenizer_config.json",
"vocab.txt"
] | Geotrend | 13 | transformers | ---
language: it
datasets: wikipedia
license: apache-2.0
---
# bert-base-it-cased
We are sharing smaller versions of [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) that handle a custom number of languages.
Unlike [distilbert-base-multilingual-cased](https://huggingface.co/distilbert-base-multilingual-cased), our versions produce exactly the same representations as the original model, which preserves the original accuracy.
For more information, please refer to our paper: [Load What You Need: Smaller Versions of Multilingual BERT](https://www.aclweb.org/anthology/2020.sustainlp-1.16.pdf).
## How to use
```python
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("Geotrend/bert-base-it-cased")
model = AutoModel.from_pretrained("Geotrend/bert-base-it-cased")
```
To generate other smaller versions of multilingual transformers please visit [our Github repo](https://github.com/Geotrend-research/smaller-transformers).
### How to cite
```bibtex
@inproceedings{smallermbert,
title={Load What You Need: Smaller Versions of Multilingual BERT},
author={Abdaoui, Amine and Pradel, Camille and Sigel, Grégoire},
booktitle={SustaiNLP / EMNLP},
year={2020}
}
```
## Contact
Please contact amine@geotrend.fr for any question, feedback or request. |
Geotrend/bert-base-ja-cased | 2021-05-18T19:59:21.000Z | [
"pytorch",
"tf",
"jax",
"bert",
"masked-lm",
"ja",
"dataset:wikipedia",
"transformers",
"license:apache-2.0",
"fill-mask"
] | fill-mask | [
".gitattributes",
"README.md",
"config.json",
"flax_model.msgpack",
"pytorch_model.bin",
"tf_model.h5",
"tokenizer_config.json",
"vocab.txt"
] | Geotrend | 63 | transformers | ---
language: ja
datasets: wikipedia
license: apache-2.0
---
# bert-base-ja-cased
We are sharing smaller versions of [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) that handle a custom number of languages.
Unlike [distilbert-base-multilingual-cased](https://huggingface.co/distilbert-base-multilingual-cased), our versions produce exactly the same representations as the original model, which preserves the original accuracy.
For more information, please refer to our paper: [Load What You Need: Smaller Versions of Multilingual BERT](https://www.aclweb.org/anthology/2020.sustainlp-1.16.pdf).
## How to use
```python
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("Geotrend/bert-base-ja-cased")
model = AutoModel.from_pretrained("Geotrend/bert-base-ja-cased")
```
To generate other smaller versions of multilingual transformers please visit [our Github repo](https://github.com/Geotrend-research/smaller-transformers).
### How to cite
```bibtex
@inproceedings{smallermbert,
title={Load What You Need: Smaller Versions of Multilingual BERT},
author={Abdaoui, Amine and Pradel, Camille and Sigel, Grégoire},
booktitle={SustaiNLP / EMNLP},
year={2020}
}
```
## Contact
Please contact amine@geotrend.fr for any question, feedback or request. |
Geotrend/bert-base-lt-cased | 2021-05-18T20:01:07.000Z | [
"pytorch",
"tf",
"jax",
"bert",
"masked-lm",
"lt",
"dataset:wikipedia",
"transformers",
"license:apache-2.0",
"fill-mask"
] | fill-mask | [
".gitattributes",
"README.md",
"config.json",
"flax_model.msgpack",
"pytorch_model.bin",
"tf_model.h5",
"tokenizer_config.json",
"vocab.txt"
] | Geotrend | 18 | transformers | ---
language: lt
datasets: wikipedia
license: apache-2.0
---
# bert-base-lt-cased
We are sharing smaller versions of [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) that handle a custom number of languages.
Unlike [distilbert-base-multilingual-cased](https://huggingface.co/distilbert-base-multilingual-cased), our versions produce exactly the same representations as the original model, which preserves the original accuracy.
For more information, please refer to our paper: [Load What You Need: Smaller Versions of Multilingual BERT](https://www.aclweb.org/anthology/2020.sustainlp-1.16.pdf).
## How to use
```python
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("Geotrend/bert-base-lt-cased")
model = AutoModel.from_pretrained("Geotrend/bert-base-lt-cased")
```
To generate other smaller versions of multilingual transformers please visit [our Github repo](https://github.com/Geotrend-research/smaller-transformers).
### How to cite
```bibtex
@inproceedings{smallermbert,
title={Load What You Need: Smaller Versions of Multilingual BERT},
author={Abdaoui, Amine and Pradel, Camille and Sigel, Grégoire},
booktitle={SustaiNLP / EMNLP},
year={2020}
}
```
## Contact
Please contact amine@geotrend.fr for any question, feedback or request. |
Geotrend/bert-base-nl-cased | 2021-05-18T20:02:19.000Z | [
"pytorch",
"tf",
"jax",
"bert",
"masked-lm",
"nl",
"dataset:wikipedia",
"transformers",
"license:apache-2.0",
"fill-mask"
] | fill-mask | [
".gitattributes",
"README.md",
"config.json",
"flax_model.msgpack",
"pytorch_model.bin",
"tf_model.h5",
"tokenizer_config.json",
"vocab.txt"
] | Geotrend | 496 | transformers | ---
language: nl
datasets: wikipedia
license: apache-2.0
---
# bert-base-nl-cased
We are sharing smaller versions of [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) that handle a custom number of languages.
Unlike [distilbert-base-multilingual-cased](https://huggingface.co/distilbert-base-multilingual-cased), our versions produce exactly the same representations as the original model, which preserves the original accuracy.
For more information, please refer to our paper: [Load What You Need: Smaller Versions of Multilingual BERT](https://www.aclweb.org/anthology/2020.sustainlp-1.16.pdf).
## How to use
```python
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("Geotrend/bert-base-nl-cased")
model = AutoModel.from_pretrained("Geotrend/bert-base-nl-cased")
```
To generate other smaller versions of multilingual transformers please visit [our Github repo](https://github.com/Geotrend-research/smaller-transformers).
### How to cite
```bibtex
@inproceedings{smallermbert,
title={Load What You Need: Smaller Versions of Multilingual BERT},
author={Abdaoui, Amine and Pradel, Camille and Sigel, Grégoire},
booktitle={SustaiNLP / EMNLP},
year={2020}
}
```
## Contact
Please contact amine@geotrend.fr for any question, feedback or request. |
Geotrend/bert-base-no-cased | 2021-05-18T20:03:52.000Z | [
"pytorch",
"tf",
"jax",
"bert",
"masked-lm",
"no",
"dataset:wikipedia",
"transformers",
"license:apache-2.0",
"fill-mask"
] | fill-mask | [
".gitattributes",
"README.md",
"config.json",
"flax_model.msgpack",
"pytorch_model.bin",
"tf_model.h5",
"tokenizer_config.json",
"vocab.txt"
] | Geotrend | 10 | transformers | ---
language: no
datasets: wikipedia
license: apache-2.0
---
# bert-base-no-cased
We are sharing smaller versions of [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) that handle a custom number of languages.
Unlike [distilbert-base-multilingual-cased](https://huggingface.co/distilbert-base-multilingual-cased), our versions produce exactly the same representations as the original model, which preserves the original accuracy.
For more information, please refer to our paper: [Load What You Need: Smaller Versions of Multilingual BERT](https://www.aclweb.org/anthology/2020.sustainlp-1.16.pdf).
## How to use
```python
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("Geotrend/bert-base-no-cased")
model = AutoModel.from_pretrained("Geotrend/bert-base-no-cased")
```
To generate other smaller versions of multilingual transformers please visit [our Github repo](https://github.com/Geotrend-research/smaller-transformers).
### How to cite
```bibtex
@inproceedings{smallermbert,
title={Load What You Need: Smaller Versions of Multilingual BERT},
author={Abdaoui, Amine and Pradel, Camille and Sigel, Grégoire},
booktitle={SustaiNLP / EMNLP},
year={2020}
}
```
## Contact
Please contact amine@geotrend.fr for any question, feedback or request. |
Geotrend/bert-base-pl-cased | 2021-05-18T20:05:45.000Z | [
"pytorch",
"tf",
"jax",
"bert",
"masked-lm",
"pl",
"dataset:wikipedia",
"transformers",
"license:apache-2.0",
"fill-mask"
] | fill-mask | [
".gitattributes",
"README.md",
"config.json",
"flax_model.msgpack",
"pytorch_model.bin",
"tf_model.h5",
"tokenizer_config.json",
"vocab.txt"
] | Geotrend | 19 | transformers | ---
language: pl
datasets: wikipedia
license: apache-2.0
---
# bert-base-pl-cased
We are sharing smaller versions of [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) that handle a custom number of languages.
Unlike [distilbert-base-multilingual-cased](https://huggingface.co/distilbert-base-multilingual-cased), our versions produce exactly the same representations as the original model, which preserves the original accuracy.
For more information, please refer to our paper: [Load What You Need: Smaller Versions of Multilingual BERT](https://www.aclweb.org/anthology/2020.sustainlp-1.16.pdf).
## How to use
```python
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("Geotrend/bert-base-pl-cased")
model = AutoModel.from_pretrained("Geotrend/bert-base-pl-cased")
```
To generate other smaller versions of multilingual transformers please visit [our Github repo](https://github.com/Geotrend-research/smaller-transformers).
### How to cite
```bibtex
@inproceedings{smallermbert,
title={Load What You Need: Smaller Versions of Multilingual BERT},
author={Abdaoui, Amine and Pradel, Camille and Sigel, Grégoire},
booktitle={SustaiNLP / EMNLP},
year={2020}
}
```
## Contact
Please contact amine@geotrend.fr for any question, feedback or request. |
Geotrend/bert-base-pt-cased | 2021-05-18T20:06:41.000Z | [
"pytorch",
"tf",
"jax",
"bert",
"masked-lm",
"pt",
"dataset:wikipedia",
"transformers",
"license:apache-2.0",
"fill-mask"
] | fill-mask | [
".gitattributes",
"README.md",
"config.json",
"flax_model.msgpack",
"pytorch_model.bin",
"tf_model.h5",
"tokenizer_config.json",
"vocab.txt"
] | Geotrend | 30 | transformers | ---
language: pt
datasets: wikipedia
license: apache-2.0
---
# bert-base-pt-cased
We are sharing smaller versions of [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) that handle a custom number of languages.
Unlike [distilbert-base-multilingual-cased](https://huggingface.co/distilbert-base-multilingual-cased), our versions produce exactly the same representations as the original model, which preserves the original accuracy.
For more information, please refer to our paper: [Load What You Need: Smaller Versions of Multilingual BERT](https://www.aclweb.org/anthology/2020.sustainlp-1.16.pdf).
## How to use
```python
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("Geotrend/bert-base-pt-cased")
model = AutoModel.from_pretrained("Geotrend/bert-base-pt-cased")
```
To generate other smaller versions of multilingual transformers please visit [our Github repo](https://github.com/Geotrend-research/smaller-transformers).
### How to cite
```bibtex
@inproceedings{smallermbert,
title={Load What You Need: Smaller Versions of Multilingual BERT},
author={Abdaoui, Amine and Pradel, Camille and Sigel, Grégoire},
booktitle={SustaiNLP / EMNLP},
year={2020}
}
```
## Contact
Please contact amine@geotrend.fr for any question, feedback or request. |
Geotrend/bert-base-ro-cased | 2021-05-18T20:08:29.000Z | [
"pytorch",
"tf",
"jax",
"bert",
"masked-lm",
"ro",
"dataset:wikipedia",
"transformers",
"license:apache-2.0",
"fill-mask"
] | fill-mask | [
".gitattributes",
"README.md",
"config.json",
"flax_model.msgpack",
"pytorch_model.bin",
"tf_model.h5",
"tokenizer_config.json",
"vocab.txt"
] | Geotrend | 12 | transformers | ---
language: ro
datasets: wikipedia
license: apache-2.0
---
# bert-base-ro-cased
We are sharing smaller versions of [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) that handle a custom number of languages.
Unlike [distilbert-base-multilingual-cased](https://huggingface.co/distilbert-base-multilingual-cased), our versions produce exactly the same representations as the original model, which preserves the original accuracy.
For more information, please refer to our paper: [Load What You Need: Smaller Versions of Multilingual BERT](https://www.aclweb.org/anthology/2020.sustainlp-1.16.pdf).
## How to use
```python
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("Geotrend/bert-base-ro-cased")
model = AutoModel.from_pretrained("Geotrend/bert-base-ro-cased")
```
To generate other smaller versions of multilingual transformers please visit [our Github repo](https://github.com/Geotrend-research/smaller-transformers).
### How to cite
```bibtex
@inproceedings{smallermbert,
title={Load What You Need: Smaller Versions of Multilingual BERT},
author={Abdaoui, Amine and Pradel, Camille and Sigel, Grégoire},
booktitle={SustaiNLP / EMNLP},
year={2020}
}
```
## Contact
Please contact amine@geotrend.fr for any question, feedback or request. |
Geotrend/bert-base-ru-cased | 2021-05-18T20:09:38.000Z | [
"pytorch",
"tf",
"jax",
"bert",
"masked-lm",
"ru",
"dataset:wikipedia",
"transformers",
"license:apache-2.0",
"fill-mask"
] | fill-mask | [
".gitattributes",
"README.md",
"config.json",
"flax_model.msgpack",
"pytorch_model.bin",
"tf_model.h5",
"tokenizer_config.json",
"vocab.txt"
] | Geotrend | 447 | transformers | ---
language: ru
datasets: wikipedia
license: apache-2.0
---
# bert-base-ru-cased
We are sharing smaller versions of [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) that handle a custom number of languages.
Unlike [distilbert-base-multilingual-cased](https://huggingface.co/distilbert-base-multilingual-cased), our versions produce exactly the same representations as the original model, which preserves the original accuracy.
For more information, please refer to our paper: [Load What You Need: Smaller Versions of Multilingual BERT](https://www.aclweb.org/anthology/2020.sustainlp-1.16.pdf).
## How to use
```python
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("Geotrend/bert-base-ru-cased")
model = AutoModel.from_pretrained("Geotrend/bert-base-ru-cased")
```
To generate other smaller versions of multilingual transformers please visit [our Github repo](https://github.com/Geotrend-research/smaller-transformers).
### How to cite
```bibtex
@inproceedings{smallermbert,
title={Load What You Need: Smaller Versions of Multilingual BERT},
author={Abdaoui, Amine and Pradel, Camille and Sigel, Grégoire},
booktitle={SustaiNLP / EMNLP},
year={2020}
}
```
## Contact
Please contact amine@geotrend.fr for any question, feedback or request.
|
Geotrend/bert-base-sw-cased | 2021-05-18T20:10:30.000Z | [
"pytorch",
"tf",
"jax",
"bert",
"masked-lm",
"sw",
"dataset:wikipedia",
"transformers",
"license:apache-2.0",
"fill-mask"
] | fill-mask | [
".gitattributes",
"README.md",
"config.json",
"flax_model.msgpack",
"pytorch_model.bin",
"tf_model.h5",
"tokenizer_config.json",
"vocab.txt"
] | Geotrend | 49 | transformers | ---
language: sw
datasets: wikipedia
license: apache-2.0
---
# bert-base-sw-cased
We are sharing smaller versions of [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) that handle a custom number of languages.
Unlike [distilbert-base-multilingual-cased](https://huggingface.co/distilbert-base-multilingual-cased), our versions produce exactly the same representations as the original model, which preserves the original accuracy.
For more information, please refer to our paper: [Load What You Need: Smaller Versions of Multilingual BERT](https://www.aclweb.org/anthology/2020.sustainlp-1.16.pdf).
## How to use
```python
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("Geotrend/bert-base-sw-cased")
model = AutoModel.from_pretrained("Geotrend/bert-base-sw-cased")
```
To generate other smaller versions of multilingual transformers please visit [our Github repo](https://github.com/Geotrend-research/smaller-transformers).
### How to cite
```bibtex
@inproceedings{smallermbert,
title={Load What You Need: Smaller Versions of Multilingual BERT},
author={Abdaoui, Amine and Pradel, Camille and Sigel, Grégoire},
booktitle={SustaiNLP / EMNLP},
year={2020}
}
```
## Contact
Please contact amine@geotrend.fr for any question, feedback or request.
|
Geotrend/bert-base-th-cased | 2021-05-18T20:11:25.000Z | [
"pytorch",
"tf",
"jax",
"bert",
"masked-lm",
"th",
"dataset:wikipedia",
"transformers",
"license:apache-2.0",
"fill-mask"
] | fill-mask | [
".gitattributes",
"README.md",
"config.json",
"flax_model.msgpack",
"pytorch_model.bin",
"tf_model.h5",
"tokenizer_config.json",
"vocab.txt"
] | Geotrend | 49 | transformers | ---
language: th
datasets: wikipedia
license: apache-2.0
---
# bert-base-th-cased
We are sharing smaller versions of [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) that handle a custom number of languages.
Unlike [distilbert-base-multilingual-cased](https://huggingface.co/distilbert-base-multilingual-cased), our versions give exactly the same representations as those produced by the original model, which preserves the original accuracy.
For more information please visit our paper: [Load What You Need: Smaller Versions of Multilingual BERT](https://www.aclweb.org/anthology/2020.sustainlp-1.16.pdf).
## How to use
```python
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("Geotrend/bert-base-th-cased")
model = AutoModel.from_pretrained("Geotrend/bert-base-th-cased")
```
To generate other smaller versions of multilingual transformers please visit [our Github repo](https://github.com/Geotrend-research/smaller-transformers).
### How to cite
```bibtex
@inproceedings{smallermbert,
  title={Load What You Need: Smaller Versions of Multilingual BERT},
author={Abdaoui, Amine and Pradel, Camille and Sigel, Grégoire},
booktitle={SustaiNLP / EMNLP},
year={2020}
}
```
## Contact
Please contact amine@geotrend.fr for any question, feedback or request.
|
Geotrend/bert-base-tr-cased | 2021-05-18T20:12:30.000Z | [
"pytorch",
"tf",
"jax",
"bert",
"masked-lm",
"tr",
"dataset:wikipedia",
"transformers",
"license:apache-2.0",
"fill-mask"
] | fill-mask | [
".gitattributes",
"README.md",
"config.json",
"flax_model.msgpack",
"pytorch_model.bin",
"tf_model.h5",
"tokenizer_config.json",
"vocab.txt"
] | Geotrend | 44 | transformers | ---
language: tr
datasets: wikipedia
license: apache-2.0
---
# bert-base-tr-cased
We are sharing smaller versions of [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) that handle a custom number of languages.
Unlike [distilbert-base-multilingual-cased](https://huggingface.co/distilbert-base-multilingual-cased), our versions give exactly the same representations as those produced by the original model, which preserves the original accuracy.
For more information please visit our paper: [Load What You Need: Smaller Versions of Multilingual BERT](https://www.aclweb.org/anthology/2020.sustainlp-1.16.pdf).
## How to use
```python
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("Geotrend/bert-base-tr-cased")
model = AutoModel.from_pretrained("Geotrend/bert-base-tr-cased")
```
To generate other smaller versions of multilingual transformers please visit [our Github repo](https://github.com/Geotrend-research/smaller-transformers).
### How to cite
```bibtex
@inproceedings{smallermbert,
  title={Load What You Need: Smaller Versions of Multilingual BERT},
author={Abdaoui, Amine and Pradel, Camille and Sigel, Grégoire},
booktitle={SustaiNLP / EMNLP},
year={2020}
}
```
## Contact
Please contact amine@geotrend.fr for any question, feedback or request.
|
Geotrend/bert-base-uk-cased | 2021-05-18T20:13:29.000Z | [
"pytorch",
"tf",
"jax",
"bert",
"masked-lm",
"uk",
"dataset:wikipedia",
"transformers",
"license:apache-2.0",
"fill-mask"
] | fill-mask | [
".gitattributes",
"README.md",
"config.json",
"flax_model.msgpack",
"pytorch_model.bin",
"tf_model.h5",
"tokenizer_config.json",
"vocab.txt"
] | Geotrend | 37 | transformers | ---
language: uk
datasets: wikipedia
license: apache-2.0
---
# bert-base-uk-cased
We are sharing smaller versions of [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) that handle a custom number of languages.
Unlike [distilbert-base-multilingual-cased](https://huggingface.co/distilbert-base-multilingual-cased), our versions give exactly the same representations as those produced by the original model, which preserves the original accuracy.
For more information please visit our paper: [Load What You Need: Smaller Versions of Multilingual BERT](https://www.aclweb.org/anthology/2020.sustainlp-1.16.pdf).
## How to use
```python
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("Geotrend/bert-base-uk-cased")
model = AutoModel.from_pretrained("Geotrend/bert-base-uk-cased")
```
To generate other smaller versions of multilingual transformers please visit [our Github repo](https://github.com/Geotrend-research/smaller-transformers).
### How to cite
```bibtex
@inproceedings{smallermbert,
  title={Load What You Need: Smaller Versions of Multilingual BERT},
author={Abdaoui, Amine and Pradel, Camille and Sigel, Grégoire},
booktitle={SustaiNLP / EMNLP},
year={2020}
}
```
## Contact
Please contact amine@geotrend.fr for any question, feedback or request. |
Geotrend/bert-base-ur-cased | 2021-05-18T20:14:23.000Z | [
"pytorch",
"tf",
"jax",
"bert",
"masked-lm",
"ur",
"dataset:wikipedia",
"transformers",
"license:apache-2.0",
"fill-mask"
] | fill-mask | [
".gitattributes",
"README.md",
"config.json",
"flax_model.msgpack",
"pytorch_model.bin",
"tf_model.h5",
"tokenizer_config.json",
"vocab.txt"
] | Geotrend | 25 | transformers | ---
language: ur
datasets: wikipedia
license: apache-2.0
---
# bert-base-ur-cased
We are sharing smaller versions of [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) that handle a custom number of languages.
Unlike [distilbert-base-multilingual-cased](https://huggingface.co/distilbert-base-multilingual-cased), our versions give exactly the same representations as those produced by the original model, which preserves the original accuracy.
For more information please visit our paper: [Load What You Need: Smaller Versions of Multilingual BERT](https://www.aclweb.org/anthology/2020.sustainlp-1.16.pdf).
## How to use
```python
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("Geotrend/bert-base-ur-cased")
model = AutoModel.from_pretrained("Geotrend/bert-base-ur-cased")
```
To generate other smaller versions of multilingual transformers please visit [our Github repo](https://github.com/Geotrend-research/smaller-transformers).
### How to cite
```bibtex
@inproceedings{smallermbert,
  title={Load What You Need: Smaller Versions of Multilingual BERT},
author={Abdaoui, Amine and Pradel, Camille and Sigel, Grégoire},
booktitle={SustaiNLP / EMNLP},
year={2020}
}
```
## Contact
Please contact amine@geotrend.fr for any question, feedback or request.
|
Geotrend/bert-base-vi-cased | 2021-05-18T20:15:25.000Z | [
"pytorch",
"tf",
"jax",
"bert",
"masked-lm",
"vi",
"dataset:wikipedia",
"transformers",
"license:apache-2.0",
"fill-mask"
] | fill-mask | [
".gitattributes",
"README.md",
"config.json",
"flax_model.msgpack",
"pytorch_model.bin",
"tf_model.h5",
"tokenizer_config.json",
"vocab.txt"
] | Geotrend | 47 | transformers | ---
language: vi
datasets: wikipedia
license: apache-2.0
---
# bert-base-vi-cased
We are sharing smaller versions of [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) that handle a custom number of languages.
Unlike [distilbert-base-multilingual-cased](https://huggingface.co/distilbert-base-multilingual-cased), our versions give exactly the same representations as those produced by the original model, which preserves the original accuracy.
For more information please visit our paper: [Load What You Need: Smaller Versions of Multilingual BERT](https://www.aclweb.org/anthology/2020.sustainlp-1.16.pdf).
## How to use
```python
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("Geotrend/bert-base-vi-cased")
model = AutoModel.from_pretrained("Geotrend/bert-base-vi-cased")
```
To generate other smaller versions of multilingual transformers please visit [our Github repo](https://github.com/Geotrend-research/smaller-transformers).
### How to cite
```bibtex
@inproceedings{smallermbert,
  title={Load What You Need: Smaller Versions of Multilingual BERT},
author={Abdaoui, Amine and Pradel, Camille and Sigel, Grégoire},
booktitle={SustaiNLP / EMNLP},
year={2020}
}
```
## Contact
Please contact amine@geotrend.fr for any question, feedback or request.
|
Geotrend/bert-base-zh-cased | 2021-05-18T20:16:15.000Z | [
"pytorch",
"tf",
"jax",
"bert",
"masked-lm",
"zh",
"dataset:wikipedia",
"transformers",
"license:apache-2.0",
"fill-mask"
] | fill-mask | [
".gitattributes",
"README.md",
"config.json",
"flax_model.msgpack",
"pytorch_model.bin",
"tf_model.h5",
"tokenizer_config.json",
"vocab.txt"
] | Geotrend | 77 | transformers | ---
language: zh
datasets: wikipedia
license: apache-2.0
---
# bert-base-zh-cased
We are sharing smaller versions of [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) that handle a custom number of languages.
Unlike [distilbert-base-multilingual-cased](https://huggingface.co/distilbert-base-multilingual-cased), our versions give exactly the same representations as those produced by the original model, which preserves the original accuracy.
For more information please visit our paper: [Load What You Need: Smaller Versions of Multilingual BERT](https://www.aclweb.org/anthology/2020.sustainlp-1.16.pdf).
## How to use
```python
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("Geotrend/bert-base-zh-cased")
model = AutoModel.from_pretrained("Geotrend/bert-base-zh-cased")
```
To generate other smaller versions of multilingual transformers please visit [our Github repo](https://github.com/Geotrend-research/smaller-transformers).
### How to cite
```bibtex
@inproceedings{smallermbert,
  title={Load What You Need: Smaller Versions of Multilingual BERT},
author={Abdaoui, Amine and Pradel, Camille and Sigel, Grégoire},
booktitle={SustaiNLP / EMNLP},
year={2020}
}
```
## Contact
Please contact amine@geotrend.fr for any question, feedback or request.
|
Gev/test | 2021-03-12T19:49:41.000Z | [] | [
".gitattributes"
] | Gev | 0 | |||
Ghana-NLP/abena-base-akuapem-twi-cased | 2021-05-18T20:17:41.000Z | [
"pytorch",
"tf",
"jax",
"bert",
"masked-lm",
"transformers",
"fill-mask"
] | fill-mask | [
".gitattributes",
"config.json",
"flax_model.msgpack",
"pytorch_model.bin",
"tf_model.h5",
"training_args.bin",
"vocab.txt"
] | Ghana-NLP | 16 | transformers | |
Ghana-NLP/abena-base-asante-twi-uncased | 2021-05-18T20:19:34.000Z | [
"pytorch",
"tf",
"jax",
"bert",
"masked-lm",
"transformers",
"fill-mask"
] | fill-mask | [
".gitattributes",
"config.json",
"flax_model.msgpack",
"pytorch_model.bin",
"tf_model.h5",
"training_args.bin",
"vocab.txt"
] | Ghana-NLP | 18 | transformers | |
Ghana-NLP/distilabena-base-akuapem-twi-cased | 2020-10-22T06:04:27.000Z | [
"pytorch",
"tf",
"distilbert",
"masked-lm",
"transformers",
"fill-mask"
] | fill-mask | [
".gitattributes",
"config.json",
"pytorch_model.bin",
"tf_model.h5",
"training_args.bin",
"vocab.txt"
] | Ghana-NLP | 13 | transformers | |
Ghana-NLP/distilabena-base-asante-twi-uncased | 2020-10-22T06:19:21.000Z | [
"pytorch",
"tf",
"distilbert",
"masked-lm",
"transformers",
"fill-mask"
] | fill-mask | [
".gitattributes",
"config.json",
"pytorch_model.bin",
"tf_model.h5",
"training_args.bin",
"vocab.txt"
] | Ghana-NLP | 15 | transformers | |
Ghana-NLP/distilabena-base-v2-akuapem-twi-cased | 2020-10-22T06:08:50.000Z | [
"pytorch",
"tf",
"distilbert",
"masked-lm",
"transformers",
"fill-mask"
] | fill-mask | [
".gitattributes",
"config.json",
"pytorch_model.bin",
"tf_model.h5",
"training_args.bin",
"vocab.txt"
] | Ghana-NLP | 15 | transformers | |
Ghana-NLP/distilabena-base-v2-asante-twi-uncased | 2020-10-22T20:51:34.000Z | [
"pytorch",
"tf",
"distilbert",
"masked-lm",
"transformers",
"fill-mask"
] | fill-mask | [
".gitattributes",
"config.json",
"pytorch_model.bin",
"tf_model.h5",
"training_args.bin",
"vocab.txt"
] | Ghana-NLP | 19 | transformers | |
Ghana-NLP/robako-base-akuapem-twi-cased | 2021-05-20T11:53:32.000Z | [
"pytorch",
"tf",
"jax",
"roberta",
"masked-lm",
"transformers",
"fill-mask"
] | fill-mask | [
".gitattributes",
"config.json",
"flax_model.msgpack",
"merges.txt",
"pytorch_model.bin",
"tf_model.h5",
"train.log",
"training_args.bin",
"vocab.json"
] | Ghana-NLP | 17 | transformers | |
Ghana-NLP/robako-base-asante-twi-uncased | 2021-05-20T11:54:32.000Z | [
"pytorch",
"tf",
"jax",
"roberta",
"masked-lm",
"transformers",
"fill-mask"
] | fill-mask | [
".gitattributes",
"config.json",
"flax_model.msgpack",
"merges.txt",
"pytorch_model.bin",
"tf_model.h5",
"training_args.bin",
"vocab.json"
] | Ghana-NLP | 17 | transformers | |
Gint/my-awesome-model | 2021-05-25T16:23:49.000Z | [] | [
".gitattributes"
] | Gint | 0 | |||
Gint/your-model-name | 2021-05-30T14:15:37.000Z | [] | [
".gitattributes",
"README.md"
] | Gint | 0 | |||
GroNLP/bert-base-dutch-cased-frisian | 2021-05-18T20:20:35.000Z | [
"pytorch",
"tf",
"jax",
"bert",
"masked-lm",
"fy",
"arxiv:2105.02855",
"transformers",
"BERTje",
"fill-mask"
] | fill-mask | [
".gitattributes",
"README.md",
"config.json",
"flax_model.msgpack",
"pytorch_model.bin",
"special_tokens_map.json",
"tf_model.h5",
"tokenizer_config.json",
"vocab.txt"
] | GroNLP | 11 | transformers | ---
language: fy
tags:
- BERTje
---
Wietse de Vries • Martijn Bartelds • Malvina Nissim • Martijn Wieling
# Adapting Monolingual Models: Data can be Scarce when Language Similarity is High
This model is part of this paper + code:
- 📝 [Paper](https://arxiv.org/abs/2105.02855)
- 💻 [Code](https://github.com/wietsedv/low-resource-adapt)
## Models
The best fine-tuned models for Gronings and West Frisian are available on the HuggingFace model hub:
### Lexical layers
These models are identical to [BERTje](https://github.com/wietsedv/bertje), but with different lexical layers (`bert.embeddings.word_embeddings`).
- 🤗 [`GroNLP/bert-base-dutch-cased`](https://huggingface.co/GroNLP/bert-base-dutch-cased) (Dutch; source language)
- 🤗 [`GroNLP/bert-base-dutch-cased-gronings`](https://huggingface.co/GroNLP/bert-base-dutch-cased-gronings) (Gronings)
- 🤗 [`GroNLP/bert-base-dutch-cased-frisian`](https://huggingface.co/GroNLP/bert-base-dutch-cased-frisian) (West Frisian)
### POS tagging
These models share the same fine-tuned Transformer layers + classification head, but with the retrained lexical layers from the models above.
- 🤗 [`GroNLP/bert-base-dutch-cased-upos-alpino`](https://huggingface.co/GroNLP/bert-base-dutch-cased-upos-alpino) (Dutch)
- 🤗 [`GroNLP/bert-base-dutch-cased-upos-alpino-gronings`](https://huggingface.co/GroNLP/bert-base-dutch-cased-upos-alpino-gronings) (Gronings)
- 🤗 [`GroNLP/bert-base-dutch-cased-upos-alpino-frisian`](https://huggingface.co/GroNLP/bert-base-dutch-cased-upos-alpino-frisian) (West Frisian)
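Since this checkpoint keeps BERTje's architecture and only swaps the lexical layer, it should load with the standard masked-LM classes. A minimal loading sketch (the same pattern applies to the Gronings checkpoint):

```python
from transformers import AutoModelForMaskedLM, AutoTokenizer

# Minimal sketch: load the West Frisian lexical-layer checkpoint.
tokenizer = AutoTokenizer.from_pretrained("GroNLP/bert-base-dutch-cased-frisian")
model = AutoModelForMaskedLM.from_pretrained("GroNLP/bert-base-dutch-cased-frisian")
```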
|
GroNLP/bert-base-dutch-cased-gronings | 2021-05-18T20:21:26.000Z | [
"pytorch",
"tf",
"jax",
"bert",
"masked-lm",
"gos",
"arxiv:2105.02855",
"transformers",
"BERTje",
"fill-mask"
] | fill-mask | [
".gitattributes",
"README.md",
"config.json",
"flax_model.msgpack",
"pytorch_model.bin",
"special_tokens_map.json",
"tf_model.h5",
"tokenizer_config.json",
"vocab.txt"
] | GroNLP | 12 | transformers | ---
language: gos
tags:
- BERTje
---
Wietse de Vries • Martijn Bartelds • Malvina Nissim • Martijn Wieling
# Adapting Monolingual Models: Data can be Scarce when Language Similarity is High
This model is part of this paper + code:
- 📝 [Paper](https://arxiv.org/abs/2105.02855)
- 💻 [Code](https://github.com/wietsedv/low-resource-adapt)
## Models
The best fine-tuned models for Gronings and West Frisian are available on the HuggingFace model hub:
### Lexical layers
These models are identical to [BERTje](https://github.com/wietsedv/bertje), but with different lexical layers (`bert.embeddings.word_embeddings`).
- 🤗 [`GroNLP/bert-base-dutch-cased`](https://huggingface.co/GroNLP/bert-base-dutch-cased) (Dutch; source language)
- 🤗 [`GroNLP/bert-base-dutch-cased-gronings`](https://huggingface.co/GroNLP/bert-base-dutch-cased-gronings) (Gronings)
- 🤗 [`GroNLP/bert-base-dutch-cased-frisian`](https://huggingface.co/GroNLP/bert-base-dutch-cased-frisian) (West Frisian)
### POS tagging
These models share the same fine-tuned Transformer layers + classification head, but with the retrained lexical layers from the models above.
- 🤗 [`GroNLP/bert-base-dutch-cased-upos-alpino`](https://huggingface.co/GroNLP/bert-base-dutch-cased-upos-alpino) (Dutch)
- 🤗 [`GroNLP/bert-base-dutch-cased-upos-alpino-gronings`](https://huggingface.co/GroNLP/bert-base-dutch-cased-upos-alpino-gronings) (Gronings)
- 🤗 [`GroNLP/bert-base-dutch-cased-upos-alpino-frisian`](https://huggingface.co/GroNLP/bert-base-dutch-cased-upos-alpino-frisian) (West Frisian)
|
GroNLP/bert-base-dutch-cased-upos-alpino-frisian | 2021-05-18T20:22:21.000Z | [
"pytorch",
"tf",
"jax",
"bert",
"token-classification",
"fy",
"arxiv:2105.02855",
"transformers",
"BERTje",
"pos"
] | token-classification | [
".gitattributes",
"README.md",
"config.json",
"flax_model.msgpack",
"pytorch_model.bin",
"special_tokens_map.json",
"tf_model.h5",
"tokenizer_config.json",
"vocab.txt"
] | GroNLP | 8 | transformers | ---
language: fy
tags:
- BERTje
- pos
---
Wietse de Vries • Martijn Bartelds • Malvina Nissim • Martijn Wieling
# Adapting Monolingual Models: Data can be Scarce when Language Similarity is High
This model is part of this paper + code:
- 📝 [Paper](https://arxiv.org/abs/2105.02855)
- 💻 [Code](https://github.com/wietsedv/low-resource-adapt)
## Models
The best fine-tuned models for Gronings and West Frisian are available on the HuggingFace model hub:
### Lexical layers
These models are identical to [BERTje](https://github.com/wietsedv/bertje), but with different lexical layers (`bert.embeddings.word_embeddings`).
- 🤗 [`GroNLP/bert-base-dutch-cased`](https://huggingface.co/GroNLP/bert-base-dutch-cased) (Dutch; source language)
- 🤗 [`GroNLP/bert-base-dutch-cased-gronings`](https://huggingface.co/GroNLP/bert-base-dutch-cased-gronings) (Gronings)
- 🤗 [`GroNLP/bert-base-dutch-cased-frisian`](https://huggingface.co/GroNLP/bert-base-dutch-cased-frisian) (West Frisian)
### POS tagging
These models share the same fine-tuned Transformer layers + classification head, but with the retrained lexical layers from the models above.
- 🤗 [`GroNLP/bert-base-dutch-cased-upos-alpino`](https://huggingface.co/GroNLP/bert-base-dutch-cased-upos-alpino) (Dutch)
- 🤗 [`GroNLP/bert-base-dutch-cased-upos-alpino-gronings`](https://huggingface.co/GroNLP/bert-base-dutch-cased-upos-alpino-gronings) (Gronings)
- 🤗 [`GroNLP/bert-base-dutch-cased-upos-alpino-frisian`](https://huggingface.co/GroNLP/bert-base-dutch-cased-upos-alpino-frisian) (West Frisian)
|
GroNLP/bert-base-dutch-cased-upos-alpino-gronings | 2021-05-18T20:23:32.000Z | [
"pytorch",
"tf",
"jax",
"bert",
"token-classification",
"gos",
"arxiv:2105.02855",
"transformers",
"BERTje",
"pos"
] | token-classification | [
".gitattributes",
"README.md",
"config.json",
"flax_model.msgpack",
"pytorch_model.bin",
"special_tokens_map.json",
"tf_model.h5",
"tokenizer_config.json",
"vocab.txt"
] | GroNLP | 16 | transformers | ---
language: gos
tags:
- BERTje
- pos
---
Wietse de Vries • Martijn Bartelds • Malvina Nissim • Martijn Wieling
# Adapting Monolingual Models: Data can be Scarce when Language Similarity is High
This model is part of this paper + code:
- 📝 [Paper](https://arxiv.org/abs/2105.02855)
- 💻 [Code](https://github.com/wietsedv/low-resource-adapt)
## Models
The best fine-tuned models for Gronings and West Frisian are available on the HuggingFace model hub:
### Lexical layers
These models are identical to [BERTje](https://github.com/wietsedv/bertje), but with different lexical layers (`bert.embeddings.word_embeddings`).
- 🤗 [`GroNLP/bert-base-dutch-cased`](https://huggingface.co/GroNLP/bert-base-dutch-cased) (Dutch; source language)
- 🤗 [`GroNLP/bert-base-dutch-cased-gronings`](https://huggingface.co/GroNLP/bert-base-dutch-cased-gronings) (Gronings)
- 🤗 [`GroNLP/bert-base-dutch-cased-frisian`](https://huggingface.co/GroNLP/bert-base-dutch-cased-frisian) (West Frisian)
### POS tagging
These models share the same fine-tuned Transformer layers + classification head, but with the retrained lexical layers from the models above.
- 🤗 [`GroNLP/bert-base-dutch-cased-upos-alpino`](https://huggingface.co/GroNLP/bert-base-dutch-cased-upos-alpino) (Dutch)
- 🤗 [`GroNLP/bert-base-dutch-cased-upos-alpino-gronings`](https://huggingface.co/GroNLP/bert-base-dutch-cased-upos-alpino-gronings) (Gronings)
- 🤗 [`GroNLP/bert-base-dutch-cased-upos-alpino-frisian`](https://huggingface.co/GroNLP/bert-base-dutch-cased-upos-alpino-frisian) (West Frisian)
|
GroNLP/bert-base-dutch-cased-upos-alpino | 2021-05-18T20:24:46.000Z | [
"pytorch",
"tf",
"jax",
"bert",
"token-classification",
"nl",
"arxiv:2105.02855",
"transformers",
"BERTje",
"pos"
] | token-classification | [
".gitattributes",
"README.md",
"config.json",
"flax_model.msgpack",
"pytorch_model.bin",
"special_tokens_map.json",
"tf_model.h5",
"tokenizer_config.json",
"vocab.txt"
] | GroNLP | 20 | transformers | ---
language: nl
tags:
- BERTje
- pos
---
Wietse de Vries • Martijn Bartelds • Malvina Nissim • Martijn Wieling
# Adapting Monolingual Models: Data can be Scarce when Language Similarity is High
This model is part of this paper + code:
- 📝 [Paper](https://arxiv.org/abs/2105.02855)
- 💻 [Code](https://github.com/wietsedv/low-resource-adapt)
## Models
The best fine-tuned models for Gronings and West Frisian are available on the HuggingFace model hub:
### Lexical layers
These models are identical to [BERTje](https://github.com/wietsedv/bertje), but with different lexical layers (`bert.embeddings.word_embeddings`).
- 🤗 [`GroNLP/bert-base-dutch-cased`](https://huggingface.co/GroNLP/bert-base-dutch-cased) (Dutch; source language)
- 🤗 [`GroNLP/bert-base-dutch-cased-gronings`](https://huggingface.co/GroNLP/bert-base-dutch-cased-gronings) (Gronings)
- 🤗 [`GroNLP/bert-base-dutch-cased-frisian`](https://huggingface.co/GroNLP/bert-base-dutch-cased-frisian) (West Frisian)
### POS tagging
These models share the same fine-tuned Transformer layers + classification head, but with the retrained lexical layers from the models above.
- 🤗 [`GroNLP/bert-base-dutch-cased-upos-alpino`](https://huggingface.co/GroNLP/bert-base-dutch-cased-upos-alpino) (Dutch)
- 🤗 [`GroNLP/bert-base-dutch-cased-upos-alpino-gronings`](https://huggingface.co/GroNLP/bert-base-dutch-cased-upos-alpino-gronings) (Gronings)
- 🤗 [`GroNLP/bert-base-dutch-cased-upos-alpino-frisian`](https://huggingface.co/GroNLP/bert-base-dutch-cased-upos-alpino-frisian) (West Frisian)
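Since this checkpoint is published as a token-classification model, it can be tried out with a `token-classification` pipeline. A minimal sketch; the Dutch example sentence is only an illustration:

```python
from transformers import pipeline

# Minimal sketch: part-of-speech tagging with the Dutch UPOS model.
tagger = pipeline(
    "token-classification",
    model="GroNLP/bert-base-dutch-cased-upos-alpino",
)

# Illustrative sentence: "The weather is very nice today."
for token in tagger("Het weer is vandaag erg mooi."):
    print(token["word"], token["entity"])
```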
|
GroNLP/bert-base-dutch-cased | 2021-05-18T20:25:40.000Z | [
"pytorch",
"tf",
"jax",
"bert",
"masked-lm",
"nl",
"arxiv:1912.09582",
"transformers",
"BERTje",
"fill-mask"
] | fill-mask | [
".gitattributes",
"README.md",
"config.json",
"flax_model.msgpack",
"pytorch_model.bin",
"special_tokens_map.json",
"tf_model.h5",
"tokenizer_config.json",
"vocab.txt"
] | GroNLP | 28,874 | transformers | ---
language: nl
thumbnail: "https://raw.githubusercontent.com/wietsedv/bertje/master/bertje.png"
tags:
- BERTje
---
# BERTje: A Dutch BERT model
[Wietse de Vries](https://www.semanticscholar.org/author/Wietse-de-Vries/144611157) •
[Andreas van Cranenburgh](https://www.semanticscholar.org/author/Andreas-van-Cranenburgh/2791585) •
[Arianna Bisazza](https://www.semanticscholar.org/author/Arianna-Bisazza/3242253) •
[Tommaso Caselli](https://www.semanticscholar.org/author/Tommaso-Caselli/1864635) •
[Gertjan van Noord](https://www.semanticscholar.org/author/Gertjan-van-Noord/143715131) •
[Malvina Nissim](https://www.semanticscholar.org/author/M.-Nissim/2742475)
## Model description
BERTje is a Dutch pre-trained BERT model developed at the University of Groningen.
<img src="https://raw.githubusercontent.com/wietsedv/bertje/master/bertje.png" height="250">
For details, check out our paper on [arXiv](https://arxiv.org/abs/1912.09582), the code on [Github](https://github.com/wietsedv/bertje) and related work on [Semantic Scholar](https://www.semanticscholar.org/paper/BERTje%3A-A-Dutch-BERT-Model-Vries-Cranenburgh/a4d5e425cac0bf84c86c0c9f720b6339d6288ffa).
The paper and Github page mention fine-tuned models that are available [here](https://huggingface.co/wietsedv).
## How to use
```python
from transformers import AutoTokenizer, AutoModel, TFAutoModel
tokenizer = AutoTokenizer.from_pretrained("GroNLP/bert-base-dutch-cased")
model = AutoModel.from_pretrained("GroNLP/bert-base-dutch-cased") # PyTorch
model = TFAutoModel.from_pretrained("GroNLP/bert-base-dutch-cased") # Tensorflow
```
## Benchmarks
The arXiv paper lists benchmarks. Here are a couple of comparisons between BERTje, multilingual BERT, BERT-NL and RobBERT that were done after writing the paper. Unlike some other comparisons, the fine-tuning procedures for these benchmarks are identical for each pre-trained model. You may be able to achieve higher scores for individual models by optimizing fine-tuning procedures.
More experimental results will be added to this page when they are finished. Technical details about how we fine-tuned these models will be published later, as well as downloadable fine-tuned checkpoints.
All of the tested models are *base* sized (12 layers) with cased tokenization.
Headers in the tables below link to original data sources. Scores link to the model pages that correspond to that specific fine-tuned model. These tables will be updated when more simple fine-tuned models are made available.
### Named Entity Recognition
| Model | [CoNLL-2002](https://www.clips.uantwerpen.be/conll2002/ner/) | [SoNaR-1](https://ivdnt.org/downloads/taalmaterialen/tstc-sonar-corpus) | spaCy UD LassySmall |
| ---------------------------------------------------------------------------- | --------------------------------------------------------------------------------------------- | ----------------------------------------------------------------------------------------- | ----------------------------------------------------------------------------------------------- |
| **BERTje** | [**90.24**](https://huggingface.co/wietsedv/bert-base-dutch-cased-finetuned-conll2002-ner) | [**84.93**](https://huggingface.co/wietsedv/bert-base-dutch-cased-finetuned-sonar-ner) | [86.10](https://huggingface.co/wietsedv/bert-base-dutch-cased-finetuned-udlassy-ner) |
| [mBERT](https://github.com/google-research/bert/blob/master/multilingual.md) | [88.61](https://huggingface.co/wietsedv/bert-base-multilingual-cased-finetuned-conll2002-ner) | [84.19](https://huggingface.co/wietsedv/bert-base-multilingual-cased-finetuned-sonar-ner) | [**86.77**](https://huggingface.co/wietsedv/bert-base-multilingual-cased-finetuned-udlassy-ner) |
| [BERT-NL](http://textdata.nl) | 85.05 | 80.45 | 81.62 |
| [RobBERT](https://github.com/iPieter/RobBERT) | 84.72 | 81.98 | 79.84 |
### Part-of-speech tagging
| Model | [UDv2.5 LassySmall](https://universaldependencies.org/treebanks/nl_lassysmall/index.html) |
| ---------------------------------------------------------------------------- | ----------------------------------------------------------------------------------------- |
| **BERTje** | **96.48** |
| [mBERT](https://github.com/google-research/bert/blob/master/multilingual.md) | 96.20 |
| [BERT-NL](http://textdata.nl) | 96.10 |
| [RobBERT](https://github.com/iPieter/RobBERT) | 95.91 |
### BibTeX entry and citation info
```bibtex
@misc{devries2019bertje,
title = {{BERTje}: {A} {Dutch} {BERT} {Model}},
shorttitle = {{BERTje}},
author = {de Vries, Wietse and van Cranenburgh, Andreas and Bisazza, Arianna and Caselli, Tommaso and Noord, Gertjan van and Nissim, Malvina},
year = {2019},
month = dec,
howpublished = {arXiv:1912.09582},
url = {http://arxiv.org/abs/1912.09582},
}
```
|
GroNLP/gpt2-medium-dutch-embeddings | 2021-05-21T09:49:13.000Z | [
"pytorch",
"tf",
"jax",
"gpt2",
"lm-head",
"causal-lm",
"nl",
"arxiv:2012.05628",
"transformers",
"adaption",
"recycled",
"gpt2-medium",
"text-generation",
"pipeline_tag:text-generation"
] | text-generation | [
".gitattributes",
"README.md",
"config.json",
"flax_model.msgpack",
"merges.txt",
"pytorch_model.bin",
"special_tokens_map.json",
"tf_model.h5",
"tokenizer.json",
"tokenizer_config.json",
"vocab.json"
] | GroNLP | 48 | transformers | ---
language: nl
tags:
- adaption
- recycled
- gpt2-medium
pipeline_tag: text-generation
---
# GPT-2 recycled for Dutch (medium, adapted lexical embeddings)
[Wietse de Vries](https://www.semanticscholar.org/author/Wietse-de-Vries/144611157) •
[Malvina Nissim](https://www.semanticscholar.org/author/M.-Nissim/2742475)
## Model description
This model is based on the medium OpenAI GPT-2 ([`gpt2-medium`](https://huggingface.co/gpt2-medium)) model.
The Transformer layer weights in this model are identical to those of the original English model, but the lexical layer has been retrained for a Dutch vocabulary.
For details, check out our paper on [arXiv](https://arxiv.org/abs/2012.05628) and the code on [Github](https://github.com/wietsedv/gpt2-recycle).
## Related models
### Dutch
- [`gpt2-small-dutch-embeddings`](https://huggingface.co/GroNLP/gpt2-small-dutch-embeddings): Small model size with only retrained lexical embeddings.
- [`gpt2-small-dutch`](https://huggingface.co/GroNLP/gpt2-small-dutch): Small model size with retrained lexical embeddings and additional fine-tuning of the full model. (**Recommended**)
- [`gpt2-medium-dutch-embeddings`](https://huggingface.co/GroNLP/gpt2-medium-dutch-embeddings): Medium model size with only retrained lexical embeddings.
### Italian
- [`gpt2-small-italian-embeddings`](https://huggingface.co/GroNLP/gpt2-small-italian-embeddings): Small model size with only retrained lexical embeddings.
- [`gpt2-small-italian`](https://huggingface.co/GroNLP/gpt2-small-italian): Small model size with retrained lexical embeddings and additional fine-tuning of the full model. (**Recommended**)
- [`gpt2-medium-italian-embeddings`](https://huggingface.co/GroNLP/gpt2-medium-italian-embeddings): Medium model size with only retrained lexical embeddings.
## How to use
```python
from transformers import pipeline
pipe = pipeline("text-generation", model="GroNLP/gpt2-medium-dutch-embeddings")
```
```python
from transformers import AutoTokenizer, AutoModel, TFAutoModel
tokenizer = AutoTokenizer.from_pretrained("GroNLP/gpt2-medium-dutch-embeddings")
model = AutoModel.from_pretrained("GroNLP/gpt2-medium-dutch-embeddings") # PyTorch
model = TFAutoModel.from_pretrained("GroNLP/gpt2-medium-dutch-embeddings") # Tensorflow
```
## BibTeX entry
```bibtex
@misc{devries2020good,
title={As good as new. How to successfully recycle English GPT-2 to make models for other languages},
author={Wietse de Vries and Malvina Nissim},
year={2020},
eprint={2012.05628},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
GroNLP/gpt2-medium-italian-embeddings | 2021-05-21T09:52:26.000Z | [
"pytorch",
"tf",
"jax",
"gpt2",
"lm-head",
"causal-lm",
"it",
"arxiv:2012.05628",
"transformers",
"adaption",
"recycled",
"gpt2-medium",
"text-generation",
"pipeline_tag:text-generation"
] | text-generation | [
".gitattributes",
"README.md",
"config.json",
"flax_model.msgpack",
"merges.txt",
"pytorch_model.bin",
"special_tokens_map.json",
"tf_model.h5",
"tokenizer_config.json",
"vocab.json"
] | GroNLP | 193 | transformers | ---
language: it
tags:
- adaption
- recycled
- gpt2-medium
pipeline_tag: text-generation
---
# GPT-2 recycled for Italian (medium, adapted lexical embeddings)
[Wietse de Vries](https://www.semanticscholar.org/author/Wietse-de-Vries/144611157) •
[Malvina Nissim](https://www.semanticscholar.org/author/M.-Nissim/2742475)
## Model description
This model is based on the medium OpenAI GPT-2 ([`gpt2-medium`](https://huggingface.co/gpt2-medium)) model.
The Transformer layer weights in this model are identical to those of the original English model, but the lexical layer has been retrained for an Italian vocabulary.
For details, check out our paper on [arXiv](https://arxiv.org/abs/2012.05628) and the code on [Github](https://github.com/wietsedv/gpt2-recycle).
## Related models
### Dutch
- [`gpt2-small-dutch-embeddings`](https://huggingface.co/GroNLP/gpt2-small-dutch-embeddings): Small model size with only retrained lexical embeddings.
- [`gpt2-small-dutch`](https://huggingface.co/GroNLP/gpt2-small-dutch): Small model size with retrained lexical embeddings and additional fine-tuning of the full model. (**Recommended**)
- [`gpt2-medium-dutch-embeddings`](https://huggingface.co/GroNLP/gpt2-medium-dutch-embeddings): Medium model size with only retrained lexical embeddings.
### Italian
- [`gpt2-small-italian-embeddings`](https://huggingface.co/GroNLP/gpt2-small-italian-embeddings): Small model size with only retrained lexical embeddings.
- [`gpt2-small-italian`](https://huggingface.co/GroNLP/gpt2-small-italian): Small model size with retrained lexical embeddings and additional fine-tuning of the full model. (**Recommended**)
- [`gpt2-medium-italian-embeddings`](https://huggingface.co/GroNLP/gpt2-medium-italian-embeddings): Medium model size with only retrained lexical embeddings.
## How to use
```python
from transformers import pipeline
pipe = pipeline("text-generation", model="GroNLP/gpt2-medium-italian-embeddings")
```
```python
from transformers import AutoTokenizer, AutoModel, TFAutoModel
tokenizer = AutoTokenizer.from_pretrained("GroNLP/gpt2-medium-italian-embeddings")
model = AutoModel.from_pretrained("GroNLP/gpt2-medium-italian-embeddings") # PyTorch
model = TFAutoModel.from_pretrained("GroNLP/gpt2-medium-italian-embeddings") # Tensorflow
```
## BibTeX entry
```bibtex
@misc{devries2020good,
title={As good as new. How to successfully recycle English GPT-2 to make models for other languages},
author={Wietse de Vries and Malvina Nissim},
year={2020},
eprint={2012.05628},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
GroNLP/gpt2-small-dutch-embeddings | 2021-05-21T09:54:45.000Z | [
"pytorch",
"tf",
"jax",
"gpt2",
"lm-head",
"causal-lm",
"nl",
"arxiv:2012.05628",
"transformers",
"adaption",
"recycled",
"gpt2-small",
"text-generation",
"pipeline_tag:text-generation"
] | text-generation | [
".gitattributes",
"README.md",
"config.json",
"flax_model.msgpack",
"merges.txt",
"pytorch_model.bin",
"special_tokens_map.json",
"tf_model.h5",
"tokenizer_config.json",
"vocab.json"
] | GroNLP | 41 | transformers | ---
language: nl
tags:
- adaption
- recycled
- gpt2-small
pipeline_tag: text-generation
---
# GPT-2 recycled for Dutch (small, adapted lexical embeddings)
[Wietse de Vries](https://www.semanticscholar.org/author/Wietse-de-Vries/144611157) •
[Malvina Nissim](https://www.semanticscholar.org/author/M.-Nissim/2742475)
## Model description
This model is based on the small OpenAI GPT-2 ([`gpt2`](https://huggingface.co/gpt2)) model.
The Transformer layer weights in this model are identical to those of the original English model, but the lexical layer has been retrained for a Dutch vocabulary.
For details, check out our paper on [arXiv](https://arxiv.org/abs/2012.05628) and the code on [Github](https://github.com/wietsedv/gpt2-recycle).
## Related models
### Dutch
- [`gpt2-small-dutch-embeddings`](https://huggingface.co/GroNLP/gpt2-small-dutch-embeddings): Small model size with only retrained lexical embeddings.
- [`gpt2-small-dutch`](https://huggingface.co/GroNLP/gpt2-small-dutch): Small model size with retrained lexical embeddings and additional fine-tuning of the full model. (**Recommended**)
- [`gpt2-medium-dutch-embeddings`](https://huggingface.co/GroNLP/gpt2-medium-dutch-embeddings): Medium model size with only retrained lexical embeddings.
### Italian
- [`gpt2-small-italian-embeddings`](https://huggingface.co/GroNLP/gpt2-small-italian-embeddings): Small model size with only retrained lexical embeddings.
- [`gpt2-small-italian`](https://huggingface.co/GroNLP/gpt2-small-italian): Small model size with retrained lexical embeddings and additional fine-tuning of the full model. (**Recommended**)
- [`gpt2-medium-italian-embeddings`](https://huggingface.co/GroNLP/gpt2-medium-italian-embeddings): Medium model size with only retrained lexical embeddings.
## How to use
```python
from transformers import pipeline
pipe = pipeline("text-generation", model="GroNLP/gpt2-small-dutch-embeddings")
```
```python
from transformers import AutoTokenizer, AutoModel, TFAutoModel
tokenizer = AutoTokenizer.from_pretrained("GroNLP/gpt2-small-dutch-embeddings")
model = AutoModel.from_pretrained("GroNLP/gpt2-small-dutch-embeddings") # PyTorch
model = TFAutoModel.from_pretrained("GroNLP/gpt2-small-dutch-embeddings") # Tensorflow
```
## BibTeX entry
```bibtex
@misc{devries2020good,
title={As good as new. How to successfully recycle English GPT-2 to make models for other languages},
author={Wietse de Vries and Malvina Nissim},
year={2020},
eprint={2012.05628},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
GroNLP/gpt2-small-dutch | 2021-05-21T09:55:47.000Z | [
"pytorch",
"tf",
"jax",
"gpt2",
"lm-head",
"causal-lm",
"nl",
"arxiv:2012.05628",
"transformers",
"adaption",
"recycled",
"gpt2-small",
"text-generation",
"pipeline_tag:text-generation"
] | text-generation | [
".gitattributes",
"README.md",
"config.json",
"flax_model.msgpack",
"merges.txt",
"pytorch_model.bin",
"special_tokens_map.json",
"tf_model.h5",
"tokenizer_config.json",
"vocab.json"
] | GroNLP | 513 | transformers | ---
language: nl
tags:
- adaption
- recycled
- gpt2-small
pipeline_tag: text-generation
---
# GPT-2 recycled for Dutch (small)
[Wietse de Vries](https://www.semanticscholar.org/author/Wietse-de-Vries/144611157) •
[Malvina Nissim](https://www.semanticscholar.org/author/M.-Nissim/2742475)
## Model description
This model is based on the small OpenAI GPT-2 ([`gpt2`](https://huggingface.co/gpt2)) model.
For details, check out our paper on [arXiv](https://arxiv.org/abs/2012.05628) and the code on [Github](https://github.com/wietsedv/gpt2-recycle).
## Related models
### Dutch
- [`gpt2-small-dutch-embeddings`](https://huggingface.co/GroNLP/gpt2-small-dutch-embeddings): Small model size with only retrained lexical embeddings.
- [`gpt2-small-dutch`](https://huggingface.co/GroNLP/gpt2-small-dutch): Small model size with retrained lexical embeddings and additional fine-tuning of the full model. (**Recommended**)
- [`gpt2-medium-dutch-embeddings`](https://huggingface.co/GroNLP/gpt2-medium-dutch-embeddings): Medium model size with only retrained lexical embeddings.
### Italian
- [`gpt2-small-italian-embeddings`](https://huggingface.co/GroNLP/gpt2-small-italian-embeddings): Small model size with only retrained lexical embeddings.
- [`gpt2-small-italian`](https://huggingface.co/GroNLP/gpt2-small-italian): Small model size with retrained lexical embeddings and additional fine-tuning of the full model. (**Recommended**)
- [`gpt2-medium-italian-embeddings`](https://huggingface.co/GroNLP/gpt2-medium-italian-embeddings): Medium model size with only retrained lexical embeddings.
## How to use
```python
from transformers import pipeline
pipe = pipeline("text-generation", model="GroNLP/gpt2-small-dutch")
```
```python
from transformers import AutoTokenizer, AutoModel, TFAutoModel
tokenizer = AutoTokenizer.from_pretrained("GroNLP/gpt2-small-dutch")
model = AutoModel.from_pretrained("GroNLP/gpt2-small-dutch") # PyTorch
model = TFAutoModel.from_pretrained("GroNLP/gpt2-small-dutch") # Tensorflow
```
## BibTeX entry
```bibtex
@misc{devries2020good,
title={As good as new. How to successfully recycle English GPT-2 to make models for other languages},
author={Wietse de Vries and Malvina Nissim},
year={2020},
eprint={2012.05628},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
GroNLP/gpt2-small-italian-embeddings | 2021-05-21T09:57:57.000Z | [
"pytorch",
"tf",
"jax",
"gpt2",
"lm-head",
"causal-lm",
"it",
"arxiv:2012.05628",
"transformers",
"adaption",
"recycled",
"gpt2-small",
"text-generation",
"pipeline_tag:text-generation"
] | text-generation | [
".gitattributes",
"README.md",
"config.json",
"flax_model.msgpack",
"merges.txt",
"pytorch_model.bin",
"special_tokens_map.json",
"tf_model.h5",
"tokenizer_config.json",
"vocab.json"
] | GroNLP | 36 | transformers | ---
language: it
tags:
- adaption
- recycled
- gpt2-small
pipeline_tag: text-generation
---
# GPT-2 recycled for Italian (small, adapted lexical embeddings)
[Wietse de Vries](https://www.semanticscholar.org/author/Wietse-de-Vries/144611157) •
[Malvina Nissim](https://www.semanticscholar.org/author/M.-Nissim/2742475)
## Model description
This model is based on the small OpenAI GPT-2 ([`gpt2`](https://huggingface.co/gpt2)) model.
The Transformer layer weights in this model are identical to those of the original English model, but the lexical layer has been retrained for an Italian vocabulary.
For details, check out our paper on [arXiv](https://arxiv.org/abs/2012.05628) and the code on [Github](https://github.com/wietsedv/gpt2-recycle).
## Related models
### Dutch
- [`gpt2-small-dutch-embeddings`](https://huggingface.co/GroNLP/gpt2-small-dutch-embeddings): Small model size with only retrained lexical embeddings.
- [`gpt2-small-dutch`](https://huggingface.co/GroNLP/gpt2-small-dutch): Small model size with retrained lexical embeddings and additional fine-tuning of the full model. (**Recommended**)
- [`gpt2-medium-dutch-embeddings`](https://huggingface.co/GroNLP/gpt2-medium-dutch-embeddings): Medium model size with only retrained lexical embeddings.
### Italian
- [`gpt2-small-italian-embeddings`](https://huggingface.co/GroNLP/gpt2-small-italian-embeddings): Small model size with only retrained lexical embeddings.
- [`gpt2-small-italian`](https://huggingface.co/GroNLP/gpt2-small-italian): Small model size with retrained lexical embeddings and additional fine-tuning of the full model. (**Recommended**)
- [`gpt2-medium-italian-embeddings`](https://huggingface.co/GroNLP/gpt2-medium-italian-embeddings): Medium model size with only retrained lexical embeddings.
## How to use
```python
from transformers import pipeline
pipe = pipeline("text-generation", model="GroNLP/gpt2-small-italian-embeddings")
```
```python
from transformers import AutoTokenizer, AutoModel, TFAutoModel
tokenizer = AutoTokenizer.from_pretrained("GroNLP/gpt2-small-italian-embeddings")
model = AutoModel.from_pretrained("GroNLP/gpt2-small-italian-embeddings") # PyTorch
model = TFAutoModel.from_pretrained("GroNLP/gpt2-small-italian-embeddings") # Tensorflow
```
## BibTeX entry
```bibtex
@misc{devries2020good,
title={As good as new. How to successfully recycle English GPT-2 to make models for other languages},
author={Wietse de Vries and Malvina Nissim},
year={2020},
eprint={2012.05628},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
GroNLP/gpt2-small-italian | 2021-05-21T09:58:53.000Z | [
"pytorch",
"tf",
"jax",
"gpt2",
"lm-head",
"causal-lm",
"it",
"arxiv:2012.05628",
"transformers",
"adaption",
"recycled",
"gpt2-small",
"text-generation",
"pipeline_tag:text-generation"
] | text-generation | [
".gitattributes",
"README.md",
"config.json",
"flax_model.msgpack",
"merges.txt",
"pytorch_model.bin",
"special_tokens_map.json",
"tf_model.h5",
"tokenizer_config.json",
"vocab.json"
] | GroNLP | 725 | transformers | ---
language: it
tags:
- adaption
- recycled
- gpt2-small
pipeline_tag: text-generation
---
# GPT-2 recycled for Italian (small)
[Wietse de Vries](https://www.semanticscholar.org/author/Wietse-de-Vries/144611157) •
[Malvina Nissim](https://www.semanticscholar.org/author/M.-Nissim/2742475)
## Model description
This model is based on the small OpenAI GPT-2 ([`gpt2`](https://huggingface.co/gpt2)) model.
For details, check out our paper on [arXiv](https://arxiv.org/abs/2012.05628) and the code on [Github](https://github.com/wietsedv/gpt2-recycle).
## Related models
### Dutch
- [`gpt2-small-dutch-embeddings`](https://huggingface.co/GroNLP/gpt2-small-dutch-embeddings): Small model size with only retrained lexical embeddings.
- [`gpt2-small-dutch`](https://huggingface.co/GroNLP/gpt2-small-dutch): Small model size with retrained lexical embeddings and additional fine-tuning of the full model. (**Recommended**)
- [`gpt2-medium-dutch-embeddings`](https://huggingface.co/GroNLP/gpt2-medium-dutch-embeddings): Medium model size with only retrained lexical embeddings.
### Italian
- [`gpt2-small-italian-embeddings`](https://huggingface.co/GroNLP/gpt2-small-italian-embeddings): Small model size with only retrained lexical embeddings.
- [`gpt2-small-italian`](https://huggingface.co/GroNLP/gpt2-small-italian): Small model size with retrained lexical embeddings and additional fine-tuning of the full model. (**Recommended**)
- [`gpt2-medium-italian-embeddings`](https://huggingface.co/GroNLP/gpt2-medium-italian-embeddings): Medium model size with only retrained lexical embeddings.
## How to use
```python
from transformers import pipeline
pipe = pipeline("text-generation", model="GroNLP/gpt2-small-italian")
```
```python
from transformers import AutoTokenizer, AutoModel, TFAutoModel
tokenizer = AutoTokenizer.from_pretrained("GroNLP/gpt2-small-italian")
model = AutoModel.from_pretrained("GroNLP/gpt2-small-italian") # PyTorch
model = TFAutoModel.from_pretrained("GroNLP/gpt2-small-italian") # Tensorflow
```
## BibTeX entry
```bibtex
@misc{devries2020good,
title={As good as new. How to successfully recycle English GPT-2 to make models for other languages},
author={Wietse de Vries and Malvina Nissim},
year={2020},
eprint={2012.05628},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
Grossmend/rudialogpt3_medium_based_on_gpt2 | 2021-05-21T10:03:46.000Z | [
"pytorch",
"jax",
"gpt2",
"lm-head",
"causal-lm",
"ru",
"transformers",
"convAI",
"conversational",
"text-generation"
] | conversational | [
".gitattributes",
"README.md",
"config.json",
"flax_model.msgpack",
"merges.txt",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"vocab.json"
] | Grossmend | 1,801 | transformers | ---
language:
- ru
thumbnail:
tags:
- convAI
- conversational
---
DialoGPT for the Russian language
Article on Habr: https://habr.com/ru/company/icl_services/blog/548244/
#### How to use
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("Grossmend/rudialogpt3_medium_based_on_gpt2")
model = AutoModelForCausalLM.from_pretrained("Grossmend/rudialogpt3_medium_based_on_gpt2")
def get_length_param(text: str) -> str:
    tokens_count = len(tokenizer.encode(text))
    if tokens_count <= 15:
        len_param = '1'
    elif tokens_count <= 50:
        len_param = '2'
    elif tokens_count <= 256:
        len_param = '3'
    else:
        len_param = '-'
    return len_param

for step in range(5):
    input_user = input("===> User:")
    # encode the new user input, add parameters and return a tensor in PyTorch
    new_user_input_ids = tokenizer.encode(f"|0|{get_length_param(input_user)}|" + input_user + tokenizer.eos_token + "|1|1|", return_tensors="pt")
    # append the new user input tokens to the chat history
    bot_input_ids = torch.cat([chat_history_ids, new_user_input_ids], dim=-1) if step > 0 else new_user_input_ids
    # generate a response
    chat_history_ids = model.generate(
        bot_input_ids,
        num_return_sequences=1,
        max_length=512,
        no_repeat_ngram_size=3,
        do_sample=True,
        top_k=50,
        top_p=0.9,
        temperature=0.6,
        mask_token_id=tokenizer.mask_token_id,
        eos_token_id=tokenizer.eos_token_id,
        unk_token_id=tokenizer.unk_token_id,
        pad_token_id=tokenizer.pad_token_id,
        device='cpu',
    )
    # pretty print the last output tokens from the bot
    print(f"===> RuDialoGPT: {tokenizer.decode(chat_history_ids[:, bot_input_ids.shape[-1]:][0], skip_special_tokens=True)}")
``` |
Guo/bert | 2020-12-22T12:02:06.000Z | [] | [
".gitattributes"
] | Guo | 0 | |||
GuoLiyong/snowfall_model_zoo | 2021-05-27T09:26:31.000Z | [] | [
".gitattributes",
"README.md",
"exp/kamo-naoyuki/librispeech_asr_train_asr_conformer6_n_fft512_hop_length256_raw_en_bpe5000_scheduler_confwarmup_steps40000_optim_conflr0.0025_sp_valid.acc.ave/config.yaml",
"exp/kamo-naoyuki/librispeech_asr_train_asr_conformer6_n_fft512_hop_length256_raw_en_bpe5000_scheduler_confwarmup_steps40000_optim_conflr0.0025_sp_valid.acc.ave/valid.acc.ave_10best.pth",
"exp/kamo-naoyuki/librispeech_asr_train_asr_conformer6_n_fft512_hop_length256_raw_en_bpe5000_scheduler_confwarmup_steps40000_optim_conflr0.0025_sp_valid.acc.ave/en_token_list/bpe_unigram5000/bpe.model",
"exp/kamo-naoyuki/librispeech_asr_train_asr_conformer6_n_fft512_hop_length256_raw_en_bpe5000_scheduler_confwarmup_steps40000_optim_conflr0.0025_sp_valid.acc.ave/lm/config.yaml",
"exp/kamo-naoyuki/librispeech_asr_train_asr_conformer6_n_fft512_hop_length256_raw_en_bpe5000_scheduler_confwarmup_steps40000_optim_conflr0.0025_sp_valid.acc.ave/lm/valid.loss.ave_10best.pth"
] | GuoLiyong | 0 | The original link of these models is:
https://zenodo.org/record/4604066#.YKtNrqgzZPY
which is accessible via ESPnet utils.
They are ported to this repo for users who don't have ESPnet dependencies.
|
||
Guscode/DKbert-hatespeech-detection | 2021-05-26T14:24:48.000Z | [
"pytorch",
"tf",
"bert",
"text-classification",
"Danish",
"dataset:DKHate - OffensEval2020",
"transformers",
"Hatespeech",
"BERT",
"license:mit"
] | text-classification | [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"special_tokens_map.json",
"tf_model.h5",
"tokenizer_config.json",
"training_args.bin",
"vocab.txt"
] | Guscode | 73 | transformers | ---
language:
- Danish
tags:
- Hatespeech
- Danish
- BERT
license: mit
datasets:
- DKHate - OffensEval2020
Classes:
- Hateful
- Not Hateful
---
# DKbert-hatespeech-classification
Use this model to detect hatespeech in Danish. For details, a guide, and a command-line tool, see [DK hate github](https://github.com/Guscode/DKbert-hatespeech-detection).
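For a quick try-out without the command-line tool, the checkpoint should also work in a plain `text-classification` pipeline. This is a minimal sketch; the Danish example sentence is only an illustration, and the label names are taken from the model config:

```python
from transformers import pipeline

# Minimal sketch: classify a Danish sentence with the fine-tuned checkpoint.
classifier = pipeline(
    "text-classification",
    model="Guscode/DKbert-hatespeech-detection",
)

# Illustrative sentence: "I hope you have a good day."
print(classifier("Jeg håber, du får en god dag."))
```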
## Training data
Training data is from OffensEval2020 which can be found [here]( https://figshare.com/articles/dataset/Danish_Hate_Speech_Abusive_Language_data/12220805)
## Performance
The model achieves a macro F1-score of 0.78
Precision hateful: 0.77
Recall hateful: 0.49
See more on [DK hate github](https://github.com/Guscode/DKbert-hatespeech-detection)
## Training procedure
- [BOTXO Nordic Bert](https://huggingface.co/DJSammy/bert-base-danish-uncased_BotXO,ai)
- Learning rate: 1e-5,
- Batch size: 16
- Max sequence length: 128
## Project information
This model was made in collaboration between [Johan Horsmans](https://github.com/JohanHorsmans) and [Gustav Aarup Lauridsen](https://github.com/Guscode) for their Cultural Data Science Exam.
|
Guscode/trying | 2021-05-07T09:49:56.000Z | [] | [
".gitattributes"
] | Guscode | 0 | |||
HHousen/distil-led-large-cnn-16384 | 2021-02-02T00:58:07.000Z | [
"pytorch",
"led",
"seq2seq",
"en",
"dataset:cnn_dailymail",
"transformers",
"license:apache-2.0",
"text2text-generation"
] | text2text-generation | [
".gitattributes",
"README.md",
"config.json",
"merges.txt",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"vocab.json"
] | HHousen | 1,488 | transformers | ---
language: en
datasets:
- cnn_dailymail
license: apache-2.0
---
## DistilLED Large CNN 16384
*distil-led-large-cnn-16384* was initialized from [sshleifer/distilbart-cnn-12-6](https://huggingface.co/sshleifer/distilbart-cnn-12-6), in a fashion similar to [allenai/led-large-16384](https://huggingface.co/allenai/led-large-16384).
To be able to process 16K tokens, *sshleifer/distilbart-cnn-12-6*'s position embedding matrix was simply copied 16 times.
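Below is a minimal long-document summarization sketch with this checkpoint; the input placeholder and the generation settings (global attention on the first token, beam search, a 256-token summary cap) are illustrative choices rather than values prescribed by the model:

```python
import torch
from transformers import LEDForConditionalGeneration, LEDTokenizer

tokenizer = LEDTokenizer.from_pretrained("HHousen/distil-led-large-cnn-16384")
model = LEDForConditionalGeneration.from_pretrained("HHousen/distil-led-large-cnn-16384")

article = "..."  # placeholder: up to ~16K tokens of input text

inputs = tokenizer(article, return_tensors="pt", truncation=True, max_length=16384)

# LED expects global attention on at least the first token for summarization.
global_attention_mask = torch.zeros_like(inputs["input_ids"])
global_attention_mask[:, 0] = 1

summary_ids = model.generate(
    inputs["input_ids"],
    attention_mask=inputs["attention_mask"],
    global_attention_mask=global_attention_mask,
    max_length=256,
    num_beams=4,
)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```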
This checkpoint should be loaded into `LEDForConditionalGeneration.from_pretrained`. See the [LED documentation](https://huggingface.co/transformers/model_doc/led.html) for more information. |
HJK/PickupLineGenerator | 2021-05-21T10:05:21.000Z | [
"pytorch",
"jax",
"gpt2",
"lm-head",
"causal-lm",
"transformers",
"text-generation"
] | text-generation | [
".gitattributes",
"README.md",
"config.json",
"flax_model.msgpack",
"merges.txt",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"training_args.bin",
"vocab.json"
] | HJK | 19 | transformers | basically, it makes pickup lines
https://huggingface.co/gpt2
|
HScomcom/gpt2-MyLittlePony | 2021-05-21T10:09:36.000Z | [
"pytorch",
"jax",
"gpt2",
"lm-head",
"causal-lm",
"transformers",
"text-generation"
] | text-generation | [
".gitattributes",
"README.md",
"config.json",
"flax_model.msgpack",
"merges.txt",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"training_args.bin",
"vocab.json"
] | HScomcom | 20 | transformers | The model that generates My Little Pony scripts.
Fine tuning data: [Kaggle](https://www.kaggle.com/liury123/my-little-pony-transcript?select=clean_dialog.csv)
API page: [Ainize](https://ainize.ai/fpem123/GPT2-MyLittlePony)
Demo page: [End point](https://master-gpt2-my-little-pony-fpem123.endpoint.ainize.ai/)
### Model information
Base model: gpt-2 large
Epoch: 30
Train runtime: 4943.9641 secs
Loss: 0.0291
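Besides the hosted demo, the checkpoint can be sampled locally with a plain text-generation pipeline. A minimal sketch; the prompt and sampling settings are only illustrative:

```python
from transformers import pipeline

# Minimal sketch: sample script-style text from the fine-tuned GPT-2.
generator = pipeline("text-generation", model="HScomcom/gpt2-MyLittlePony")

print(generator(
    "Twilight Sparkle:",  # illustrative prompt; the exact transcript format is an assumption
    max_length=100,
    do_sample=True,
    top_p=0.95,
))
```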
### ===Teachable NLP===
Training a GPT-2 model normally requires writing code and GPU resources, but here you can easily fine-tune the model and get an API to use it for free.
Teachable NLP: [Teachable NLP](https://ainize.ai/teachable-nlp)
Tutorial: [Tutorial](https://forum.ainetwork.ai/t/teachable-nlp-how-to-use-teachable-nlp/65?utm_source=community&utm_medium=huggingface&utm_campaign=model&utm_content=teachable%20nlp)
|
HScomcom/gpt2-fairytales | 2021-05-21T10:16:43.000Z | [
"pytorch",
"jax",
"gpt2",
"lm-head",
"causal-lm",
"transformers",
"text-generation"
] | text-generation | [
".gitattributes",
"README.md",
"config.json",
"flax_model.msgpack",
"merges.txt",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"training_args.bin",
"vocab.json"
] | HScomcom | 27 | transformers | ### Model information
Fine tuning data: https://www.kaggle.com/cuddlefish/fairy-tales
License: CC0: Public Domain
Base model: gpt-2 large
Epoch: 30
Train runtime: 17861.6048 secs
Loss: 0.0412
API page: [Ainize](https://ainize.ai/fpem123/GPT2-FairyTales?branch=master)
Demo page: [End-point](https://master-gpt2-fairy-tales-fpem123.endpoint.ainize.ai/)
### ===Teachable NLP=== ###
Training a GPT-2 model normally requires writing code and GPU resources, but here you can easily fine-tune the model and get an API to use it for free.
Teachable NLP: [Teachable NLP](https://ainize.ai/teachable-nlp)
Tutorial: [Tutorial](https://forum.ainetwork.ai/t/teachable-nlp-how-to-use-teachable-nlp/65?utm_source=community&utm_medium=huggingface&utm_campaign=model&utm_content=teachable%20nlp)
And my other fairytale model: [showcase](https://forum.ainetwork.ai/t/teachable-nlp-gpt-2-fairy-tales/68) |
HScomcom/gpt2-friends | 2021-05-21T10:22:41.000Z | [
"pytorch",
"jax",
"gpt2",
"lm-head",
"causal-lm",
"transformers",
"text-generation"
] | text-generation | [
".gitattributes",
"config.json",
"flax_model.msgpack",
"merges.txt",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"training_args.bin",
"vocab.json"
] | HScomcom | 8 | transformers | |
HScomcom/gpt2-game-of-thrones | 2021-05-21T10:28:34.000Z | [
"pytorch",
"jax",
"gpt2",
"lm-head",
"causal-lm",
"transformers",
"text-generation"
] | text-generation | [
".gitattributes",
"config.json",
"flax_model.msgpack",
"merges.txt",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"training_args.bin",
"vocab.json"
] | HScomcom | 10 | transformers | |
HScomcom/gpt2-lovecraft | 2021-05-21T10:38:11.000Z | [
"pytorch",
"jax",
"gpt2",
"lm-head",
"causal-lm",
"transformers",
"text-generation"
] | text-generation | [
".gitattributes",
"README.md",
"config.json",
"flax_model.msgpack",
"merges.txt",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"training_args.bin",
"vocab.json"
] | HScomcom | 23 | transformers | ### Model information
Fine tuning data: https://www.kaggle.com/bennijesus/lovecraft-fiction
License: CC0: Public Domain
Base model: gpt-2 large
Epoch: 30
Train runtime: 10307.3488 secs
Loss: 0.0292
API page: [Ainize](https://ainize.ai/fpem123/GPT2-LoveCraft?branch=master)
Demo page: [End-point](https://master-gpt2-love-craft-fpem123.endpoint.ainize.ai/)
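For local use, a minimal sketch with `generate` and sampling is shown below; the prompt and decoding settings are only illustrative:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("HScomcom/gpt2-lovecraft")
model = AutoModelForCausalLM.from_pretrained("HScomcom/gpt2-lovecraft")

# Illustrative prompt; the decoding settings are reasonable defaults, not tuned values.
input_ids = tokenizer.encode("The old house on the hill", return_tensors="pt")
output = model.generate(
    input_ids,
    max_length=120,
    do_sample=True,
    top_k=50,
    top_p=0.95,
    no_repeat_ngram_size=3,
)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```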
### ===Teachable NLP===
Training a GPT-2 model normally requires writing code and GPU resources, but here you can easily fine-tune the model and get an API to use it for free.
Teachable NLP: [Teachable NLP](https://ainize.ai/teachable-nlp)
Tutorial: [Tutorial](https://forum.ainetwork.ai/t/teachable-nlp-how-to-use-teachable-nlp/65?utm_source=community&utm_medium=huggingface&utm_campaign=model&utm_content=teachable%20nlp)
And my other lovecraft model: [showcase](https://forum.ainetwork.ai/t/teachable-nlp-gpt-2-lovecraft/71) |
HScomcom/gpt2-theoffice | 2021-05-21T10:44:51.000Z | [
"pytorch",
"jax",
"gpt2",
"lm-head",
"causal-lm",
"transformers",
"text-generation"
] | text-generation | [
".gitattributes",
"config.json",
"flax_model.msgpack",
"merges.txt",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"training_args.bin",
"vocab.json"
] | HScomcom | 13 | transformers | |
Hadron/DialoGPT-medium-nino | 2021-06-04T20:30:21.000Z | [
"pytorch",
"gpt2",
"lm-head",
"causal-lm",
"transformers",
"conversational",
"text-generation"
] | conversational | [
".gitattributes",
"README.md",
"READfME.md",
"config.json",
"merges.txt",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer.json",
"tokenizer_config.json",
"vocab.json"
] | Hadron | 33 | transformers | ---
tags:
- conversational
---
# My Awesome Model |
Han11/test | 2021-04-13T10:30:19.000Z | [] | [
".gitattributes"
] | Han11 | 0 | |||
HarjyotSahni/personal | 2021-02-23T04:50:15.000Z | [] | [
".gitattributes"
] | HarjyotSahni | 0 | |||
Harshal/transformers | 2021-05-18T17:52:31.000Z | [] | [
".gitattributes"
] | Harshal | 0 | |||
Harveenchadha/testmodel | 2021-04-24T05:57:54.000Z | [] | [
".gitattributes",
"README.md"
] | Harveenchadha | 0 | |||
Hasan/Test | 2021-03-12T16:10:54.000Z | [] | [
".gitattributes"
] | Hasan | 0 | |||
Hate-speech-CNERG/bert-base-uncased-hatexplain-rationale-two | 2021-05-25T09:53:57.000Z | [
"pytorch",
"bert",
"text-classification",
"en",
"dataset:hatexplain",
"transformers",
"license:apache-2.0"
] | text-classification | [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"vocab.txt"
] | Hate-speech-CNERG | 263 | transformers | ---
language: en
license: apache-2.0
datasets:
- hatexplain
---
The model is used for classifying a text as Abusive (Hatespeech and Offensive) or Normal. The model is trained using data from Gab and Twitter, and Human Rationales were included as part of the training data to boost the performance. The model also has a rationale predictor head that can predict the rationales given an abusive sentence.
The dataset and models are available here: https://github.com/punyajoy/HateXplain
**For more details about our paper**
Binny Mathew, Punyajoy Saha, Seid Muhie Yimam, Chris Biemann, Pawan Goyal, and Animesh Mukherjee. "[HateXplain: A Benchmark Dataset for Explainable Hate Speech Detection](https://arxiv.org/abs/2012.10289)". Accepted at AAAI 2021.
***Please cite our paper in any published work that uses any of these resources.***
~~~
@article{mathew2020hatexplain,
title={HateXplain: A Benchmark Dataset for Explainable Hate Speech Detection},
author={Mathew, Binny and Saha, Punyajoy and Yimam, Seid Muhie and Biemann, Chris and Goyal, Pawan and Mukherjee, Animesh},
journal={arXiv preprint arXiv:2012.10289},
year={2020}
}
~~~
|
Hate-speech-CNERG/bert-base-uncased-hatexplain | 2021-05-25T09:53:05.000Z | [
"pytorch",
"jax",
"bert",
"text-classification",
"en",
"dataset:hatexplain",
"transformers",
"license:apache-2.0"
] | text-classification | [
".gitattributes",
"README.md",
"config.json",
"flax_model.msgpack",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"vocab.txt"
] | Hate-speech-CNERG | 564 | transformers | ---
language: en
license: apache-2.0
datasets:
- hatexplain
---
The model is used for classifying a text as **Hatespeech**, **Offensive**, or **Normal**. The model is trained using data from Gab and Twitter and *Human Rationales* were included as part of the training data to boost the performance.
The dataset and models are available here: https://github.com/punyajoy/HateXplain
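A minimal classification sketch, assuming the checkpoint loads with the standard sequence-classification head of recent `transformers` versions (the example sentence is illustrative; check `model.config.id2label` for the exact label order):

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_name = "Hate-speech-CNERG/bert-base-uncased-hatexplain"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)

# Score one sentence and map each class label to its probability.
inputs = tokenizer("You are a wonderful person", return_tensors="pt")
with torch.no_grad():
    probs = torch.softmax(model(**inputs).logits, dim=-1)[0]
print({model.config.id2label[i]: round(p.item(), 3) for i, p in enumerate(probs)})
```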
**For more details about our paper**
Binny Mathew, Punyajoy Saha, Seid Muhie Yimam, Chris Biemann, Pawan Goyal, and Animesh Mukherjee. "[HateXplain: A Benchmark Dataset for Explainable Hate Speech Detection](https://arxiv.org/abs/2012.10289)". Accepted at AAAI 2021.
***Please cite our paper in any published work that uses any of these resources.***
~~~
@article{mathew2020hatexplain,
title={HateXplain: A Benchmark Dataset for Explainable Hate Speech Detection},
author={Mathew, Binny and Saha, Punyajoy and Yimam, Seid Muhie and Biemann, Chris and Goyal, Pawan and Mukherjee, Animesh},
journal={arXiv preprint arXiv:2012.10289},
year={2020}
}
~~~
|
Hate-speech-CNERG/dehatebert-mono-arabic | 2021-05-18T20:27:50.000Z | [
"pytorch",
"jax",
"bert",
"text-classification",
"arxiv:2004.06465",
"transformers"
] | text-classification | [
".gitattributes",
"README.md",
"config.json",
"flax_model.msgpack",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"vocab.txt"
] | Hate-speech-CNERG | 70 | transformers | This model is used for detecting **hatespeech** in the **Arabic language**. The mono in the name refers to the monolingual setting, where the model is trained using only Arabic language data. It is fine-tuned on the multilingual BERT model.
The model is trained with different learning rates and the best validation score achieved is 0.877609 for a learning rate of 2e-5. Training code can be found at this [url](https://github.com/punyajoy/DE-LIMIT)
### For more details about our paper
Sai Saketh Aluru, Binny Mathew, Punyajoy Saha and Animesh Mukherjee. "[Deep Learning Models for Multilingual Hate Speech Detection](https://arxiv.org/abs/2004.06465)". Accepted at ECML-PKDD 2020.
***Please cite our paper in any published work that uses any of these resources.***
~~~
@article{aluru2020deep,
title={Deep Learning Models for Multilingual Hate Speech Detection},
author={Aluru, Sai Saket and Mathew, Binny and Saha, Punyajoy and Mukherjee, Animesh},
journal={arXiv preprint arXiv:2004.06465},
year={2020}
}
~~~
|
Hate-speech-CNERG/dehatebert-mono-english | 2021-05-18T20:29:09.000Z | [
"pytorch",
"jax",
"bert",
"text-classification",
"arxiv:2004.06465",
"transformers"
] | text-classification | [
".gitattributes",
"README.md",
"config.json",
"flax_model.msgpack",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"vocab.txt"
] | Hate-speech-CNERG | 1,813 | transformers | This model is used for detecting **hatespeech** in the **English language**. The mono in the name refers to the monolingual setting, where the model is trained using only English language data. It is fine-tuned on the multilingual BERT model.
The model is trained with different learning rates and the best validation score achieved is 0.726030 for a learning rate of 2e-5. Training code can be found here https://github.com/punyajoy/DE-LIMIT
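A quick way to try the classifier locally is the `transformers` text-classification pipeline (a minimal sketch; the example sentences are illustrative, and label names should be read from the model config rather than assumed):

```python
from transformers import pipeline

# Load this repository's fine-tuned checkpoint as a text-classification pipeline.
classifier = pipeline("text-classification", model="Hate-speech-CNERG/dehatebert-mono-english")

# Returns the predicted label and its score for each input sentence.
print(classifier(["You are a wonderful person", "I hate you"]))
```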
### For more details about our paper
Sai Saketh Aluru, Binny Mathew, Punyajoy Saha and Animesh Mukherjee. "[Deep Learning Models for Multilingual Hate Speech Detection](https://arxiv.org/abs/2004.06465)". Accepted at ECML-PKDD 2020.
***Please cite our paper in any published work that uses any of these resources.***
~~~
@article{aluru2020deep,
title={Deep Learning Models for Multilingual Hate Speech Detection},
author={Aluru, Sai Saket and Mathew, Binny and Saha, Punyajoy and Mukherjee, Animesh},
journal={arXiv preprint arXiv:2004.06465},
year={2020}
}
~~~
|
Hate-speech-CNERG/dehatebert-mono-french | 2021-05-18T20:30:33.000Z | [
"pytorch",
"jax",
"bert",
"text-classification",
"arxiv:2004.06465",
"transformers"
] | text-classification | [
".gitattributes",
"README.md",
"config.json",
"flax_model.msgpack",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"vocab.txt"
] | Hate-speech-CNERG | 75 | transformers | This model is used for detecting **hatespeech** in the **French language**. The mono in the name refers to the monolingual setting, where the model is trained using only French language data. It is fine-tuned on the multilingual BERT model.
The model is trained with different learning rates and the best validation score achieved is 0.692094 for a learning rate of 3e-5. Training code can be found at this [url](https://github.com/punyajoy/DE-LIMIT)
### For more details about our paper
Sai Saketh Aluru, Binny Mathew, Punyajoy Saha and Animesh Mukherjee. "[Deep Learning Models for Multilingual Hate Speech Detection](https://arxiv.org/abs/2004.06465)". Accepted at ECML-PKDD 2020.
***Please cite our paper in any published work that uses any of these resources.***
~~~
@article{aluru2020deep,
title={Deep Learning Models for Multilingual Hate Speech Detection},
author={Aluru, Sai Saket and Mathew, Binny and Saha, Punyajoy and Mukherjee, Animesh},
journal={arXiv preprint arXiv:2004.06465},
year={2020}
}
~~~
|
Hate-speech-CNERG/dehatebert-mono-german | 2021-05-18T20:31:46.000Z | [
"pytorch",
"jax",
"bert",
"text-classification",
"arxiv:2004.06465",
"transformers"
] | text-classification | [
".gitattributes",
"README.md",
"config.json",
"flax_model.msgpack",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"vocab.txt"
] | Hate-speech-CNERG | 39 | transformers | This model is used for detecting **hatespeech** in the **German language**. The mono in the name refers to the monolingual setting, where the model is trained using only German language data. It is fine-tuned on the multilingual BERT model.
The model is trained with different learning rates and the best validation score achieved is 0.649794 for a learning rate of 3e-5. Training code can be found at this [url](https://github.com/punyajoy/DE-LIMIT)
### For more details about our paper
Sai Saketh Aluru, Binny Mathew, Punyajoy Saha and Animesh Mukherjee. "[Deep Learning Models for Multilingual Hate Speech Detection](https://arxiv.org/abs/2004.06465)". Accepted at ECML-PKDD 2020.
***Please cite our paper in any published work that uses any of these resources.***
~~~
@article{aluru2020deep,
title={Deep Learning Models for Multilingual Hate Speech Detection},
author={Aluru, Sai Saket and Mathew, Binny and Saha, Punyajoy and Mukherjee, Animesh},
journal={arXiv preprint arXiv:2004.06465},
year={2020}
}
~~~
|
Hate-speech-CNERG/dehatebert-mono-indonesian | 2021-05-18T20:33:24.000Z | [
"pytorch",
"jax",
"bert",
"text-classification",
"arxiv:2004.06465",
"transformers"
] | text-classification | [
".gitattributes",
"README.md",
"config.json",
"flax_model.msgpack",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"vocab.txt"
] | Hate-speech-CNERG | 135 | transformers | This model is used for detecting **hatespeech** in the **Indonesian language**. The mono in the name refers to the monolingual setting, where the model is trained using only Indonesian language data. It is fine-tuned on the multilingual BERT model.
The model is trained with different learning rates and the best validation score achieved is 0.844494 for a learning rate of 2e-5. Training code can be found at this [url](https://github.com/punyajoy/DE-LIMIT)
### For more details about our paper
Sai Saketh Aluru, Binny Mathew, Punyajoy Saha and Animesh Mukherjee. "[Deep Learning Models for Multilingual Hate Speech Detection](https://arxiv.org/abs/2004.06465)". Accepted at ECML-PKDD 2020.
***Please cite our paper in any published work that uses any of these resources.***
~~~
@article{aluru2020deep,
title={Deep Learning Models for Multilingual Hate Speech Detection},
author={Aluru, Sai Saket and Mathew, Binny and Saha, Punyajoy and Mukherjee, Animesh},
journal={arXiv preprint arXiv:2004.06465},
year={2020}
}
~~~
|
Hate-speech-CNERG/dehatebert-mono-italian | 2021-05-18T20:34:46.000Z | [
"pytorch",
"jax",
"bert",
"text-classification",
"arxiv:2004.06465",
"transformers"
] | text-classification | [
".gitattributes",
"README.md",
"config.json",
"flax_model.msgpack",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"vocab.txt"
] | Hate-speech-CNERG | 40 | transformers | This model is used for detecting **hatespeech** in the **Italian language**. The mono in the name refers to the monolingual setting, where the model is trained using only Italian language data. It is fine-tuned on the multilingual BERT model.
The model is trained with different learning rates and the best validation score achieved is 0.837288 for a learning rate of 3e-5. Training code can be found at this [url](https://github.com/punyajoy/DE-LIMIT)
### For more details about our paper
Sai Saketh Aluru, Binny Mathew, Punyajoy Saha and Animesh Mukherjee. "[Deep Learning Models for Multilingual Hate Speech Detection](https://arxiv.org/abs/2004.06465)". Accepted at ECML-PKDD 2020.
***Please cite our paper in any published work that uses any of these resources.***
~~~
@article{aluru2020deep,
title={Deep Learning Models for Multilingual Hate Speech Detection},
author={Aluru, Sai Saket and Mathew, Binny and Saha, Punyajoy and Mukherjee, Animesh},
journal={arXiv preprint arXiv:2004.06465},
year={2020}
}
~~~
|
Hate-speech-CNERG/dehatebert-mono-polish | 2021-05-18T20:36:13.000Z | [
"pytorch",
"jax",
"bert",
"text-classification",
"arxiv:2004.06465",
"transformers"
] | text-classification | [
".gitattributes",
"README.md",
"config.json",
"flax_model.msgpack",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"vocab.txt"
] | Hate-speech-CNERG | 18 | transformers | This model is used for detecting **hatespeech** in the **Polish language**. The mono in the name refers to the monolingual setting, where the model is trained using only Polish language data. It is fine-tuned on the multilingual BERT model.
The model is trained with different learning rates and the best validation score achieved is 0.723254 for a learning rate of 2e-5. Training code can be found at this [url](https://github.com/punyajoy/DE-LIMIT)
### For more details about our paper
Sai Saketh Aluru, Binny Mathew, Punyajoy Saha and Animesh Mukherjee. "[Deep Learning Models for Multilingual Hate Speech Detection](https://arxiv.org/abs/2004.06465)". Accepted at ECML-PKDD 2020.
***Please cite our paper in any published work that uses any of these resources.***
~~~
@article{aluru2020deep,
title={Deep Learning Models for Multilingual Hate Speech Detection},
author={Aluru, Sai Saket and Mathew, Binny and Saha, Punyajoy and Mukherjee, Animesh},
journal={arXiv preprint arXiv:2004.06465},
year={2020}
}
~~~
|
Hate-speech-CNERG/dehatebert-mono-portugese | 2021-05-18T20:38:14.000Z | [
"pytorch",
"jax",
"bert",
"text-classification",
"arxiv:2004.06465",
"transformers"
] | text-classification | [
".gitattributes",
"README.md",
"config.json",
"flax_model.msgpack",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"vocab.txt"
] | Hate-speech-CNERG | 30 | transformers | This model is used for detecting **hatespeech** in the **Portuguese language**. The mono in the name refers to the monolingual setting, where the model is trained using only Portuguese language data. It is fine-tuned on the multilingual BERT model.
The model is trained with different learning rates and the best validation score achieved is 0.716119 for a learning rate of 3e-5. Training code can be found at this [url](https://github.com/punyajoy/DE-LIMIT)
### For more details about our paper
Sai Saketh Aluru, Binny Mathew, Punyajoy Saha and Animesh Mukherjee. "[Deep Learning Models for Multilingual Hate Speech Detection](https://arxiv.org/abs/2004.06465)". Accepted at ECML-PKDD 2020.
***Please cite our paper in any published work that uses any of these resources.***
~~~
@article{aluru2020deep,
title={Deep Learning Models for Multilingual Hate Speech Detection},
author={Aluru, Sai Saket and Mathew, Binny and Saha, Punyajoy and Mukherjee, Animesh},
journal={arXiv preprint arXiv:2004.06465},
year={2020}
}
~~~
|
Hate-speech-CNERG/dehatebert-mono-spanish | 2021-05-18T20:40:59.000Z | [
"pytorch",
"jax",
"bert",
"text-classification",
"arxiv:2004.06465",
"transformers"
] | text-classification | [
".gitattributes",
"README.md",
"config.json",
"flax_model.msgpack",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"vocab.txt"
] | Hate-speech-CNERG | 40 | transformers | This model is used for detecting **hatespeech** in the **Spanish language**. The mono in the name refers to the monolingual setting, where the model is trained using only Spanish language data. It is fine-tuned on the multilingual BERT model.
The model is trained with different learning rates and the best validation score achieved is 0.740287 for a learning rate of 3e-5. Training code can be found at this [url](https://github.com/punyajoy/DE-LIMIT)
### For more details about our paper
Sai Saketh Aluru, Binny Mathew, Punyajoy Saha and Animesh Mukherjee. "[Deep Learning Models for Multilingual Hate Speech Detection](https://arxiv.org/abs/2004.06465)". Accepted at ECML-PKDD 2020.
***Please cite our paper in any published work that uses any of these resources.***
~~~
@article{aluru2020deep,
title={Deep Learning Models for Multilingual Hate Speech Detection},
author={Aluru, Sai Saket and Mathew, Binny and Saha, Punyajoy and Mukherjee, Animesh},
journal={arXiv preprint arXiv:2004.06465},
year={2020}
}
~~~
|
Hate-speech-CNERG/deoffxlmr-mono-kannada | 2021-05-25T07:58:37.000Z | [
"pytorch",
"xlm-roberta",
"text-classification",
"transformers"
] | text-classification | [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"sentencepiece.bpe.model",
"special_tokens_map.json",
"tokenizer_config.json"
] | Hate-speech-CNERG | 25 | transformers | This model is used to detect **Offensive Content** in **Kannada Code-Mixed language**. The mono in the name refers to the monolingual setting, where the model is trained using only Kannada (pure and code-mixed) data. The weights are initialized from the pretrained XLM-RoBERTa-Base model and further pretrained with Masked Language Modelling on the target dataset before fine-tuning with Cross-Entropy Loss.
This model is the best of the multiple models trained for the **EACL 2021 Shared Task on Offensive Language Identification in Dravidian Languages**. Genetic-Algorithm-based ensembling of the test predictions achieved the second-highest weighted F1 score on the leaderboard (weighted F1 score on the held-out test set: this model - 0.73, ensemble - 0.74).
### For more details about our paper
Debjoy Saha, Naman Paharia, Debajit Chakraborty, Punyajoy Saha, Animesh Mukherjee. "[Hate-Alert@DravidianLangTech-EACL2021: Ensembling strategies for Transformer-based Offensive language Detection](https://www.aclweb.org/anthology/2021.dravidianlangtech-1.38/)".
***Please cite our paper in any published work that uses any of these resources.***
~~~
@inproceedings{saha-etal-2021-hate,
title = "Hate-Alert@{D}ravidian{L}ang{T}ech-{EACL}2021: Ensembling strategies for Transformer-based Offensive language Detection",
author = "Saha, Debjoy and Paharia, Naman and Chakraborty, Debajit and Saha, Punyajoy and Mukherjee, Animesh",
booktitle = "Proceedings of the First Workshop on Speech and Language Technologies for Dravidian Languages",
month = apr,
year = "2021",
address = "Kyiv",
publisher = "Association for Computational Linguistics",
url = "https://www.aclweb.org/anthology/2021.dravidianlangtech-1.38",
pages = "270--276",
abstract = "Social media often acts as breeding grounds for different forms of offensive content. For low resource languages like Tamil, the situation is more complex due to the poor performance of multilingual or language-specific models and lack of proper benchmark datasets. Based on this shared task {``}Offensive Language Identification in Dravidian Languages{''} at EACL 2021; we present an exhaustive exploration of different transformer models, We also provide a genetic algorithm technique for ensembling different models. Our ensembled models trained separately for each language secured the first position in Tamil, the second position in Kannada, and the first position in Malayalam sub-tasks. The models and codes are provided.",
}
~~~ |
Hate-speech-CNERG/deoffxlmr-mono-malyalam | 2021-05-25T08:03:09.000Z | [
"pytorch",
"xlm-roberta",
"text-classification",
"transformers"
] | text-classification | [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"sentencepiece.bpe.model",
"special_tokens_map.json",
"tokenizer_config.json"
] | Hate-speech-CNERG | 27 | transformers | This model is used to detect **Offensive Content** in **Malayalam Code-Mixed language**. The mono in the name refers to the monolingual setting, where the model is trained using only Malayalam (pure and code-mixed) data. The weights are initialized from the pretrained XLM-RoBERTa-Base model and further pretrained with Masked Language Modelling on the target dataset before fine-tuning with Cross-Entropy Loss.
This model is the best of the multiple models trained for the **EACL 2021 Shared Task on Offensive Language Identification in Dravidian Languages**. Genetic-Algorithm-based ensembling of the test predictions achieved the highest weighted F1 score on the leaderboard (weighted F1 score on the held-out test set: this model - 0.97, ensemble - 0.97).
### For more details about our paper
Debjoy Saha, Naman Paharia, Debajit Chakraborty, Punyajoy Saha, Animesh Mukherjee. "[Hate-Alert@DravidianLangTech-EACL2021: Ensembling strategies for Transformer-based Offensive language Detection](https://www.aclweb.org/anthology/2021.dravidianlangtech-1.38/)".
***Please cite our paper in any published work that uses any of these resources.***
~~~
@inproceedings{saha-etal-2021-hate,
title = "Hate-Alert@{D}ravidian{L}ang{T}ech-{EACL}2021: Ensembling strategies for Transformer-based Offensive language Detection",
author = "Saha, Debjoy and Paharia, Naman and Chakraborty, Debajit and Saha, Punyajoy and Mukherjee, Animesh",
booktitle = "Proceedings of the First Workshop on Speech and Language Technologies for Dravidian Languages",
month = apr,
year = "2021",
address = "Kyiv",
publisher = "Association for Computational Linguistics",
url = "https://www.aclweb.org/anthology/2021.dravidianlangtech-1.38",
pages = "270--276",
abstract = "Social media often acts as breeding grounds for different forms of offensive content. For low resource languages like Tamil, the situation is more complex due to the poor performance of multilingual or language-specific models and lack of proper benchmark datasets. Based on this shared task {``}Offensive Language Identification in Dravidian Languages{''} at EACL 2021; we present an exhaustive exploration of different transformer models, We also provide a genetic algorithm technique for ensembling different models. Our ensembled models trained separately for each language secured the first position in Tamil, the second position in Kannada, and the first position in Malayalam sub-tasks. The models and codes are provided.",
}
~~~ |
Hate-speech-CNERG/deoffxlmr-mono-tamil | 2021-05-25T07:50:31.000Z | [
"pytorch",
"xlm-roberta",
"text-classification",
"transformers"
] | text-classification | [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"sentencepiece.bpe.model",
"special_tokens_map.json",
"tokenizer_config.json"
] | Hate-speech-CNERG | 22 | transformers | This model is used to detect **Offensive Content** in **Tamil Code-Mixed language**. The mono in the name refers to the monolingual setting, where the model is trained using only Tamil (pure and code-mixed) data. The weights are initialized from the pretrained XLM-RoBERTa-Base model and further pretrained with Masked Language Modelling on the target dataset before fine-tuning with Cross-Entropy Loss.
This model is the best of the multiple models trained for the **EACL 2021 Shared Task on Offensive Language Identification in Dravidian Languages**. Genetic-Algorithm-based ensembling of the test predictions achieved the highest weighted F1 score on the leaderboard (weighted F1 score on the held-out test set: this model - 0.76, ensemble - 0.78).
### For more details about our paper
Debjoy Saha, Naman Paharia, Debajit Chakraborty, Punyajoy Saha, Animesh Mukherjee. "[Hate-Alert@DravidianLangTech-EACL2021: Ensembling strategies for Transformer-based Offensive language Detection](https://www.aclweb.org/anthology/2021.dravidianlangtech-1.38/)".
***Please cite our paper in any published work that uses any of these resources.***
~~~
@inproceedings{saha-etal-2021-hate,
title = "Hate-Alert@{D}ravidian{L}ang{T}ech-{EACL}2021: Ensembling strategies for Transformer-based Offensive language Detection",
author = "Saha, Debjoy and Paharia, Naman and Chakraborty, Debajit and Saha, Punyajoy and Mukherjee, Animesh",
booktitle = "Proceedings of the First Workshop on Speech and Language Technologies for Dravidian Languages",
month = apr,
year = "2021",
address = "Kyiv",
publisher = "Association for Computational Linguistics",
url = "https://www.aclweb.org/anthology/2021.dravidianlangtech-1.38",
pages = "270--276",
abstract = "Social media often acts as breeding grounds for different forms of offensive content. For low resource languages like Tamil, the situation is more complex due to the poor performance of multilingual or language-specific models and lack of proper benchmark datasets. Based on this shared task {``}Offensive Language Identification in Dravidian Languages{''} at EACL 2021; we present an exhaustive exploration of different transformer models, We also provide a genetic algorithm technique for ensembling different models. Our ensembled models trained separately for each language secured the first position in Tamil, the second position in Kannada, and the first position in Malayalam sub-tasks. The models and codes are provided.",
}
~~~ |
HelloRusk/t5-base-parasci | 2021-04-29T11:28:58.000Z | [
"pytorch",
"t5",
"seq2seq",
"transformers",
"text2text-generation"
] | text2text-generation | [
".gitattributes",
"config.json",
"pytorch_model.bin",
"special_tokens_map.json",
"spiece.model",
"tokenizer_config.json"
] | HelloRusk | 8 | transformers | |
HelloRusk/t5-small-parasci | 2021-04-29T11:56:21.000Z | [
"pytorch",
"t5",
"seq2seq",
"transformers",
"text2text-generation"
] | text2text-generation | [
".gitattributes",
"config.json",
"pytorch_model.bin",
"special_tokens_map.json",
"spiece.model",
"tokenizer_config.json"
] | HelloRusk | 7 | transformers | |
Helsinki-NLP/opus-mt-NORTH_EU-NORTH_EU | 2021-01-18T07:45:37.000Z | [
"pytorch",
"marian",
"seq2seq",
"multilingual",
"transformers",
"translation",
"text2text-generation"
] | translation | [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"source.spm",
"target.spm",
"tokenizer_config.json",
"vocab.json"
] | Helsinki-NLP | 70 | transformers | ---
tags:
- translation
---
### opus-mt-NORTH_EU-NORTH_EU
* source languages: de,nl,fy,af,da,fo,is,no,nb,nn,sv
* target languages: de,nl,fy,af,da,fo,is,no,nb,nn,sv
* OPUS readme: [de+nl+fy+af+da+fo+is+no+nb+nn+sv-de+nl+fy+af+da+fo+is+no+nb+nn+sv](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/de+nl+fy+af+da+fo+is+no+nb+nn+sv-de+nl+fy+af+da+fo+is+no+nb+nn+sv/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* a sentence initial language token is required in the form of `>>id<<` (id = valid target language ID)
* download original weights: [opus-2020-01-15.zip](https://object.pouta.csc.fi/OPUS-MT-models/de+nl+fy+af+da+fo+is+no+nb+nn+sv-de+nl+fy+af+da+fo+is+no+nb+nn+sv/opus-2020-01-15.zip)
* test set translations: [opus-2020-01-15.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/de+nl+fy+af+da+fo+is+no+nb+nn+sv-de+nl+fy+af+da+fo+is+no+nb+nn+sv/opus-2020-01-15.test.txt)
* test set scores: [opus-2020-01-15.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/de+nl+fy+af+da+fo+is+no+nb+nn+sv-de+nl+fy+af+da+fo+is+no+nb+nn+sv/opus-2020-01-15.eval.txt)
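Since a sentence-initial `>>id<<` token selects the target language for this model, a minimal translation sketch using the `transformers` Marian classes (assumptions: `sv` is used here as one of the valid target ids listed above, and the Danish example sentence is illustrative):

```python
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-NORTH_EU-NORTH_EU"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

# Prefix the source sentence with the target-language token, e.g. >>sv<< for Swedish.
src = [">>sv<< Dette er en sætning på dansk."]
batch = tokenizer(src, return_tensors="pt", padding=True)
translated = model.generate(**batch)
print(tokenizer.batch_decode(translated, skip_special_tokens=True))
```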
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba.de.sv | 48.1 | 0.663 |
|
Helsinki-NLP/opus-mt-ROMANCE-en | 2021-01-18T07:45:42.000Z | [
"pytorch",
"rust",
"marian",
"seq2seq",
"roa",
"en",
"transformers",
"translation",
"text2text-generation"
] | translation | [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"rust_model.ot",
"source.spm",
"target.spm",
"tokenizer_config.json",
"vocab.json"
] | Helsinki-NLP | 24,865 | transformers | ---
tags:
- translation
---
### opus-mt-ROMANCE-en
* source languages: fr,fr_BE,fr_CA,fr_FR,wa,frp,oc,ca,rm,lld,fur,lij,lmo,es,es_AR,es_CL,es_CO,es_CR,es_DO,es_EC,es_ES,es_GT,es_HN,es_MX,es_NI,es_PA,es_PE,es_PR,es_SV,es_UY,es_VE,pt,pt_br,pt_BR,pt_PT,gl,lad,an,mwl,it,it_IT,co,nap,scn,vec,sc,ro,la
* target languages: en
* OPUS readme: [fr+fr_BE+fr_CA+fr_FR+wa+frp+oc+ca+rm+lld+fur+lij+lmo+es+es_AR+es_CL+es_CO+es_CR+es_DO+es_EC+es_ES+es_GT+es_HN+es_MX+es_NI+es_PA+es_PE+es_PR+es_SV+es_UY+es_VE+pt+pt_br+pt_BR+pt_PT+gl+lad+an+mwl+it+it_IT+co+nap+scn+vec+sc+ro+la-en](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/fr+fr_BE+fr_CA+fr_FR+wa+frp+oc+ca+rm+lld+fur+lij+lmo+es+es_AR+es_CL+es_CO+es_CR+es_DO+es_EC+es_ES+es_GT+es_HN+es_MX+es_NI+es_PA+es_PE+es_PR+es_SV+es_UY+es_VE+pt+pt_br+pt_BR+pt_PT+gl+lad+an+mwl+it+it_IT+co+nap+scn+vec+sc+ro+la-en/README.md)
* dataset: opus
* model: transformer
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-04-01.zip](https://object.pouta.csc.fi/OPUS-MT-models/fr+fr_BE+fr_CA+fr_FR+wa+frp+oc+ca+rm+lld+fur+lij+lmo+es+es_AR+es_CL+es_CO+es_CR+es_DO+es_EC+es_ES+es_GT+es_HN+es_MX+es_NI+es_PA+es_PE+es_PR+es_SV+es_UY+es_VE+pt+pt_br+pt_BR+pt_PT+gl+lad+an+mwl+it+it_IT+co+nap+scn+vec+sc+ro+la-en/opus-2020-04-01.zip)
* test set translations: [opus-2020-04-01.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr+fr_BE+fr_CA+fr_FR+wa+frp+oc+ca+rm+lld+fur+lij+lmo+es+es_AR+es_CL+es_CO+es_CR+es_DO+es_EC+es_ES+es_GT+es_HN+es_MX+es_NI+es_PA+es_PE+es_PR+es_SV+es_UY+es_VE+pt+pt_br+pt_BR+pt_PT+gl+lad+an+mwl+it+it_IT+co+nap+scn+vec+sc+ro+la-en/opus-2020-04-01.test.txt)
* test set scores: [opus-2020-04-01.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr+fr_BE+fr_CA+fr_FR+wa+frp+oc+ca+rm+lld+fur+lij+lmo+es+es_AR+es_CL+es_CO+es_CR+es_DO+es_EC+es_ES+es_GT+es_HN+es_MX+es_NI+es_PA+es_PE+es_PR+es_SV+es_UY+es_VE+pt+pt_br+pt_BR+pt_PT+gl+lad+an+mwl+it+it_IT+co+nap+scn+vec+sc+ro+la-en/opus-2020-04-01.eval.txt)
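A minimal usage sketch with the `transformers` Marian classes (no target-language token is needed here, since English is the only target; the French example sentence is illustrative):

```python
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-ROMANCE-en"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

# Translate a Romance-language sentence (here French) into English.
batch = tokenizer(["C'est une phrase en français."], return_tensors="pt", padding=True)
translated = model.generate(**batch)
print(tokenizer.batch_decode(translated, skip_special_tokens=True))
```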
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba.fr.en | 62.2 | 0.750 |
|
Helsinki-NLP/opus-mt-SCANDINAVIA-SCANDINAVIA | 2021-01-18T07:45:47.000Z | [
"pytorch",
"marian",
"seq2seq",
"scandinavia",
"transformers",
"translation",
"text2text-generation"
] | translation | [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"source.spm",
"target.spm",
"tokenizer_config.json",
"vocab.json"
] | Helsinki-NLP | 54 | transformers | ---
tags:
- translation
---
### opus-mt-SCANDINAVIA-SCANDINAVIA
* source languages: da,fo,is,no,nb,nn,sv
* target languages: da,fo,is,no,nb,nn,sv
* OPUS readme: [da+fo+is+no+nb+nn+sv-da+fo+is+no+nb+nn+sv](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/da+fo+is+no+nb+nn+sv-da+fo+is+no+nb+nn+sv/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* a sentence initial language token is required in the form of `>>id<<` (id = valid target language ID)
* download original weights: [opus-2019-12-18.zip](https://object.pouta.csc.fi/OPUS-MT-models/da+fo+is+no+nb+nn+sv-da+fo+is+no+nb+nn+sv/opus-2019-12-18.zip)
* test set translations: [opus-2019-12-18.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/da+fo+is+no+nb+nn+sv-da+fo+is+no+nb+nn+sv/opus-2019-12-18.test.txt)
* test set scores: [opus-2019-12-18.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/da+fo+is+no+nb+nn+sv-da+fo+is+no+nb+nn+sv/opus-2019-12-18.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba.da.sv | 69.2 | 0.811 |
|
Helsinki-NLP/opus-mt-aav-en | 2021-01-18T07:45:52.000Z | [
"pytorch",
"marian",
"seq2seq",
"vi",
"km",
"aav",
"en",
"transformers",
"translation",
"license:apache-2.0",
"text2text-generation"
] | translation | [
".gitattributes",
"README.md",
"config.json",
"metadata.json",
"pytorch_model.bin",
"source.spm",
"target.spm",
"tokenizer_config.json",
"vocab.json"
] | Helsinki-NLP | 59 | transformers | ---
language:
- vi
- km
- aav
- en
tags:
- translation
license: apache-2.0
---
### aav-eng
* source group: Austro-Asiatic languages
* target group: English
* OPUS readme: [aav-eng](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/aav-eng/README.md)
* source language(s): hoc hoc_Latn kha khm khm_Latn mnw vie vie_Hani
* target language(s): eng
* model: transformer
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus2m-2020-07-31.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/aav-eng/opus2m-2020-07-31.zip)
* test set translations: [opus2m-2020-07-31.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/aav-eng/opus2m-2020-07-31.test.txt)
* test set scores: [opus2m-2020-07-31.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/aav-eng/opus2m-2020-07-31.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.hoc-eng.hoc.eng | 0.3 | 0.095 |
| Tatoeba-test.kha-eng.kha.eng | 1.0 | 0.115 |
| Tatoeba-test.khm-eng.khm.eng | 8.9 | 0.271 |
| Tatoeba-test.mnw-eng.mnw.eng | 0.8 | 0.118 |
| Tatoeba-test.multi.eng | 24.8 | 0.391 |
| Tatoeba-test.vie-eng.vie.eng | 38.7 | 0.567 |
### System Info:
- hf_name: aav-eng
- source_languages: aav
- target_languages: eng
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/aav-eng/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['vi', 'km', 'aav', 'en']
- src_constituents: {'mnw', 'vie', 'kha', 'khm', 'vie_Hani', 'khm_Latn', 'hoc_Latn', 'hoc'}
- tgt_constituents: {'eng'}
- src_multilingual: True
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/aav-eng/opus2m-2020-07-31.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/aav-eng/opus2m-2020-07-31.test.txt
- src_alpha3: aav
- tgt_alpha3: eng
- short_pair: aav-en
- chrF2_score: 0.391
- bleu: 24.8
- brevity_penalty: 0.968
- ref_len: 36693.0
- src_name: Austro-Asiatic languages
- tgt_name: English
- train_date: 2020-07-31
- src_alpha2: aav
- tgt_alpha2: en
- prefer_old: False
- long_pair: aav-eng
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 |