bertin-project/bertin-roberta-base-spanish
Fill-Mask · Transformers · PyTorch · JAX · TensorBoard · Safetensors · Spanish · roberta · Inference Endpoints
Dataset: bertin-project/mc4-es-sampled
Papers: arxiv:2107.07253, arxiv:1907.11692
License: cc-by-4.0
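Since the checkpoint is tagged Fill-Mask, it can be queried for masked-token predictions through the transformers pipeline API. A minimal sketch, assuming transformers and a PyTorch backend are installed; the example sentence is illustrative, not taken from this repository:

```python
from transformers import pipeline

# Load this checkpoint as a fill-mask pipeline.
fill_mask = pipeline(
    "fill-mask",
    model="bertin-project/bertin-roberta-base-spanish",
)

# RoBERTa-style tokenizers use "<mask>" as the mask token.
for prediction in fill_mask("Fui a la librería a comprar un <mask>."):
    print(prediction["token_str"], round(prediction["score"], 4))
```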
Files and versions
8 contributors · History: 47 commits
Latest commit 4cad39c by versae, about 2 years ago: Merge branch 'main' of https://huggingface.co/flax-community/bertin-roberta-large-spanish into main
configs/ (directory): Changed and added vocab and tokenizer (about 2 years ago)
mc4/ (directory): Fixes to mc4 fork (about 2 years ago)
.gitattributes (736 Bytes): Update .gitattributes (about 2 years ago)
.gitignore (1.84 kB): Initial test with BETO's corpus (over 2 years ago)
README.md (1.84 kB): Fixed widget example (about 2 years ago)
config.json (618 Bytes): Fix config for checkpoint (about 2 years ago)
config.py (256 Bytes): Preparing code for final runs (about 2 years ago)
convert.py (876 Bytes): Improved version of conversion script Flax → PyTorch (about 2 years ago)
flax_model.msgpack (499 MB, LFS): Model at 182k steps, mlm acc 0.6494 (about 2 years ago)
get_embeddings_and_perplexity.py (1.53 kB): Add script to generate dataset of embeddings and perplexities. Add script to generate t-SNE plot for embedding and perplexity visualization. (about 2 years ago)
merges.txt (505 kB): Changed and added vocab and tokenizer (about 2 years ago)
perplexity.py (751 Bytes): Adding checkpointing, wandb, and new mlm script (about 2 years ago)
pytorch_model.bin (499 MB, LFS): Model at 182k steps, mlm acc 0.6494 (about 2 years ago)
  Detected pickle imports: torch.FloatStorage, torch.LongStorage, torch._utils._rebuild_tensor_v2, collections.OrderedDict
run.sh (883 Bytes): Adding base config and organizing configs (about 2 years ago)
run_mlm_flax.py (30 kB): Adding sampling to mc4 (about 2 years ago)
run_mlm_flax_stream.py (30.8 kB): Adding pad_to_multiple_of=16 (about 2 years ago)
run_stream.sh (932 Bytes): Preparing code for final runs (about 2 years ago)
special_tokens_map.json (239 Bytes): Changed and added vocab and tokenizer (about 2 years ago)
tokenizer.json (1.45 MB): Changed and added vocab and tokenizer (about 2 years ago)
tokenizer_config.json (292 Bytes): Changed and added vocab and tokenizer (about 2 years ago)
tokens.py (649 Bytes): Scripts for perplexity sampling and fixes (about 2 years ago)
tokens.py.orig (899 Bytes): Adjust batch size for extracting tokens (about 2 years ago)
tsne_plot.py (3.02 kB): Remove unused imports (about 2 years ago)
vocab.json (846 kB): Changed and added vocab and tokenizer (about 2 years ago)
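convert.py is described above as a Flax → PyTorch conversion script. Its contents are not reproduced here, but converting flax_model.msgpack into pytorch_model.bin with transformers typically looks like the sketch below; the local directory path is an assumption for illustration, and the repository's own script may differ.

```python
from transformers import RobertaForMaskedLM

# Hypothetical local clone of this repository containing flax_model.msgpack.
repo_dir = "./bertin-roberta-base-spanish"

# from_flax=True loads the Flax weights and converts them to PyTorch tensors
# (requires flax to be installed alongside transformers and torch).
model = RobertaForMaskedLM.from_pretrained(repo_dir, from_flax=True)

# Writes pytorch_model.bin (plus config.json) back into the same directory.
model.save_pretrained(repo_dir)
```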