---
language: en
tags:
- azbert
- pretraining
- fill-mask
widget:
- text: $f$ $($ $x$ [MASK] $y$ $)$
  example_title: mathy
- text: $x$ [MASK] $x$ $equal$ $2$ $x$
  example_title: mathy
- text: Proof by [MASK] that $n$ $fact$ $gt$ $3$ $n$ for $n$ $gt$ $6$
  example_title: mathy
- text: Proof by induction that $n$ [MASK] $gt$ $3$ $n$ for $n$ $gt$ $6$
  example_title: mathy
- text: The goal of life is [MASK].
  example_title: philosophical
license: mit
---
## About

This repository is a boilerplate to push a mask-filling model to the HuggingFace Model Hub.
## Upload to huggingface

Download your tokenizer, model checkpoints, and optionally the training logs to the `./ckpt` directory.

Optionally, test the model using the MLM task:

```sh
pip install pya0
python test.py --test_file test.txt
```
**Note:** modify the test examples in `test.txt` to play with it. The test file is tab-separated: the first column gives additional positions you want to mask in the right-hand sentence (useful for masking tokens in math markup), and a zero means no additional mask positions.
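For example, a line of `test.txt` with no additional mask positions might look like this (the sentence is taken from the widget examples above; the exact layout of the position column is inferred from the description):

```
0	Proof by [MASK] that $n$ $fact$ $gt$ $3$ $n$ for $n$ $gt$ $6$
```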
To upload to huggingface, use the `upload2hgf.sh` script. Before running this script, be sure to check that:
- `git-lfs` is installed
- a git remote named `hgf` references `https://huggingface.co/your/repo`
- the model directory contains all the files needed: `config.json` and `pytorch_model.bin`
- the tokenizer directory contains all the files needed: `added_tokens.json`, `special_tokens_map.json`, `tokenizer_config.json`, `vocab.txt`, and `tokenizer.json`
- there is no `tokenizer_file` field in `tokenizer_config.json` (this field sometimes points to a local path under `~/.cache`)