flax-community/gpt2-medium-indonesian
Tags: Text Generation · Transformers · PyTorch · JAX · TensorBoard · Indonesian · gpt2 · text-generation-inference · Inference Endpoints
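The tags correspond to loading the checkpoint through the transformers text-generation pipeline. A minimal sketch (the Indonesian prompt is an arbitrary example, not taken from the model card):

```python
from transformers import pipeline

# Download and load the Indonesian GPT-2 medium checkpoint from the Hub.
generator = pipeline("text-generation", model="flax-community/gpt2-medium-indonesian")

# Arbitrary Indonesian prompt ("One day,"); sampling keeps the output varied.
outputs = generator("Pada suatu hari,", max_length=50, do_sample=True)
print(outputs[0]["generated_text"])
```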
Files and versions
4 contributors · History: 57 commits
Latest commit 83f2ae2: "Add bias analysis" by Galuh, about 3 years ago
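The listing below includes the tokenizer files (vocab.json, merges.txt, tokenizer.json, special_tokens_map.json). Assuming the standard GPT-2 byte-level BPE setup those filenames suggest, they load as follows (a sketch, not code from the repo's scripts):

```python
from transformers import AutoTokenizer

# vocab.json, merges.txt, and tokenizer.json back this tokenizer.
tokenizer = AutoTokenizer.from_pretrained("flax-community/gpt2-medium-indonesian")

# Arbitrary Indonesian sentence ("Good morning"), used only for illustration.
ids = tokenizer("Selamat pagi")["input_ids"]
print(tokenizer.convert_ids_to_tokens(ids))
```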
Each entry gives the file, its size (LFS marks files stored via Git LFS), and the message of its last commit; every entry was last updated about 3 years ago.

text_collection/ (folder): added text collection
.gitattributes (737 Bytes): save checkpoint after 2000 steps
.gitignore (7 Bytes): remove wandb, add gitignore
README.md (9.37 kB): Add bias analysis
added_tokens.json (24 Bytes): add tokenizers files
config.json (864 Bytes): Saving weights and logs of step 10000
create_config.py (257 Bytes): add config
create_tokenizer.py (748 Bytes): add config
events.out.tfevents.1625840127.t1v-n-528d9406-w-0.245719.3.v2 (1.44 kB, LFS): Saving weights and logs of step 100
events.out.tfevents.1625843003.t1v-n-528d9406-w-0.250031.3.v2 (2.95 MB, LFS): remove wandb, add gitignore
events.out.tfevents.1625892207.t1v-n-528d9406-w-0.296755.3.v2 (9.65 MB, LFS): Saving weights and logs of step 65000
flax_model.msgpack (1.42 GB, LFS): model update
jax2torch.py (311 Bytes): update jax converter
merges.txt (467 kB): add tokenizers files
pytorch_model.bin (1.44 GB, LFS): model update
run_clm_flax.py (28.4 kB): updated the model and script to load local data
run_pretraining.sh (992 Bytes): updated the model and script to load local data
special_tokens_map.json (90 Bytes): add tokenizers files
tokenizer.json (1.38 MB): add tokenizer
tokenizer_config.json (207 Bytes): add tokenizers files
vocab.json (808 kB): add tokenizers files

Note: pytorch_model.bin is a Python pickle; the Hub's scanner detects four pickle imports: collections.OrderedDict, torch._utils._rebuild_tensor_v2, torch.FloatStorage, and torch.ByteStorage.
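The repo ships both Flax weights (flax_model.msgpack) and PyTorch weights (pytorch_model.bin), and jax2torch.py performed the conversion. Its 311 bytes are not shown here; below is a minimal sketch of the same conversion using transformers' built-in from_flax loader, offered as an assumption rather than the script's actual contents. The local path is hypothetical.

```python
from transformers import GPT2LMHeadModel

# Read flax_model.msgpack from a local clone of the repo and convert to PyTorch.
# "./gpt2-medium-indonesian" is a hypothetical local path, not from the repo.
model = GPT2LMHeadModel.from_pretrained("./gpt2-medium-indonesian", from_flax=True)

# Writes pytorch_model.bin next to the Flax weights.
model.save_pretrained("./gpt2-medium-indonesian")
```

Loading with from_flax=True requires both flax and torch to be installed in the environment.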