RASMUS/wav2vec2-xlsr-fi-lm-1B
Tags: Automatic Speech Recognition, Transformers, PyTorch, Finnish, wav2vec2, Generated from Trainer, robust-speech-event, hf-asr-leaderboard, Inference Endpoints
License: apache-2.0
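The repo ships an LM-boosted decoder (alphabet.json plus a language_model/ directory in the listing below), so the standard transformers ASR pipeline can pick it up. A minimal usage sketch, assuming pyctcdecode and kenlm are installed and that sample_fi.wav is a hypothetical 16 kHz Finnish audio clip:

```python
# Minimal sketch: Finnish speech-to-text with this checkpoint.
# If pyctcdecode and kenlm are installed, the pipeline uses the
# LM-boosted decoder shipped in the repo; otherwise it falls back
# to plain CTC decoding.
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="RASMUS/wav2vec2-xlsr-fi-lm-1B",
)

# "sample_fi.wav" is a placeholder path (decoding a file path requires ffmpeg).
print(asr("sample_fi.wav")["text"])
```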
Branch: refs/pr/1
wav2vec2-xlsr-fi-lm-1B: 3 contributors, history of 17 commits
Latest commit: 28a6f12 by librarian-bot, "Librarian Bot: Add base_model information to model" (about 1 year ago)
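To work with the files listed below locally, the whole repo can be fetched with huggingface_hub. A minimal sketch; the revision argument pins the refs/pr/1 view shown on this page and can be dropped to get the main branch:

```python
# Minimal sketch: download this repository (roughly 4 GB, dominated
# by pytorch_model.bin) into the local Hugging Face cache.
from huggingface_hub import snapshot_download

local_dir = snapshot_download(
    "RASMUS/wav2vec2-xlsr-fi-lm-1B",
    revision="refs/pr/1",  # the ref shown on this page; omit for main
)
print(local_dir)
```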
| Name | Size | Last commit | When |
| --- | --- | --- | --- |
| language_model/ | | Remove arpa file | almost 3 years ago |
| .gitattributes | 1.18 kB | initial commit | almost 3 years ago |
| .gitignore | 13 Bytes | Training in progress, step 400 | almost 3 years ago |
| README.md | 2.52 kB | Librarian Bot: Add base_model information to model | about 1 year ago |
| added_tokens.json | 23 Bytes | add tokenizer | almost 3 years ago |
| alphabet.json | 233 Bytes | Upload lm-boosted decoder | almost 3 years ago |
| config.json | 2.08 kB | Training in progress, step 400 | almost 3 years ago |
| eval.py | 5.05 kB | add evaluation notebook and scripts | almost 3 years ago |
| preprocessor_config.json | 262 Bytes | Update preprocessor_config.json | almost 3 years ago |
| pytorch_model.bin | 3.85 GB (LFS, pickle) | End of training | almost 3 years ago |
| run_eval.sh | 126 Bytes | add evaluation notebook and scripts | almost 3 years ago |
| run_evaluations_on_common_voice_test_7_0.ipynb | 110 kB | add evaluation notebook and scripts | almost 3 years ago |
| special_tokens_map.json | 695 Bytes | Upload lm-boosted decoder | almost 3 years ago |
| tokenizer_config.json | 287 Bytes | Upload lm-boosted decoder | almost 3 years ago |
| training_args.bin | 3.06 kB (LFS, pickle) | Training in progress, step 400 | almost 3 years ago |
| vocab.json | 307 Bytes | add tokenizer | almost 3 years ago |

Detected pickle imports:
- pytorch_model.bin (3): torch.FloatStorage, torch._utils._rebuild_tensor_v2, collections.OrderedDict
- training_args.bin (6): transformers.training_args.OptimizerNames, transformers.trainer_utils.HubStrategy, transformers.trainer_utils.SchedulerType, transformers.training_args.TrainingArguments, torch.device, transformers.trainer_utils.IntervalStrategy
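Both .bin files are Python pickles, which is why the hub lists their imports above. A minimal sketch of inspecting the checkpoint without executing arbitrary pickled code, assuming a local copy of pytorch_model.bin and a PyTorch version (1.13+) that supports weights_only:

```python
# Minimal sketch: load only tensors and plain containers from the
# pickle-based checkpoint; weights_only=True rejects any other objects.
import torch

state_dict = torch.load(
    "pytorch_model.bin",  # path inside a local copy of the repo
    map_location="cpu",
    weights_only=True,
)
print(f"{len(state_dict)} tensors, e.g. {next(iter(state_dict))}")
```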