aapot/wav2vec2-xlsr-1b-finnish-v2
Task: Automatic Speech Recognition
Library: Transformers (PyTorch)
Dataset: mozilla-foundation/common_voice_7_0
Language: Finnish
Tags: wav2vec2, finnish, Generated from Trainer, hf-asr-leaderboard, robust-speech-event, Eval Results
Paper: arxiv:2111.09296
License: apache-2.0
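
For context, a minimal inference sketch using the standard Transformers pipeline API (not part of the original page; `sample_fi.wav` is a placeholder path, and audio decoding/resampling relies on ffmpeg being installed):

```python
# Minimal sketch: transcribe Finnish speech with this checkpoint.
# "sample_fi.wav" is a placeholder path, not a file in this repo.
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="aapot/wav2vec2-xlsr-1b-finnish-v2",
)

result = asr("sample_fi.wav")  # decoding/resampling handled via ffmpeg
print(result["text"])
```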
Files and versions
wav2vec2-xlsr-1b-finnish-v2 at commit 4ce7b23
2 contributors · History: 15 commits
Latest commit: aapot, "Add test scores" (4ce7b23, over 2 years ago)
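
To work with the files listed below locally, one option is the `snapshot_download` helper from `huggingface_hub` (an illustrative sketch, not from the original page; pinning to the commit shown above is an assumption about the desired revision):

```python
# Sketch: download a local copy of this repository's files.
# Large files (pytorch_model.bin, training_args.bin) live in Git LFS
# but are fetched transparently by the hub client.
from huggingface_hub import snapshot_download

local_dir = snapshot_download(
    repo_id="aapot/wav2vec2-xlsr-1b-finnish-v2",
    revision="4ce7b23",  # the commit shown above; omit for latest
)
print(local_dir)
```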
| File | Size | Last commit | When |
|---|---|---|---|
| .gitattributes | 1.18 kB | initial commit | over 2 years ago |
| .gitignore | 13 Bytes | Training in progress, step 500 | over 2 years ago |
| README.md | 5.42 kB | Add test scores | over 2 years ago |
| added_tokens.json | 23 Bytes | add tokenizer | over 2 years ago |
| config.json | 2.08 kB | Training in progress, step 500 | over 2 years ago |
| eval.py | 5.46 kB | Add test scores | over 2 years ago |
| log_mozilla-foundation_common_voice_7_0_fi_test_predictions.txt | 90.1 kB | Add test scores | over 2 years ago |
| log_mozilla-foundation_common_voice_7_0_fi_test_targets.txt | 90.3 kB | Add test scores | over 2 years ago |
| mozilla-foundation_common_voice_7_0_fi_test_eval_results.txt | 50 Bytes | Add test scores | over 2 years ago |
| preprocessor_config.json | 214 Bytes | Training in progress, step 500 | over 2 years ago |
| pytorch_model.bin (LFS, pickle) | 3.85 GB | Add 27500 step model | over 2 years ago |
| run_eval.sh | 145 Bytes | Add test scores | over 2 years ago |
| special_tokens_map.json | 309 Bytes | add tokenizer | over 2 years ago |
| tokenizer_config.json | 260 Bytes | add tokenizer | over 2 years ago |
| training_args.bin (LFS, pickle) | 3.06 kB | Training in progress, step 500 | over 2 years ago |
| vocab.json | 298 Bytes | add tokenizer | over 2 years ago |

Detected pickle imports:
- pytorch_model.bin (3): torch._utils._rebuild_tensor_v2, torch.FloatStorage, collections.OrderedDict
- training_args.bin (6): transformers.trainer_utils.IntervalStrategy, transformers.training_args.TrainingArguments, transformers.trainer_utils.SchedulerType, transformers.trainer_utils.HubStrategy, torch.device, transformers.training_args.OptimizerNames
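
The pickle imports flagged above are routine PyTorch and Trainer classes, but as a general precaution a checkpoint can be inspected without executing arbitrary pickled code. A minimal sketch, assuming PyTorch 1.13+ and a local copy of the file (the path is a placeholder):

```python
# Sketch: load the checkpoint's tensors without running arbitrary
# pickled code. weights_only=True (PyTorch >= 1.13) restricts
# unpickling to tensors and plain containers, so a tampered file
# fails to load instead of executing code.
import torch

state_dict = torch.load(
    "pytorch_model.bin",  # placeholder: path to a local copy
    map_location="cpu",
    weights_only=True,
)
print(f"{len(state_dict)} tensors loaded")
```

Note that `training_args.bin` would be rejected under `weights_only=True`, since it pickles `TrainingArguments` and related Trainer classes; that refusal is the expected behavior of the restricted unpickler.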