# artyomboyko/whisper-small-fine_tuned-ru

Likes: 2

Tags: Automatic Speech Recognition · Transformers · PyTorch · TensorBoard · Safetensors · whisper · Generated from Trainer · Inference Endpoints
Dataset: mozilla-foundation/common_voice_13_0
License: apache-2.0
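Based on the task and library tags above (automatic-speech-recognition, Transformers), a checkpoint like this is typically loaded with the `pipeline` API. The snippet below is a minimal sketch, not usage documented by the repo author; `sample_ru.wav` is a placeholder file name:

```python
# Sketch: load the fine-tuned Whisper checkpoint for Russian transcription.
# The repo id comes from the page header; the audio path is a placeholder.
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="artyomboyko/whisper-small-fine_tuned-ru",
)
print(asr("sample_ru.wav")["text"])
```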
## Files and versions
3 contributors · History: 43 commits

Latest commit: `3235a27` by artyomboyko, "Training in progress, step 2000" (about 1 year ago)
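To fetch the files listed below locally, the standard `huggingface_hub` snapshot download applies; a minimal sketch (nothing here is specified by the repo itself):

```python
# Sketch: download the full repo snapshot into the local HF cache.
from huggingface_hub import snapshot_download

path = snapshot_download(repo_id="artyomboyko/whisper-small-fine_tuned-ru")
print(path)  # local directory containing the files in the table below
```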
| File | Size | Last commit message | Last updated |
|---|---|---|---|
| runs/ | – | Training in progress, step 2000 | about 1 year ago |
| .gitattributes | 1.52 kB | initial commit | over 1 year ago |
| .gitignore | 13 Bytes | Training in progress, step 500 | about 1 year ago |
| README.md | 1.97 kB | Update README.md | about 1 year ago |
| added_tokens.json | 2.08 kB | Upload tokenizer | over 1 year ago |
| config.json | 1.31 kB | Training in progress, step 500 | about 1 year ago |
| generation_config.json | 3.83 kB | Upload WhisperForConditionalGeneration | over 1 year ago |
| merges.txt | 494 kB | Upload tokenizer | over 1 year ago |
| model.safetensors | 967 MB (LFS) | Adding `safetensors` variant of this model (#1) | about 1 year ago |
| normalizer.json | 52.7 kB | Upload tokenizer | over 1 year ago |
| preprocessor_config.json | 339 Bytes | Upload processor | over 1 year ago |
| pytorch_model.bin | 967 MB (LFS, pickle) | Training in progress, step 2000 | about 1 year ago |
| special_tokens_map.json | 2.08 kB | Upload tokenizer | over 1 year ago |
| tokenizer_config.json | 805 Bytes | Upload tokenizer | over 1 year ago |
| training_args.bin | 4.09 kB (LFS, pickle) | Training in progress, step 500 | about 1 year ago |
| vocab.json | 1.04 MB | Upload tokenizer | over 1 year ago |

Detected pickle imports:

- `pytorch_model.bin` (3): `torch._utils._rebuild_tensor_v2`, `torch.FloatStorage`, `collections.OrderedDict`
- `training_args.bin` (8): `transformers.training_args_seq2seq.Seq2SeqTrainingArguments`, `transformers.trainer_utils.IntervalStrategy`, `transformers.trainer_utils.SchedulerType`, `torch.device`, `accelerate.state.PartialState`, `transformers.training_args.OptimizerNames`, `accelerate.utils.dataclasses.DistributedType`, `transformers.trainer_utils.HubStrategy`
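Since the repo ships both a pickled `pytorch_model.bin` and the `model.safetensors` variant added in #1, one way to avoid deserializing the pickle imports flagged above is to request the safetensors weights explicitly. A sketch under that assumption, not author-documented usage:

```python
# Sketch: prefer the safetensors weights over the pickled checkpoint.
# use_safetensors=True tells from_pretrained to load model.safetensors
# instead of pytorch_model.bin.
from transformers import WhisperForConditionalGeneration, WhisperProcessor

model = WhisperForConditionalGeneration.from_pretrained(
    "artyomboyko/whisper-small-fine_tuned-ru",
    use_safetensors=True,
)
processor = WhisperProcessor.from_pretrained("artyomboyko/whisper-small-fine_tuned-ru")
```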