artyomboyko/whisper-small-fine_tuned-ru (2 likes)

Task: Automatic Speech Recognition
Libraries: Transformers, PyTorch, TensorBoard, Safetensors
Dataset: mozilla-foundation/common_voice_13_0
Tags: whisper, Generated from Trainer, Inference Endpoints
License: apache-2.0
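The tags above list Transformers and PyTorch, so the checkpoint should load with the standard Transformers ASR pipeline. A minimal sketch (not taken from the model card; the audio path is a placeholder):

```python
# Model ID as shown on this repository page.
MODEL_ID = "artyomboyko/whisper-small-fine_tuned-ru"

def build_asr_pipeline(model_id: str = MODEL_ID):
    """Create an automatic-speech-recognition pipeline for this checkpoint.

    The import is inside the function so the sketch can be read without
    transformers installed; calling it downloads ~1 GB of weights.
    """
    from transformers import pipeline
    return pipeline("automatic-speech-recognition", model=model_id)

if __name__ == "__main__":
    asr = build_asr_pipeline()
    # "sample.wav" is a placeholder for a local Russian-speech audio file.
    print(asr("sample.wav")["text"])
```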
whisper-small-fine_tuned-ru — 3 contributors (incl. artyomboyko, SFconvertbot) — History: 25 commits
Latest commit: SFconvertbot, "Adding `safetensors` variant of this model (#1)" — 1c69bbf, 11 months ago
File                              Size       Last commit                                       Updated
runs/                             -          Training in progress, step 4500                   11 months ago
.gitattributes                    1.52 kB    initial commit                                    11 months ago
.gitignore                        13 Bytes   Training in progress, step 500                    11 months ago
README.md                         8.38 kB    Update README.md                                  11 months ago
added_tokens.json                 2.08 kB    Upload tokenizer                                  11 months ago
config.json                       1.32 kB    Upload WhisperForConditionalGeneration            11 months ago
generation_config.json            3.83 kB    Upload WhisperForConditionalGeneration            11 months ago
merges.txt                        494 kB     Upload tokenizer                                  11 months ago
model.safetensors (LFS)           967 MB     Adding `safetensors` variant of this model (#1)   11 months ago
normalizer.json                   52.7 kB    Upload tokenizer                                  11 months ago
preprocessor_config.json          339 Bytes  Upload processor                                  11 months ago
pytorch_model.bin (LFS, pickle)   967 MB     Upload WhisperForConditionalGeneration            11 months ago
special_tokens_map.json           2.08 kB    Upload tokenizer                                  11 months ago
tokenizer_config.json             805 Bytes  Upload tokenizer                                  11 months ago
training_args.bin (LFS, pickle)   4.09 kB    Training in progress, step 500                    11 months ago
vocab.json                        1.04 MB    Upload tokenizer                                  11 months ago

Pickle scan results:
- pytorch_model.bin — 3 detected pickle imports: "torch.FloatStorage", "collections.OrderedDict", "torch._utils._rebuild_tensor_v2"
- training_args.bin — 8 detected pickle imports: "accelerate.utils.dataclasses.DistributedType", "accelerate.state.PartialState", "transformers.trainer_utils.HubStrategy", "transformers.training_args_seq2seq.Seq2SeqTrainingArguments", "transformers.training_args.OptimizerNames", "transformers.trainer_utils.IntervalStrategy", "torch.device", "transformers.trainer_utils.SchedulerType"
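The repository ships the same weights twice: a pickle-based pytorch_model.bin (hence the pickle-import warnings) and a model.safetensors variant added in #1. A sketch of explicitly preferring the safetensors file, assuming the standard `use_safetensors` argument of `from_pretrained`:

```python
# Model ID as shown on this repository page.
MODEL_ID = "artyomboyko/whisper-small-fine_tuned-ru"

def load_model_safetensors(model_id: str = MODEL_ID):
    """Load the model from model.safetensors, skipping pytorch_model.bin.

    safetensors files contain only tensors (no pickled Python objects),
    so they avoid the arbitrary-code-execution risk of pickle loading.
    The import is local; calling this downloads the 967 MB weights file.
    """
    from transformers import WhisperForConditionalGeneration
    return WhisperForConditionalGeneration.from_pretrained(
        model_id, use_safetensors=True
    )
```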