Howdy

#2
by sin2piusc - opened

I'm curious what methods you're using to create your dataset. Thanks :)

Most of the time, when I scrape audio from a new visual novel, I have to write a new Python script for the job. But before I can do that, I have to decrypt the game's voice and script files using tools such as KrkrExtract, arc_unpacker, and GARbro.
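
For illustration, here's a minimal sketch of the kind of pairing script I end up writing once the archives are decrypted. Every game stores its script differently, so the paths, the CSV layout, and the voice-ID naming convention here are all hypothetical:

```python
import csv
from pathlib import Path

# Hypothetical layout: audio extracted with GARbro/arc_unpacker, and the
# game script dumped to a CSV that keys each line by its voice-file ID.
AUDIO_DIR = Path("extracted/voice")
SCRIPT_CSV = Path("extracted/script.csv")

# Map voice-file IDs to their transcript lines.
with SCRIPT_CSV.open(encoding="utf-8") as f:
    transcripts = {row["voice_id"]: row["text"] for row in csv.DictReader(f)}

# Pair each audio file with its script line by matching filename stems.
pairs = [
    (str(ogg), transcripts[ogg.stem])
    for ogg in sorted(AUDIO_DIR.glob("*.ogg"))
    if ogg.stem in transcripts
]
```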

joujiboi changed discussion status to closed

I do very much like your work and I love what you're doing. :)

However, I have been getting some aberrant results from training models with your dataset. To get my scripts to process it, I have to remove the last ~10% (they make it through about 94.6% of it before failing).
I've just defaulted to using `ds = load_dataset('joujiboi/japanese-anime-speech', split='train[:90%]')` when processing your dataset, which is fine, no worries. Just thought you should know.
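
If it helps narrow things down, this is roughly how I probe for the failure point. It's a minimal sketch: I'm assuming the split streams cleanly up to the bad example and that the audio column is named `audio`.

```python
from datasets import load_dataset

# Stream the full split and force each audio example to decode in order;
# a malformed file should raise an error at the offending index.
ds = load_dataset("joujiboi/japanese-anime-speech", split="train", streaming=True)

it = iter(ds)
index = 0
while True:
    try:
        example = next(it)
        _ = example["audio"]["array"]  # force the audio to decode
    except StopIteration:
        break
    except Exception as e:
        print(f"decode failed at index {index}: {e}")
        break
    index += 1
```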

Also, I'd love it if you could take a look at some results I have from training Whisper medium on your set specifically:
https://huggingface.co/sin2piusc/whisper-medium-anime-5k/tensorboard

The metrics are super noisy.

Depending on the optimizer, decay, etc., adding your dataset sometimes reduces transcription quality in practice. I test every model (I have hundreds) by first running a video like "'The Widow' 「やもめ」 2018 - 1080" as a baseline; if it passes, I go and test it on more pop-culture-type videos.
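
The test loop itself is essentially the sketch below (the checkpoint is the one from the tensorboard link above; the audio filename is a placeholder for whatever I've pre-extracted from the video with ffmpeg):

```python
from transformers import pipeline

# Load a fine-tuned checkpoint as an ASR pipeline; chunking handles
# long-form audio like a full video soundtrack.
asr = pipeline(
    "automatic-speech-recognition",
    model="sin2piusc/whisper-medium-anime-5k",
    chunk_length_s=30,
)

# Audio pre-extracted beforehand, e.g.:
#   ffmpeg -i yamome_2018.mkv -ar 16000 -ac 1 yamome_2018.wav
result = asr("yamome_2018.wav", return_timestamps=True)
print(result["text"])
```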
