Seems like transcriptions have non-Hebrew letters and some audio clips are too short

#2
by Yehor - opened

Hello!

We are a group of audio researchers who discovered your project and discussed it in our ASR community here - https://t.me/speech_recognition_uk

One person found that the dataset contains non-Hebrew letters in the transcriptions, and he said some audio clips are too short for the ASR task because of Silero VAD (we know the tool does not work well on some audio).

Could you please review your data and improve it?

In any case, we appreciate that you have made this contribution to voicetech, and we want to help make it better.

Toda :)

I assume that's @tarasfrompir , correct?

If so, I'm providing the same reply I provided him below.
In general, I am not an NLP/ML person, and most of the work focused on first getting a dataset under a permissive license.

I took Silero VAD as it appeared to be recommended.
If you can recommend a higher-quality VAD solution (that is legal to use), I am more than happy to rerun VAD and the transcriptions (we're going to re-transcribe with whisper-large-v2 soon anyway, hopefully with a dataset >>3300 hours), and upload v2 of this dataset.

As an aside, if you can share about what you're hoping to achieve with this dataset, we'll be happy to know.
It may help us focus on what types of data we're trying to collect next.


Original reply to @tarasfrompir :

Hi,

  1. I have manually checked a few files <1 second long, and they appeared to contain valid audio/words. If you point me at specific issues, I'm happy to investigate more.
  2. Transcription was unfortunately done with whisper-small; we are planning to rerun it with whisper-large-v2, and later do some of it manually; the 'quality' column in the transcriptions dataset (currently '1' for all) refers to how good that transcription is.
  3. audio-base (the dataset against which this question is raised) contains the raw audio files; they are complete, of course - not post-VAD.
  4. If you can provide more specific guidelines about your needs (min/max segment sizes, other requests), that would help.

At the end of the day, we felt base+VAD are mostly good quality, while the transcriptions are low quality and will be improved in v2 (hopefully within a few weeks).
As an aside, our understanding is that there are usages where transcripts are not necessary.

  1. https://huggingface.co/pyannote/voice-activity-detection - not bad.
  2. The Whisper medium model transcribes the text quite well, but on the condition that you additionally filter with some other solution.
  3. I saw it, but it's easier to download TV channels from YouTube, or something like that.
  4. If the dataset is made for ASR - not everything is right: first, the encoding; second, the multiple spaces in folder names, which complicate their processing; third, the extra characters in the transcriptions (English letters and other symbols). As for transcription quality - everything was run through Whisper, and with the small model rather than the large one; running it on half-second files is also a bad idea. I recently built a model for Vosk, and it became clear that I need to analyze audio that actually contains speech - two words or more.
    Sorry for my Google Translate.

This is my take on the matter. Not mandatory.

Hi,

  1. From what I could figure out online, pyannote is considered less optimal than silero-vad, and requires significantly more processing power.
    Alternatively, I can rerun silero-vad with arguments instructing it to provide segments >=2 seconds and <= 30 seconds; is that good enough?
  2. I believe if we do #1 and rerun with whisper-large-v2, that would be significantly better. Is that correct?
    We can apply filtering later on, but we will probably need much less of that.
  3. The reason audio-base is included is that every one of these files was downloaded from a 'legal' source we have permission for. Going with YouTube etc. is sometimes illegal for certain uses.
    I agree most users will never access audio-base.
  4. It is a bit hard for me to understand that comment due to the translation, sorry. If you write it in your native language and tell me what it is (hopefully Russian/Ukrainian/... -> something I can ask a fellow engineer in Israel to look at ;)), I'll decode it and try to answer.

We are very enthusiastic to make this a usable dataset; please continue making suggestions.
Were you able to make any use of the dataset, or is it irrelevant for now?

I will write to you in Ukrainian. And you will translate later - it will be better that way.

  1. silero-vad has a few underlying flaws, especially when working with sibilant sounds, and Hebrew, as far as I understand, uses such sounds very often.
    That's why I suggested pyannote. But you are the one building this dataset, so the choice of what to work with is yours. You could also try this - https://github.com/egorsmkv/fsmn-vad-demo
  2. The difference between the large-v2 and medium models is not very big, but the difference in GPU memory requirements is. I recommend using faster-whisper; I have used it personally - a bit faster with practically the same result. So that decision is also yours.
    2.1. I have a Hebrew recognition model implemented on VOSK-API, but I'm sorry - I can only use it myself; I have no right to transfer or sell it. This model solves many problems at once: it acts both as a VAD and as an annotation tool, and it processes dirty audio several times faster than Whisper. If you are interested in such a model, I think this question could somehow be resolved - write to me by email, and I will act only as an intermediary. Alternatively, you could simply order server capacity to process this audio.
  3. Yes, I agree with you, but there is a certain list of channels that license their content under Creative Commons. So I don't see any violation of rules, or of any terms of use of these materials, here.
  4. Here is where I expressed my point of view on your dataset: https://t.me/speech_recognition_uk/27265. You are welcome to join our group... To avoid duplicating its content here, I am leaving you the link. BUT please understand: when I saw a dataset available for free use, I was quite glad. A day later, however, after downloading it and converting it into the dataset format I needed, I said a bit of what I was thinking. Don't be offended if something is off.
    And my wish for you: rework the whole dataset quickly and make it the best in the world, since it is effectively the first freely available one.
    P.S. It is interesting to talk with an intelligent person...
    Sorry for my Ukrainian. But if I understood you correctly, this is the best way for us to communicate.

Examples
The longest file I received. But it needs to be split up by words.

תחילת ספטמבר עשרים עשרים ושניים בתקתוק היו לה אז משהו כמו שישים אלף עוקבים נכון לזמן הפרק רגע לפני הבחירות היא כמעט הכפיל את הכמות עם למעלה ממאה ועשר אלף עוקבים על פניו הצלחה פנומנלית לפחות מבחינה שיווקית ונוסיף לזה לפי הפוסטים שלה היא גם התארסה ממש עכשיו נראה שהכל
Of course, the quality is not great, but this is a simple cut without quality control.

מה זה כוכב סופר נובה

כנראה השפיעו אותר על התנהלות העניינים

Thanks for taking the time to reply.
I'll comment inline - will be happy to hear what you think.

I am not an expert in this domain, and any feedback we get is extremely useful; if I ask something multiple times it does not mean I disagree.
As an aside, my Ukrainian friend apparently doesn't read Ukrainian so well anymore, so I took my best stab at translating with Google Translate :)

Thanks for taking the time to answer my questions; hopefully in another 1-2 iterations we'll have something that's nice enough for everyone, and useful for you.

I will write to you in Ukrainian. And you will translate later - it will be better that way.

  1. silero-vad has a few underlying flaws, especially when working with sibilant sounds, and Hebrew, as far as I understand, uses such sounds very often.
    That's why I suggested pyannote. But you are the one building this dataset, so the choice of what to work with is yours. You could also try this - https://github.com/egorsmkv/fsmn-vad-demo

pyannote is extremely compute-intensive. It requires a GPU-grade accelerator, whereas Silero only uses 1 CPU core.
Before going with the more heavyweight solution, let's try redoing silero-vad with a 1-second min, 30-second max setting.
I'm trying that out and will make sure our next release (hopefully with significantly more data - our final target is 10k hours) will be better.
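
For reference, a minimal sketch of what that rerun could look like, using silero-vad's get_speech_timestamps utility; the input file name is a placeholder, and parameter availability/defaults may vary between silero-vad releases:

```python
# Hedged sketch: re-running Silero VAD with explicit min/max segment bounds.
import torch

model, utils = torch.hub.load('snakers4/silero-vad', 'silero_vad')
get_speech_timestamps, _, read_audio, _, _ = utils

wav = read_audio('episode.mp3', sampling_rate=16000)  # placeholder file name
segments = get_speech_timestamps(
    wav, model,
    sampling_rate=16000,
    min_speech_duration_ms=1000,  # drop segments shorter than 1 second
    max_speech_duration_s=30,     # split segments longer than 30 seconds
    return_seconds=True,
)
print(segments)  # e.g. [{'start': 0.5, 'end': 12.3}, ...]
```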

  1. The difference between the large-v2 and medium models is not very big, but the difference in GPU memory requirements is. I recommend using faster-whisper; I have used it personally - a bit faster with practically the same result. So that decision is also yours.

I will give faster-whisper another try.
My first run with it showed mediocre results; I am not sure if this is due to a problem in the way I ran things, or the fact our chunks are short.
Another challenge is that I found an augmented Whisper model with better Hebrew support (https://huggingface.co/Shiry/whisper-large-v2-he) - I'm not sure I know how to make faster-whisper work with it.
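
For what it's worth, the usual route for a fine-tuned Hugging Face checkpoint is to convert it to CTranslate2 format first and then load it with faster-whisper. A sketch, assuming the converter accepts that particular checkpoint:

```python
# Hedged sketch: using a fine-tuned Whisper checkpoint with faster-whisper.
# One-time conversion with the CTranslate2 CLI (assuming it converts cleanly):
#   ct2-transformers-converter --model Shiry/whisper-large-v2-he \
#       --output_dir whisper-large-v2-he-ct2 --quantization float16
from faster_whisper import WhisperModel

model = WhisperModel('whisper-large-v2-he-ct2', device='cuda', compute_type='float16')
segments, info = model.transcribe('segment.mp3', language='he', beam_size=5)  # placeholder file
for seg in segments:
    print(f'[{seg.start:.2f} -> {seg.end:.2f}] {seg.text}')
```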

I will probably start focusing on that again very soon - 1-2 weeks.

2.1. I have a Hebrew recognition model implemented on VOSK-API, but I'm sorry - I can only use it myself; I have no right to transfer or sell it. This model solves many problems at once: it acts both as a VAD and as an annotation tool, and it processes dirty audio several times faster than Whisper. If you are interested in such a model, I think this question could somehow be resolved - write to me by email, and I will act only as an intermediary. Alternatively, you could simply order server capacity to process this audio.

Interesting. I will write to you personally, at least to learn how this model came into existence.
Separately, I'm trying to get a large corporation to provide basic funding - if that happens, it will be easier to use the more expensive hardware.

  1. Yes, I agree with you, but there is a certain list of channels that license their content under Creative Commons. So I don't see any violation of rules, or of any terms of use of these materials, here.

That's true.
Our main goal is to make sure corporations - who may have little interest in Hebrew - get everything "ready-made" so they can just apply it and be done.
For that, we need everything solved and wrapped up nicely.

We have some interesting data sources coming up soon - I hope we'll get them in time for our next release, ideally early/mid September.

  1. Here is where I expressed my point of view on your dataset: https://t.me/speech_recognition_uk/27265. You are welcome to join our group... To avoid duplicating its content here, I am leaving you the link. BUT please understand: when I saw a dataset available for free use, I was quite glad. A day later, however, after downloading it and converting it into the dataset format I needed, I said a bit of what I was thinking. Don't be offended if something is off.
    And my wish for you: rework the whole dataset quickly and make it the best in the world, since it is effectively the first freely available one.

I went through parts of the thread and translated messages one by one.
It is good I have a few good Ukrainian friends, so I was not offended. ;)

I did not understand your comment about spacing and names.
In our dataset, 'source' and 'episode' are pretty much the source name and the specific episode name, as we downloaded them.
Is this what troubled you in the layout, or was it something else?

If you can provide an example and rationale, that would help make v2 nicer to use (and also say why this matters - again, I'm not an ML/DS guy, and have no idea how these datasets are best structured).

P.S. It is interesting to talk with an intelligent person...
Sorry for my Ukrainian. But if I understood you correctly, this is the best way for us to communicate.

Examples
The longest file I received. But it needs to be split up by words.

תחילת ספטמבר עשרים עשרים ושניים בתקתוק היו לה אז משהו כמו שישים אלף עוקבים נכון לזמן הפרק רגע לפני הבחירות היא כמעט הכפיל את הכמות עם למעלה ממאה ועשר אלף עוקבים על פניו הצלחה פנומנלית לפחות מבחינה שיווקית ונוסיף לזה לפי הפוסטים שלה היא גם התארסה ממש עכשיו נראה שהכל

  • I'm not sure I understand the problem.
    This is 20 seconds, and the transcribed text is mostly correct.
    I was told by people who I believe have relevant experience that up to 30 seconds is fine, and that full time<->text alignment is not required for such a time period.
    Are you expecting a shorter period? The same period but with accurate per-word timing? Something else?

Of course, the quality is not great, but this is a simple cut without quality control.

מה זה כוכב סופר נובה

  • The audio here has good quality, and a good transcription.
    At least to me, as a native Hebrew speaker, it sounds good and clear.

Can you explain the issue?

כנראה השפיעו אותר על התנהלות העניינים

  • Same.

This example is the result of the VOSK-API model.

Since Hebrew is spoken very quickly, a 30-second file contains a lot of words, which strongly affects the quality of the transcription; that, in turn, hurts the quality of the model being trained.

If the file names contain spaces or other non-standard characters, this can cause errors when building the model. A lot depends on which framework you build the model with; clean names simply help avoid unexpected errors.
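
To illustrate that point, a small, hypothetical normalization helper (the function and the sample name are mine, not anything from the actual pipeline):

```python
# Hedged sketch: stripping spaces and non-ASCII characters from file names
# so downstream training frameworks don't trip over them.
import re
import unicodedata

def sanitize(name: str) -> str:
    # Decompose accents, drop what can't be ASCII-encoded,
    # then collapse runs of unsafe characters into single underscores.
    name = unicodedata.normalize('NFKD', name).encode('ascii', 'ignore').decode()
    return re.sub(r'[^A-Za-z0-9._-]+', '_', name).strip('_')

print(sanitize('my episode (final) #3.mp3'))  # -> my_episode_final_3.mp3
```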

I have a friend in Israel who runs something like a call center - I can't really describe his work. With that funding, 5 or 6 versions of the model for Vosk-API were created, and I built the dataset for the model. That's why I'm so interested in your dataset. For some reason very few people are pushing this technology into open access, which surprised me.

Any implementation of Whisper is a good audio transcription tool, so the result will depend on which option you choose.

I am also not an expert in this area, just trying to share my experience.

Google Translate is the easiest option to communicate in different languages.

@tarasfrompir sorry for the delayed response.

Let me do some research around this and be back to you.
We're planning on releasing a v2 of this dataset in the near future, and I will try to integrate your comments into it.

I sent you, via Telegram, an example of a speech recognition server in operation - on the basis of which almost all of your wishes can be implemented.

Why not use NVIDIA NeMo VAD?

It's always like this: we started well, but there's no end in sight...

How is the creation of the dataset going?

We're actually making good progress.
I had a few personal events that delayed me - sorry for not replying earlier.

Practically:

  1. We continued collecting data, and now have >10k hours under license.
  2. We will soon rerun VAD with new settings (min: 2s, max: 30s) - not yet sure whether with Silero or NeMo. Once that is done, we'll release that dataset.
  3. We're making progress on a plan for manual transcription. Hoping to have more to share mid-October.

Why not use NVIDIA NeMo VAD?

I am unaware of specific benchmarks showing NeMo has an edge, and Silero's CPU-only mode is very fast.

  1. We continued collecting data, and now have >10k hours under license.

Where can I download this data, or can you share it? I've upgraded my hardware a little, and I can and want to experiment with your data.

Are you happy with the raw data (pre-VAD)?

We are now preparing the VAD data, and will later start with whisper large.

If the raw data is good enough, I can upload it in the next few days.

Yes, I need the raw data; I will try to process it with my tools.

@tarasfrompir the audio-base dataset has just been updated and is now >10k hours.
Splits and transcripts are still work-in-progress.


I'm already downloading your updated version

If you publish any software based on it, it would be nice to get a link/credits :)

If you publish any software based on it, it would be nice to get a link/credits :)

This is for amateur use only.
I don't make any products for business.
If it nevertheless gets used somewhere, a link to your project will certainly be included.

@benderrodriguez Hey, can you please share about progress in manual transcriptions?
Thank you!

Yes.

  1. We reran ~3000 hours with new VAD settings and whisper large-v2.
    Looks much better.

  2. We started manual transcriptions using crowd-sourcing about a week ago.
    Currently have ~10 hours of manually-corrected content.

Not sure when to upload the data yet (10 hours? 100? 3000?)

I think 50-100 hours is a good start... that will allow some improvement in Whisper fine-tuning. There are also ~50 hours of kan11 and Google FLEURS (on Hugging Face), so it already sums to 100-150 hours (even though those are not punctuated).
Is the manual transcription punctuated the way Whisper does it? (I guess yes, if you're using large-v2 as the base for manual transcription.)

It is punctuated, though I do not believe the punctuation quality is on the same level as the transcriptions.

I guess it's not required to be as precise as the transcription.
Thanks!

@benderrodriguez
Maybe you can even open a new repo on Hugging Face, and separate the manual transcriptions from the automatic ones until you have 100% of it.

Can you please give an update? Do you already have 100 hours of manual transcriptions, or more than that?
Based on the progress so far, can you share your expectations for when you plan to have 3000 hours of manual transcriptions, or even 100% of it - if it's possible to set expectations about that?
Thank you 🙏


Can you tell me more about your interests?

As for our status:

  1. We are now preparing v3 of our dataset. It has:
    • 13k hours of raw audio
    • post-VAD content, made up of segments 2-29 seconds in size
    • Whisper v3 transcriptions for all content
  2. We have ~70 hours of labeled data, and will upload it as well (tagged).
  3. Hoping to start model training soon.

It would be nice if you could contact me and explain how you're planning on using this.

I've sent a connection request with a message on LinkedIn.


@tarasfrompir new version coming out this week.
It will have:

  1. ~8100 post-VAD hours of audio.
  2. VAD settings of 2-29 seconds per segment (no mini-segments anymore).
  3. Nicer filenames, clearly indicating the sources (though not perfect).
  4. whisper-large-v2 auto-transcription.

Soon afterwards, we're planning on releasing our first set of transcriptions for >80 hours, manually transcribed.

It's very good that you are making progress, and I am happy to follow the developments.

I experimented with Silero VAD and realized that it works VERY slowly on large files - it will cut up a one-hour file very slowly, and I suspect you have no shortage of such files. This is probably why you are moving so slowly in this direction. Processing files with Whisper is good too, of course, but it also isn't very fast. I use a recognition server based on Kaldi; it processes all of this an order of magnitude faster and is not very demanding on resources.

I can once again offer my own recognition service for testing. It handles about 150-200 threads at the same time - even more is possible, of course, but then I'm not sure the recognition speed would hold up. I am processing 100 threads, but the computer simply cannot handle that much data reading and writing. The quality is certainly not great, but the filtering can be adjusted to your liking. And if you do post-processing with Whisper, it will be absolutely wonderful, because then you can compare the results and weed out bad records once more.
If you are interested in trying this service, please contact me.

Regarding manual labeling, I would first label everything automatically, and only then do manual verification and filter out the excess.

I have already collected about 5 thousand hours of filtered dataset myself, albeit from my own data, which I use only for research purposes. I cannot post it publicly because I lack rights to the source material. If it were possible to download your source material somewhere - say via FTP or a file-sharing service - I would try to do something with it.


A few notes/thoughts:

  1. When you say Silero VAD is slow, can you quantify that relative to other solutions?
    I was running it as 4 jobs in parallel, and had multiple unexpected process kills.

I debugged this last week and found that for large files, it resamples to 16kHz inefficiently, causing a memory spike that makes jobs slow (swap) or kills them (OOM).
Solution is merged: https://github.com/snakers4/silero-vad/pull/421

Separately, I am doing the split outside Silero using ffmpeg; that also required dedicated code to handle.
You can view it here: https://github.com/yairl/ivrit.ai/blob/9d5527078755f60c7279cd025e1ddb2fb979b79b/process.py#L79
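
The idea, roughly (a simplified sketch, not the linked process.py code; file names and timestamps are illustrative):

```python
# Hedged sketch: cutting one VAD segment out of a long source file with ffmpeg,
# re-encoding to mono 16 kHz as is typical for ASR pipelines.
import subprocess

def cut_segment(src: str, dst: str, start: float, end: float) -> None:
    subprocess.run(
        ['ffmpeg', '-y', '-i', src,
         '-ss', str(start), '-to', str(end),
         '-ac', '1', '-ar', '16000',
         dst],
        check=True)

cut_segment('episode.mp3', 'episode.0001.mp3', 12.4, 31.9)
```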

With these changes, handling ~10k hours of audio took ~2 days on 4 cores in parallel.

  2. I am running whisper on all snippets as a baseline for manual transcription.
    It's running on an AWS g5 machine, and I assume it will take ~1 week for the ~5000 post-VAD hours I currently have.

  3. Our system's flow is Raw -> VAD -> Whisper -> Manual.
    All data goes through this; for each of the ~80 transcribed hours we have the original audio and offsets into it; the post-VAD audio; the whisper transcription, including statistics such as avg_logprob; and the manual transcription (see the sketch below for what one record carries).
    ~50% of the segments had whisper errors that needed correcting.
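
To make that concrete, a purely hypothetical illustration of what one such record could carry; the field names are assumptions, not ivrit.ai's actual schema:

```python
# Hypothetical example record for one manually-corrected segment.
segment = {
    'source': 'some-podcast',         # show/channel the audio came from
    'episode': 'episode-042',         # specific episode
    'start': 812.3, 'end': 829.1,     # offsets into the raw audio, in seconds
    'audio': 'post-vad/0001234.mp3',  # the post-VAD snippet
    'whisper_text': '...',            # automatic transcription
    'avg_logprob': -0.21,             # whisper confidence statistic
    'manual_text': '...',             # human-corrected transcription
}
```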


I have already collected about 5 thousand hours of filtered dataset myself, albeit from my own data, which I use only for research purposes. I cannot post it publicly because I lack rights to the source material. If it were possible to download your source material somewhere - say via FTP or a file-sharing service - I would try to do something with it.

The audio-base repo is updated with all ~13k pre-VAD audio hours.
If you prefer an easier download interface, I can connect you to our dagshub repo.


Please, if you can do that


https://dagshub.com/ivrit.ai/audio-raw

Let me know if that works.
Same ivrit.ai license as here.

This is a demonstration of a dataset based on processing your data.
Did I write everything correctly about the license?

https://huggingface.co/datasets/tarasfrompir/Ivrit.ai-based


This is not a cc-by license, but an augmented ivrit.ai license.
FYI, we just uploaded a new version of this repo with ~3300 hours of whisper-large-v2-transcribed audio.

benderrodriguez changed discussion status to closed
