Commit cffe859 (parent: 06d5b0a): Update README.md

README.md (changed)
@@ -16,11 +16,11 @@ This repository provides tools for detecting and counting the number of speakers
 
 The pre-trained system processes audio inputs to detect the presence and count of speakers, providing output in the following format:
 
-
+```
 0.00-2.50 has 1 speaker
 2.50-4.20 has 2 speakers
 4.20-5.30 has no speakers
-
+```
 
 The system expects input recordings sampled at 16kHz. If your signal has a different sample rate, resample it using torchaudio before using the interface.
 
@@ -83,16 +83,15 @@ Please note that these models are evaluated under controlled conditions and thei
 This project includes interfaces for processing audio files, detecting speech, and counting speakers. It leverages pre-trained models and custom scripts for refining and aggregating predictions.
 
 ### Installation
-
+```
 pip install speechbrain
-
-'''
+```
 Please look at [SpeechBrain](https://speechbrain.github.io/) for tutorials or more information.
 
 ## Using the Speaker Counter Interface
 ### For XVector & ECAPA-TDNN:
 
-
+```
 from interface.SpeakerCounter import SpeakerCounter
 
 wav_path = "path/to/your/audio.wav"
@@ -104,10 +103,10 @@ speaker_counter = SpeakerCounter.from_hparams(source=model_path, savedir=save_di
 
 # Run inference
 speaker_counter.classify_file(wav_path)
-
+```
 
 ### For a Self-supervised model:
-
+```
 from interface.SpeakerCounterSelfsupervisedMLP import SpeakerCounter
 
 wav_path = "path/to/your/audio.wav"
@@ -119,7 +118,7 @@ audio_classifier = SpeakerCounter.from_hparams(source=model_path, savedir=save_d
 
 # Run inference
 audio_classifier.classify_file(wav_path)
-
+```
 
 
 ## **Setup and Training Instructions**
@@ -182,7 +181,7 @@ To train the SelfSupervised XVector model run the following command.
 
 ### **This Project was developed completely using [SpeechBrain](https://speechbrain.github.io/) **
 
 ## Reference
-
+```
 @misc{speechbrain,
 title={{SpeechBrain}: A General-Purpose Speech Toolkit},
 author={Mirco Ravanelli and Titouan Parcollet and Peter Plantinga and Aku Rouhe and Samuele Cornell and Loren Lugosch and Cem Subakan and Nauman Dawalatabad and Abdelwahab Heba and Jianyuan Zhong and Ju-Chieh Chou and Sung-Lin Yeh and Szu-Wei Fu and Chien-Feng Liao and Elena Rastorgueva and François Grondin and William Aris and Hwidong Na and Yan Gao and Renato De Mori and Yoshua Bengio},
@@ -192,4 +191,4 @@ To train the SelfSupervised XVector model run the following command.
 primaryClass={eess.AS},
 note={arXiv:2106.04624}
 }
-
+```