url (stringlengths 23-7.17k) | text (stringlengths 0-1.65M)
---|---
https://huggingface.co/superb/wav2vec2-large-superb-sid | Wav2Vec2-Large for Speaker Identification
Model description
This is a ported version of S3PRL's Wav2Vec2 for the SUPERB Speaker Identification task.
The base model is wav2vec2-large-lv60, which is pretrained on 16kHz sampled speech audio. When using the model, make sure that your speech input is also sampled at 16kHz.
For more information refer to SUPERB: Speech processing Universal PERformance Benchmark
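If your audio is stored at a different sampling rate, resample it to 16kHz before inference. A minimal sketch, assuming a hypothetical local file name:

```python
# "my_recording.wav" is a placeholder; librosa resamples to 16 kHz on load.
import librosa

speech, sr = librosa.load("my_recording.wav", sr=16000, mono=True)
assert sr == 16000
```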
Task and dataset description
Speaker Identification (SI) classifies each utterance by its speaker identity as a multi-class classification task, where the speakers are drawn from the same predefined set for both training and testing. The widely used VoxCeleb1 dataset is adopted.
For the original model's training and evaluation instructions refer to the S3PRL downstream task README.
Usage examples
You can use the model via the Audio Classification pipeline:
```python
from datasets import load_dataset
from transformers import pipeline

dataset = load_dataset("anton-l/superb_demo", "si", split="test")

classifier = pipeline("audio-classification", model="superb/wav2vec2-large-superb-sid")
labels = classifier(dataset[0]["file"], top_k=5)
```
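The pipeline returns a list of dictionaries with "label" and "score" keys, sorted by score. A quick way to inspect the predictions (the values shown in the comment are illustrative, not real output):

```python
# labels looks like [{"label": "id10003", "score": 0.92}, ...]
for pred in labels:
    print(f"{pred['label']}: {pred['score']:.3f}")
```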
Or use the model directly:
```python
import torch
import librosa
from datasets import load_dataset
from transformers import Wav2Vec2ForSequenceClassification, Wav2Vec2FeatureExtractor

def map_to_array(example):
    speech, _ = librosa.load(example["file"], sr=16000, mono=True)
    example["speech"] = speech
    return example

# load a demo dataset and read audio files
dataset = load_dataset("anton-l/superb_demo", "si", split="test")
dataset = dataset.map(map_to_array)

model = Wav2Vec2ForSequenceClassification.from_pretrained("superb/wav2vec2-large-superb-sid")
feature_extractor = Wav2Vec2FeatureExtractor.from_pretrained("superb/wav2vec2-large-superb-sid")

# compute attention masks and normalize the waveform if needed
inputs = feature_extractor(dataset[:2]["speech"], sampling_rate=16000, padding=True, return_tensors="pt")

logits = model(**inputs).logits
predicted_ids = torch.argmax(logits, dim=-1)
labels = [model.config.id2label[_id] for _id in predicted_ids.tolist()]
```
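If you need normalized confidences rather than raw logits, a softmax over the class dimension yields per-speaker probabilities. This is a small extra step, not part of the original example:

```python
# Convert logits to probabilities and read off the most likely speaker per utterance.
probs = torch.softmax(logits, dim=-1)
top_probs, top_ids = probs.max(dim=-1)
```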
Eval results
The evaluation metric is accuracy.
| | s3prl | transformers |
|---|---|---|
| test | 0.8614 | 0.8613 |
BibTeX entry and citation info
```bibtex
@article{yang2021superb,
  title={SUPERB: Speech processing Universal PERformance Benchmark},
  author={Yang, Shu-wen and Chi, Po-Han and Chuang, Yung-Sung and Lai, Cheng-I Jeff and Lakhotia, Kushal and Lin, Yist Y and Liu, Andy T and Shi, Jiatong and Chang, Xuankai and Lin, Guan-Ting and others},
  journal={arXiv preprint arXiv:2105.01051},
  year={2021}
}
```
|
https://huggingface.co/datasets?other=annotations_creators%3Aother | liwu/MNBVC
Viewer • Updated 8 days ago • 802 • 250
Moo/korean-parallel-corpora
Viewer • Updated Jul 1, 2022 • 135 • 4
speech_commands
Viewer • Updated Jun 1 • 751 • 13
clue
Viewer • Updated May 25 • 2.05k • 26
discovery
Viewer • Updated Jun 2 • 540 • 5
glue
Viewer • Updated Jun 1 • 1.31M • 236
hindi_discourse
Viewer • Updated Jan 25 • 78 • 1
indic_glue
Viewer • Updated Jun 9 • 2.37k • 4
kannada_news
Viewer • Updated Jan 25 • 73
qa4mre
Viewer • Updated Apr 5 • 1.02k • 2
snow_simplified_japanese_corpus
Viewer • Updated Nov 3, 2022 • 266 • 11
superb
Viewer • Updated Jan 25 • 3.37k • 18
Atsushi/fungi_diagnostic_chars_comparison_japanese
Viewer • Updated Aug 26 • 93
Atsushi/fungi_indexed_mycological_papers_japanese
Viewer • Updated Aug 26 • 73
Atsushi/fungi_trait_circus_database
Viewer • Updated Dec 26, 2022 • 72
KBLab/overlim
Viewer • Updated Oct 25, 2022 • 1.27k • 3
anton-l/superb
Viewer • Updated Jul 4, 2022 • 254 • 1
kudo-research/mustc-en-es-text-only
Viewer • Updated Oct 22, 2022 • 2
meghanabhange/hilm141021
Viewer • Updated Oct 20, 2022 • 2
meghanabhange/hitalm141021
Viewer • Updated Oct 20, 2022 • 2
meghanabhange/talm141021
Viewer • Updated Oct 20, 2022 • 2
sebastian-hofstaetter/tripclick-training
Viewer • Updated Jul 26, 2022 • 2
yuanchuan/annotated_reference_strings
Viewer • Updated Oct 26, 2022 • 4 • 1
crabz/stsb-sk
Viewer • Updated Oct 23, 2022 • 2
adv_glue
Viewer • Updated Jun 1 • 1.11k • 4
hackathon-pln-es/readability-es-caes
Viewer • Updated Apr 13 • 4
mwritescode/slither-audited-smart-contracts
Viewer • Updated Jul 14, 2022 • 4.18k • 15
silver/lccc
Viewer • Updated Nov 6, 2022 • 4 • 10
lccc
Viewer • Updated Nov 18, 2022 • 84 • 13
JulesBelveze/tldr_news
Preview • Updated Aug 5, 2022 • 1.42k • 7 |
https://huggingface.co/superb-hidden-set | SUPERB Hidden-set Committee
superb-hidden-set
Research interests
None yet
Organizations
models
None public yet
datasets
None public yet |
https://huggingface.co/datasets/anton-l/superb/discussions | Sub-tasks: keyword-spotting speaker-identification intent-classification
Languages: English
Multilinguality: monolingual
Size Categories: unknown
Language Creators: other
Annotations Creators: other
Source Datasets: original extended|librispeech_asr extended|other-librimix
ArXiv:
License:
Fix task tags
#3 opened 11 months ago by albertvillanova |
https://huggingface.co/superb/superb-submission | SUPERB Submission Template
Welcome to the SUPERB Challenge! SUPERB is a collection of benchmarking resources for evaluating the capability of a universal shared representation for speech processing. It comes with a benchmark on publicly available datasets and a challenge on a secret, unreleased hidden dataset. For the SUPERB Challenge, a challenging hidden dataset was newly recorded to evaluate the ultimate generalizability across various tasks and data.
You can participate in the challenge by simply submitting your self-supervised (SSL) pretrained models (model definition & pretrained weights), and we will benchmark them on the hidden datasets. This repository contains useful tools to let you easily submit your models privately for evaluation to the challenge hidden-set leaderboard:
Generate a submission template
Validate the format/interface correctness of your model
Upload to the Hugging Face Hub (privately)
Submit the upload information to the SUPERB website
Note 1.
We accept pre-trained models in PyTorch by default. If you wish to submit upstreams in non-PyTorch frameworks, please email superb.announcement@gmail.com!
Note 2.
If you are unable to submit the pre-trained model, please email superb.announcement@gmail.com so we can see how to help!
Quickstart
1. Add model interfaces
forward
Extract features from waveforms.
Input: a list of waveforms sampled at 16000 Hz:
```python
import torch

SAMPLE_RATE = 16000
BATCH_SIZE = 8
EXAMPLE_SEC = 10

wavs = [torch.randn(SAMPLE_RATE * EXAMPLE_SEC).cuda() for _ in range(BATCH_SIZE)]
```
Output: a dictionary with the key "hidden_states" (kept for compatibility with older versions). The value is a list of padded sequences, each with the same shape (batch_size, max_sequence_length_of_batch, hidden_size), so that weighted-sum can work. You are welcome to perform task-specific or task-independent pre-/post-processing on the upstream's raw hidden states, including upsampling and downsampling. However, all the values must come from a single upstream model:
```python
tasks = ["hidden_states", "PR", "SID", "ER", "ASR", "ASV", "SD", "QbE", "ST", "SS", "SE", "secret"]
for task in tasks:
    # you can do task-specific pre-/post-processing depending on the arg "upstream_feature_selection"
    results = upstream(wavs, upstream_feature_selection=task)
    hidden_states = results["hidden_states"]
    assert isinstance(results, dict)
    assert isinstance(hidden_states, list)

    for state in hidden_states:
        assert isinstance(state, torch.Tensor)
        assert state.dim() == 3, "(batch_size, max_sequence_length_of_batch, hidden_size)"
        assert state.shape == hidden_states[0].shape
```
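For illustration, a minimal upstream that passes the checks above could be structured as in the sketch below. This skeleton is an assumption added for clarity, not the template's actual code; a real submission wraps a pretrained SSL model instead of emitting zeros:

```python
import torch
import torch.nn as nn

class DummyUpstream(nn.Module):
    # A stand-in upstream: real submissions wrap a pretrained SSL model.
    HIDDEN_SIZE = 256        # hypothetical feature dimension
    DOWNSAMPLE_RATE = 160    # standard 10 ms stride at 16 kHz

    def forward(self, wavs, upstream_feature_selection="hidden_states"):
        # Pad to the longest waveform in the batch, then emit one "layer" of features.
        max_len = max(wav.size(0) for wav in wavs)
        n_frames = max_len // self.DOWNSAMPLE_RATE
        states = torch.zeros(len(wavs), n_frames, self.HIDDEN_SIZE)
        return {"hidden_states": [states]}

    def get_downsample_rates(self, task: str) -> int:
        return self.DOWNSAMPLE_RATE
```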
get_downsample_rates
Provide the downsample rate from 16000 Hz waveforms for each task's representation in the dict. For the standard 10 ms stride representation, the downsample rate is 160:
```python
SAMPLE_RATE = 16000
MSEC_PER_SEC = 1000

downsample_rate = SAMPLE_RATE * 10 / MSEC_PER_SEC  # 160
```
The downsample rate will be used to:
Calculate the valid representation length of each utterance in the output padded representation.
Prepare the training materials according to the representation's downsample rate for frame-level tasks, e.g. SD, SE, and SS.
Input: the task key (str)
Output: the downsample rate (int) of the representation for that task
```python
for task in tasks:
    assert isinstance(task, str)
    downsample_rate = upstream.get_downsample_rates(task)
    assert isinstance(downsample_rate, int)
    print(f"The upstream's representation for {task}"
          f" has the downsample rate of {downsample_rate}.")
```
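As a usage sketch of the first purpose listed above, the valid (unpadded) number of frames for each utterance is just an integer division of the waveform length by the downsample rate; the variable names here are illustrative:

```python
# wavs: list of 1-D waveform tensors at 16 kHz, as in the forward example above
wav_lengths = [wav.size(0) for wav in wavs]
feat_lengths = [length // downsample_rate for length in wav_lengths]
```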
2. Create an account and organization on the Hugging Face Hub
First, create an account on the Hugging Face Hub; you can sign up here if you haven't already! Next, create a new organization and invite the SUPERB Hidden Set Committee to join. You will upload your model to a repository under this organization so that its members can access the model even though it is not publicly available.
superb-hidden-set
3. Create a template repository on your machine
The next step is to create a template repository on your local machine that contains various files and a CLI to help you validate and submit your pretrained models. The Hugging Face Hub uses Git Large File Storage (LFS) to manage large files, so first install it if you don't have it already. For example, on macOS you can run:
```bash
brew install git-lfs
git lfs install
```
Next, run the following commands to create the repository. We recommend creating a Python virtual environment for the project, e.g. with Anaconda:
```bash
# Create and activate a virtual environment
conda create -n superb-submit python=3.8 && conda activate superb-submit

# Install the following libraries
pip install cookiecutter huggingface-hub==0.0.16

# Create the template repository
cookiecutter git+https://huggingface.co/superb/superb-submission
```
This will ask you to specify your Hugging Face Hub username, password, organisation, and the name of the repository:
```
hf_hub_username [<huggingface>]:
hf_hub_password [<password>]:
hf_hub_organisation [superb-submissions]:
repo_name [<my-superb-submissions>]:
```
This will trigger the following steps:
Create a private dataset repository on the Hugging Face Hub under {hf_hub_organisation}/{repo_name}
Clone the repository to your local machine
Add various template files, commit them locally to the repository, and push them to the Hub
The resulting repository should have the following structure:
```
my-superb-submission
├── LICENSE
├── README.md          <- The README with submission instructions
├── cli.py             <- The CLI for validating predictions etc.
├── requirements.txt   <- The required packages for the submissions
├── expert.py          <- Your model definition
└── model.pt           <- Your model weights
```
4. Install the dependencies
The final step is to install the project's dependencies:
```bash
# Navigate to the template repository
cd my-superb-submission

# Install dependencies
python -m pip install -r requirements.txt
```
That's it! You're now all set to start pretraining your speech models - see the instructions below on how to submit them to the Hub.
Submitting to the leaderboard
To make a submission to the leaderboard, there are 4 main steps:
Modify expert.py and replace model.pt so that we can initialize an upstream model following the challenge policy with the line below (a hypothetical skeleton of expert.py is sketched after these steps):
upstream = UpstreamExpert(ckpt="./model.pt")
Package dependency: note that the steps above install only the torch package. If your model needs more packages, you can modify requirements.txt to meet your needs and install them inside the current conda environment. We will install the packages you list in requirements.txt before initializing the upstream model.
Validate that the upstream model's interface meets the requirements in the challenge policy. If everything is correct, you should see the following message: "All submission files validated! Now you can make a submission."
python cli.py validate
Push the model to the Hub! If there are no errors, you should see the following message: "Upload successful!"
python cli.py upload "commit message: my best model"
Make a submission at the SUPERB website by uniquely identifying the uploaded model with the following information, which can be shown by:
python cli.py info
Organization Name
Repository Name
Commit Hash (full 40 characters)
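As referenced in step 1, expert.py must expose an UpstreamExpert class whose constructor accepts the checkpoint path. A hypothetical skeleton, assuming the checkpoint is a plain PyTorch file; the real template defines the exact signature:

```python
# expert.py -- hypothetical skeleton; wire in your own SSL model here.
import torch

class UpstreamExpert(torch.nn.Module):
    def __init__(self, ckpt: str = "./model.pt", **kwargs):
        super().__init__()
        # Load the pretrained weights shipped alongside this file.
        self.pretrained = torch.load(ckpt, map_location="cpu")
```

It must also implement the forward and get_downsample_rates interfaces described in the Quickstart.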
After you finish the above 4 steps, you will see a new entry on your SUPERB profile page (login required) which does not have any benchmark numbers yet. Please wait for us to finetune it on the hidden dataset and collect the benchmark results. The results will be revealed within one week. Please stay tuned! |
https://huggingface.co/leo19941227 | Shu-wen (Leo) Yang
leo19941227
Research interests
speech, self-supervised learning
Organizations
models 3
datasets 3 |
https://huggingface.co/datasets/anton-l/superb/tree/main | Sub-tasks: keyword-spotting speaker-identification intent-classification
Languages: English
Multilinguality: monolingual
Size Categories: unknown
Language Creators: other
Annotations Creators: other
Source Datasets: original extended|librispeech_asr extended|other-librimix
ArXiv:
License:
superb
3 contributors
History: 3 commits
Contributors: anton-l (HF staff), julien-c (HF staff)
Latest commit: Fix `license` metadata (#1) · a491815 · about 1 year ago
| File | Size | Last commit | When |
|---|---|---|---|
| dummy | | Upload | almost 2 years ago |
| .gitattributes | 1.17 kB | initial commit | almost 2 years ago |
| README.md | 21.1 kB | Fix `license` metadata (#1) | about 1 year ago |
| dataset_infos.json | 38 kB | Upload | almost 2 years ago |
| superb.py | 30.2 kB | Upload | almost 2 years ago |