Joseph Feng committed
Commit de6da40
1 Parent(s): 0d9842f

enable task-specified preprocessing

README.md CHANGED
@@ -2,7 +2,7 @@
 
 Welcome to the [SUPERB Challenge](https://superbbenchmark.org/challenge-slt2022/challenge_overview)! SUPERB is a collection of benchmarking resources to evaluate the capability of a universal shared representation for speech processing. It comes with a benchmark on publicly available datasets and a challenge on a secret, unreleased hidden dataset. In the SUPERB Challenge, a challenging hidden dataset is newly recorded to evaluate the ultimate generalizability across various tasks and data.
 
- You can participate in the challenge by simply submitting your self-supervised (SSL) pretrained models (model definition & pretrained weights), and we benchmark them with the hidden dataset. This repository contains useful tools to let you easily [submit](https://superbbenchmark.org/submit) your models ***privately*** for evaluation to [the challenge hidden-set leaderboard](https://superbbenchmark.org/leaderboard?track=constrained&subset=Hidden+Dev+Set).
+ You can participate in the challenge by simply submitting your self-supervised (SSL) pretrained models (model definition & pretrained weights), and we benchmark them with the hidden datasets. This repository contains useful tools to let you easily [submit](https://superbbenchmark.org/submit) your models ***privately*** for evaluation to [the challenge hidden-set leaderboard](https://superbbenchmark.org/leaderboard?track=constrained&subset=Hidden+Dev+Set).
 
 1. Generate a submission template
 2. Validate the format/interface correctness of your model
@@ -32,16 +32,17 @@ Extract features from waveforms.
 BATCH_SIZE = 8
 EXAMPLE_SEC = 10
 wavs = [torch.randn(SAMPLE_RATE * EXAMPLE_SEC).cuda() for _ in range(BATCH_SIZE)]
- results = upstream(wavs)
 ```
 
- - **Output:** A dictionary with a key for each task. If any task-specific key is not present, a "hidden_states" key should be provided as the default. The value for each key is **a list** of padded sequences, each of shape **(batch_size, max_sequence_length_of_batch, hidden_size)**, so that weighted-sum works. You are welcome to perform preprocessing on the upstream's raw hidden states, including upsampling and downsampling. However, all the values must come from **a single upstream model**:
+ - **Output:** A dictionary with the key "hidden_states" (kept for compatibility with the old version). The value is **a list** of padded sequences, each of shape **(batch_size, max_sequence_length_of_batch, hidden_size)**, so that weighted-sum works. You are welcome to perform task-specific or task-independent pre-/post-processing on the upstream's raw hidden states, including upsampling and downsampling. However, all the values must come from **a single upstream model**:
 
 ```python
- assert isinstance(results, dict)
- tasks = ["PR", "SID", "ER", "ASR", "ASV", "SD", "QbE", "ST", "SS", "SE"]
+ tasks = ["hidden_states", "PR", "SID", "ER", "ASR", "ASV", "SD", "QbE", "ST", "SS", "SE", "secret"]
 for task in tasks:
-     hidden_states = results.get(task, results["hidden_states"])
+     # You can do task-specific pre-/post-processing depending on the arg "upstream_feature_selection"
+     results = upstream(wavs, upstream_feature_selection=task)
+     assert isinstance(results, dict)
+     hidden_states = results["hidden_states"]
     assert isinstance(hidden_states, list)
 
     for state in hidden_states:
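
For context on the shape requirement above: downstream probes stack all returned layers and learn a weighted sum over them, which only works when every layer shares one shape. A minimal illustrative sketch (the layer count and sizes are made up, not from this repo):

```python
import torch

# Hypothetical upstream output: 13 layers, each padded to
# (batch_size=8, max_len=500, hidden_size=768)
hidden_states = [torch.randn(8, 500, 768) for _ in range(13)]

# One scalar weight per layer (random here, learnable in practice)
weights = torch.softmax(torch.randn(len(hidden_states)), dim=0)

stacked = torch.stack(hidden_states, dim=0)  # (num_layers, batch, max_len, hidden)
weighted_sum = (weights.view(-1, 1, 1, 1) * stacked).sum(dim=0)  # (batch, max_len, hidden)
assert weighted_sum.shape == hidden_states[0].shape
```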
{{cookiecutter.repo_name}}/README.md CHANGED
@@ -2,7 +2,7 @@
 
 Welcome to the [SUPERB Challenge](https://superbbenchmark.org/challenge-slt2022/challenge_overview)! SUPERB is a collection of benchmarking resources to evaluate the capability of a universal shared representation for speech processing. It comes with a benchmark on publicly available datasets and a challenge on a secret, unreleased hidden dataset. In the SUPERB Challenge, a challenging hidden dataset is newly recorded to evaluate the ultimate generalizability across various tasks and data.
 
- You can participate in the challenge by simply submitting your self-supervised (SSL) pretrained models (model definition & pretrained weights), and we benchmark them with the hidden dataset. This repository contains useful tools to let you easily [submit](https://superbbenchmark.org/submit) your models ***privately*** for evaluation to [the challenge hidden-set leaderboard](https://superbbenchmark.org/leaderboard?track=constrained&subset=Hidden+Dev+Set).
+ You can participate in the challenge by simply submitting your self-supervised (SSL) pretrained models (model definition & pretrained weights), and we benchmark them with the hidden datasets. This repository contains useful tools to let you easily [submit](https://superbbenchmark.org/submit) your models ***privately*** for evaluation to [the challenge hidden-set leaderboard](https://superbbenchmark.org/leaderboard?track=constrained&subset=Hidden+Dev+Set).
 
 1. Generate a submission template
 2. Validate the format/interface correctness of your model
@@ -32,16 +32,17 @@ Extract features from waveforms.
 BATCH_SIZE = 8
 EXAMPLE_SEC = 10
 wavs = [torch.randn(SAMPLE_RATE * EXAMPLE_SEC).cuda() for _ in range(BATCH_SIZE)]
- results = upstream(wavs)
 ```
 
- - **Output:** A dictionary with a key for each task. If any task-specific key is not present, a "hidden_states" key should be provided as the default. The value for each key is **a list** of padded sequences, each of shape **(batch_size, max_sequence_length_of_batch, hidden_size)**, so that weighted-sum works. You are welcome to perform preprocessing on the upstream's raw hidden states, including upsampling and downsampling. However, all the values must come from **a single upstream model**:
+ - **Output:** A dictionary with the key "hidden_states" (kept for compatibility with the old version). The value is **a list** of padded sequences, each of shape **(batch_size, max_sequence_length_of_batch, hidden_size)**, so that weighted-sum works. You are welcome to perform task-specific or task-independent pre-/post-processing on the upstream's raw hidden states, including upsampling and downsampling. However, all the values must come from **a single upstream model**:
 
 ```python
- assert isinstance(results, dict)
- tasks = ["PR", "SID", "ER", "ASR", "ASV", "SD", "QbE", "ST", "SS", "SE"]
+ tasks = ["hidden_states", "PR", "SID", "ER", "ASR", "ASV", "SD", "QbE", "ST", "SS", "SE", "secret"]
 for task in tasks:
-     hidden_states = results.get(task, results["hidden_states"])
+     # You can do task-specific pre-/post-processing depending on the arg "upstream_feature_selection"
+     results = upstream(wavs, upstream_feature_selection=task)
+     assert isinstance(results, dict)
+     hidden_states = results["hidden_states"]
     assert isinstance(hidden_states, list)
 
     for state in hidden_states:
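
The bullet above permits upsampling or downsampling of the raw hidden states inside the upstream. A sketch of what a task-specific downsampling branch might look like (the helper name and the pooling choice are hypothetical, not part of the template):

```python
import torch
import torch.nn.functional as F

def maybe_resample(hidden_states, task):
    # Hypothetical task-specific post-processing: halve the frame rate
    # for an utterance-level task such as SID, leave other tasks untouched.
    if task == "SID":
        return [
            F.avg_pool1d(h.transpose(1, 2), kernel_size=2).transpose(1, 2)
            for h in hidden_states
        ]
    return hidden_states

hidden_states = [torch.randn(8, 500, 768) for _ in range(3)]
downsampled = maybe_resample(hidden_states, "SID")
assert downsampled[0].shape == (8, 250, 768)
```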
{{cookiecutter.repo_name}}/expert.py CHANGED
@@ -15,7 +15,8 @@ class Model(nn.Module):
         self.model1 = nn.Linear(1, HIDDEN_DIM)
         self.model2 = nn.Linear(HIDDEN_DIM, HIDDEN_DIM)
 
-    def forward(self, wavs):
+    def forward(self, wavs, upstream_feature_selection="hidden_states"):
+        # You can do task-specific pre-/post-processing based on upstream_feature_selection
         hidden = self.model1(wavs)
         # hidden: (batch_size, max_len, hidden_dim)
 
@@ -25,15 +26,24 @@ class Model(nn.Module):
         return [hidden, feature]
 
 class UpstreamExpert(nn.Module):
-    def __init__(self, ckpt: str = "./model.pt", **kwargs):
+    def __init__(
+        self,
+        ckpt: str = "./model.pt",
+        upstream_feature_selection: str = "hidden_states",
+        **kwargs):
         """
         Args:
             ckpt:
                 The checkpoint path for loading your pretrained weights.
                 Should be fixed as model.pt for SUPERB Challenge.
+            upstream_feature_selection:
+                The value could be
+                'hidden_states', 'PR', 'SID', 'ER', 'ASR', 'QbE', 'ASV', 'SD', 'ST', 'SE', 'SS', 'secret', or others (new tasks).
+                You can use it to control which task-specific pre-/post-processing to apply.
         """
         super().__init__()
         self.name = "[Example UpstreamExpert]"
+        self.upstream_feature_selection = upstream_feature_selection
 
         # You can use ckpt to load your pretrained weights
         ckpt = torch.load(ckpt, map_location="cpu")
@@ -57,21 +67,12 @@ class UpstreamExpert(nn.Module):
         wavs = pad_sequence(wavs, batch_first=True).unsqueeze(-1)
         # wavs: (batch_size, max_len, 1)
 
-        hidden_states = self.model(wavs)
+        hidden_states = self.model(wavs, upstream_feature_selection=self.upstream_feature_selection)
 
+        # Deprecated! Do not do any task-specific post-processing below.
+        # Use the init arg "upstream_feature_selection" to control which task-specific pre-/post-processing to apply.
         # The "hidden_states" key will be used as default in many cases
         # Other keys in this example are presented for the SUPERB Challenge
         return {
             "hidden_states": hidden_states,
-            "PR": hidden_states,
-            "SID": hidden_states,
-            "ER": hidden_states,
-            "ASR": hidden_states,
-            "QbE": hidden_states,
-            "ASV": hidden_states,
-            "SD": hidden_states,
-            "ST": hidden_states,
-            "SE": hidden_states,
-            "SS": hidden_states,
-            "secret": hidden_states,
         }
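
To see the new interface end to end, here is a minimal usage sketch, assuming the expert.py above is importable and a valid ./model.pt checkpoint exists (neither is guaranteed here); it mirrors the validation loop in the README:

```python
import torch
from expert import UpstreamExpert  # assumes expert.py is on the path

SAMPLE_RATE = 16000
wavs = [torch.randn(SAMPLE_RATE * 10) for _ in range(8)]

# One expert per task: the init arg selects which task-specific
# pre-/post-processing the upstream applies internally.
for task in ["hidden_states", "PR", "SID", "ASR"]:
    upstream = UpstreamExpert(ckpt="./model.pt", upstream_feature_selection=task)
    with torch.no_grad():
        results = upstream(wavs)
    assert isinstance(results, dict)
    assert isinstance(results["hidden_states"], list)
```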