The full dataset viewer is not available (click to read why). Only showing a preview of the rows.
The dataset generation failed
Error code:   DatasetGenerationError
Exception:    ArrowInvalid
Message:      JSON parse error: Column(/annot_1/instruction) changed from number to string in row 1
Traceback:    Traceback (most recent call last):
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/packaged_modules/json/json.py", line 160, in _generate_tables
                  df = pandas_read_json(f)
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/packaged_modules/json/json.py", line 38, in pandas_read_json
                  return pd.read_json(path_or_buf, **kwargs)
                File "/src/services/worker/.venv/lib/python3.9/site-packages/pandas/io/json/_json.py", line 815, in read_json
                  return json_reader.read()
                File "/src/services/worker/.venv/lib/python3.9/site-packages/pandas/io/json/_json.py", line 1025, in read
                  obj = self._get_object_parser(self.data)
                File "/src/services/worker/.venv/lib/python3.9/site-packages/pandas/io/json/_json.py", line 1051, in _get_object_parser
                  obj = FrameParser(json, **kwargs).parse()
                File "/src/services/worker/.venv/lib/python3.9/site-packages/pandas/io/json/_json.py", line 1187, in parse
                  self._parse()
                File "/src/services/worker/.venv/lib/python3.9/site-packages/pandas/io/json/_json.py", line 1403, in _parse
                  ujson_loads(json, precise_float=self.precise_float), dtype=None
              ValueError: Trailing data
              
              During handling of the above exception, another exception occurred:
              
              Traceback (most recent call last):
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 1854, in _prepare_split_single
                  for _, table in generator:
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/packaged_modules/json/json.py", line 163, in _generate_tables
                  raise e
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/packaged_modules/json/json.py", line 137, in _generate_tables
                  pa_table = paj.read_json(
                File "pyarrow/_json.pyx", line 308, in pyarrow._json.read_json
                File "pyarrow/error.pxi", line 154, in pyarrow.lib.pyarrow_internal_check_status
                File "pyarrow/error.pxi", line 91, in pyarrow.lib.check_status
              pyarrow.lib.ArrowInvalid: JSON parse error: Column(/annot_1/instruction) changed from number to string in row 1
              
              The above exception was the direct cause of the following exception:
              
              Traceback (most recent call last):
                File "/src/services/worker/src/worker/job_runners/config/parquet_and_info.py", line 1417, in compute_config_parquet_and_info_response
                  parquet_operations = convert_to_parquet(builder)
                File "/src/services/worker/src/worker/job_runners/config/parquet_and_info.py", line 1049, in convert_to_parquet
                  builder.download_and_prepare(
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 924, in download_and_prepare
                  self._download_and_prepare(
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 1000, in _download_and_prepare
                  self._prepare_split(split_generator, **prepare_split_kwargs)
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 1741, in _prepare_split
                  for job_id, done, content in self._prepare_split_single(
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 1897, in _prepare_split_single
                  raise DatasetGenerationError("An error occurred while generating the dataset") from e
              datasets.exceptions.DatasetGenerationError: An error occurred while generating the dataset

Need help to make the dataset viewer work? Make sure to review how to configure the dataset viewer, and open a discussion for direct support.

id_source
string
id_target
string
index_paragraph
int64
id_paragraph
string
parag-1
string
parag-2
string
list-sentences-1
list
list-sentences-2
list
BxmIj0nw6E
ZMGk-Rx5Lm
0
BxmIj0nw6E.ZMGk-Rx5Lm.00
Discussion on Improvements : We evaluated our approach by using more indicator functions (i.e., more r i ) and plotted the results in Fig. From the figure, the performance of our approach increases with more indicator functions being used and eventually converges. Additionally, the limit for each dataset in Fig. 4 is determined by the form of utilized indicator functions, which is g ( x ) = {|| x || ≤ r } in this work. To increase the limit, more advanced indicator functions are required, which is mentioned in Remark 3 that exploring other indicator functions will be our future works.
Discussion on Improvements : All the experiments used ten indicator functions. However, we have evaluated our approach by using more indicator functions (i.e., more r i ) and plotted the results in Fig. From the figure, the performance of our approach increases with more indicator functions being used and eventually converges. Additionally, the limit for each dataset in Fig. 4 is determined by the form of utilized indicator functions, which is g ( x ) = {|| x || ≤ r } in this work. To increase the limit, more advanced indicator functions are required, which is mentioned in Remark 3 that exploring other indicator functions will be our future works.
[ { "text": "Discussion on Improvements : We evaluated our approach by using more indicator functions (i.e., more r i ) and plotted the results in Fig." }, { "text": "From the figure, the performance of our approach increases with more indicator functions being used and eventually converges." }, { "text": "Additionally, the limit for each dataset in Fig." }, { "text": "4 is determined by the form of utilized indicator functions, which is g ( x ) = \n" }, { "text": "{|| x || ≤ r } in this work." }, { "text": "To increase the limit, more advanced indicator functions are required, which is mentioned in Remark 3 that exploring other indicator functions will be our future works." } ]
[ { "text": "Discussion on Improvements : All the experiments used ten indicator functions. However, we have evaluated our approach by using more indicator functions (i.e., more r i ) and plotted the results in Fig." }, { "text": "From the figure, the performance of our approach increases with more indicator functions being used and eventually converges." }, { "text": "Additionally, the limit for each dataset in Fig." }, { "text": "4 is determined by the form of utilized indicator functions, which is g ( x ) = \n" }, { "text": "{|| x || ≤ r } in this work." }, { "text": "To increase the limit, more advanced indicator functions are required, which is mentioned in Remark 3 that exploring other indicator functions will be our future works." } ]
ao24TOMZHY
i3w1gRPhug
0
ao24TOMZHY.i3w1gRPhug.00
Reinforcement learning (RL) aims to solve sequential decision problems and has received extensive attention in recent years (Mnih et al., 2015). However, the practical applications of RL meet several challenges, such as risky attempts during exploration, time-consuming data collecting phase and high sample complexity. Offline RL is capable of tackling these issues without interaction with the environment. It can get rid of unsafe exploration during training and could tap into existing large-scale datasets (Gulcehre et al., 2020; Fu et al., 2020).
Reinforcement learning (RL) aims to solve sequential decision problems and has received extensive attention in recent years (Mnih et al., 2015). However, the practical applications of RL meet several challenges, such as risky attempts during exploration and time-consuming data collecting phase. Offline RL is capable of tackling these issues without interaction with the environment. It can get rid of unsafe exploration and could tap into existing large-scale datasets (Gulcehre et al., 2020).
[ { "text": "Reinforcement learning (RL) aims to solve sequential decision problems and has received extensive attention in recent years (Mnih et al., 2015)." }, { "text": "However, the practical applications of RL meet several challenges, such as risky attempts during exploration, time-consuming data collecting phase and high sample complexity." }, { "text": "Offline RL is capable of tackling these issues without interaction with the environment." }, { "text": "It can get rid of unsafe exploration during training and could tap into existing large-scale datasets (Gulcehre et al., 2020; Fu et al., 2020)." } ]
[ { "text": "Reinforcement learning (RL) aims to solve sequential decision problems and has received extensive attention in recent years (Mnih et al., 2015)." }, { "text": "However, the practical applications of RL meet several challenges, such as risky attempts during exploration and time-consuming data collecting phase." }, { "text": "Offline RL is capable of tackling these issues without interaction with the environment." }, { "text": "It can get rid of unsafe exploration and could tap into existing large-scale datasets (Gulcehre et al., 2020)." } ]
ao24TOMZHY
i3w1gRPhug
1
ao24TOMZHY.i3w1gRPhug.01
Ghasemipour et al., 2021). However, it suffers from extrapolation error due to OOD actions. Some works attempt to penalize the Q-values of OOD actions (Kumar et al., 2020; An et al., 2021). Other methods force the trained policy to be close to the behavior policy by KL divergence (Wu et al., 2019), behavior cloning (Fujimoto & Gu, 2021), or Maximum Mean Discrepancy(MMD) (Kumar et al., 2019). These methods cannot eliminate extrapolation error and require a regularization hyperparameter to control the constraint level to balance pessimism and generalization. In addition, they define distances implicitly or explicitly to measure the trained policy’s closeness to the behavior policy. It is still challenging to determine which measurement matches offline RL best. Another branch chooses to only refer to the Q-values of in-sample actions when formulating the Bellman target without querying the values of actions not contained in the dataset (Brandfonbrener et al., 2021;
Ghasemipour et al., 2021). However, it suffers from extrapolation error due to OOD actions. Some works attempt to penalize the Q-values of OOD actions (Kumar et al., 2020; An et al., 2021). Other methods force the trained policy to be close to the behavior policy by KL divergence (Wu et al., 2019), behavior cloning (Fujimoto & Gu, 2021), or Maximum Mean Discrepancy(MMD) (Kumar et al., 2019). These methods cannot eliminate extrapolation error and require a regularization hyperparameter to control the constraint level to balance pessimism and generalization. Another branch chooses to only refer to the Q-values of in-sample actions when formulating the Bellman target without querying the values of actions not contained in the dataset (Brandfonbrener et al., 2021;
[ { "text": "Ghasemipour et al., 2021)." }, { "text": "However, it suffers from extrapolation error due to OOD actions." }, { "text": "Some works attempt to penalize the Q-values of OOD actions (Kumar et al., 2020; An et al., 2021)." }, { "text": "Other methods force the trained policy to be close to the behavior policy by KL divergence (Wu et al., 2019), behavior cloning (Fujimoto & Gu, 2021), or Maximum Mean Discrepancy(MMD)" }, { "text": "(Kumar et al., 2019)." }, { "text": "These methods cannot eliminate extrapolation error and require a regularization hyperparameter to control the constraint level to balance pessimism and generalization." }, { "text": "In addition, they define distances implicitly or explicitly to measure the trained policy’s closeness to the behavior policy." }, { "text": "It is still challenging to determine which measurement matches offline RL best." }, { "text": "Another branch chooses to only refer to the Q-values of in-sample actions when formulating the Bellman target without querying the values of actions not contained in the dataset (Brandfonbrener et al., 2021;" } ]
[ { "text": "Ghasemipour et al., 2021)." }, { "text": "However, it suffers from extrapolation error due to OOD actions." }, { "text": "Some works attempt to penalize the Q-values of OOD actions (Kumar et al., 2020; An et al., 2021)." }, { "text": "Other methods force the trained policy to be close to the behavior policy by KL divergence (Wu et al., 2019), behavior cloning (Fujimoto & Gu, 2021), or Maximum Mean Discrepancy(MMD)" }, { "text": "(Kumar et al., 2019)." }, { "text": "These methods cannot eliminate extrapolation error and require a regularization hyperparameter to control the constraint level to balance pessimism and generalization." }, { "text": "" }, { "text": "" }, { "text": "Another branch chooses to only refer to the Q-values of in-sample actions when formulating the Bellman target without querying the values of actions not contained in the dataset (Brandfonbrener et al., 2021;" } ]
YrJvprgJOB
YiJLxDsi4n
0
YrJvprgJOB.YiJLxDsi4n.00
Remarks. Theorem 1 holds for all and possibly different lengths of the two data sequences. This highlights the RNTK’s ability to produce a similarity measure Θ( x , x (cid:48) ) even if the inputs are of different lengths, without resorting to ad hockery such as zero padding the inputs to the same length. Dealing with data of different length is in sharp contrast to common kernels such as the classical radial basis function and polynomial kernels and the current NTKs. We showcase this capability below in Section 4.
Remarks. Theorem 1 holds generally for any two data sequences, including different lengths ones. This highlights the RNTK’s ability to produce a similarity measure Θ( x , x (cid:48) ) even if the inputs are of different lengths, without resorting to heuristics such as zero padding the inputs to the to the max length of both sequences. Dealing with data of different length is in sharp contrast to common kernels such as the classical radial basis functions, polynomial kernels, and current NTKs. We showcase this capability below in Section 4.
[ { "text": "Remarks." }, { "text": "Theorem 1 holds for all and possibly different lengths of the two data sequences." }, { "text": "This highlights the RNTK’s ability to produce a similarity measure Θ( x , x (cid:48) ) even if the inputs are of different lengths, without resorting to ad hockery such as zero padding the inputs to the same length." }, { "text": "Dealing with data of different length is in sharp contrast to common kernels such as the classical radial basis function and polynomial kernels and the current NTKs." }, { "text": "We showcase this capability below in Section 4." } ]
[ { "text": "Remarks." }, { "text": "Theorem 1 holds generally for any two data sequences, including different lengths ones." }, { "text": "This highlights the RNTK’s ability to produce a similarity measure Θ( x , x (cid:48) ) even if the inputs are of different lengths, without resorting to heuristics such as zero padding the inputs to the to the max length of both sequences." }, { "text": "Dealing with data of different length is in sharp contrast to common kernels such as the classical radial basis functions, polynomial kernels, and current NTKs." }, { "text": "We showcase this capability below in Section 4." } ]
J05LrUaunL
G5IzR7XI7
0
J05LrUaunL.G5IzR7XI7.00
Simultaneously matching the best model on classification accuracy and achieving perfect approximation of human similarity might not be possible, but we hypothesize that a good trade-off between the two would benefit decision support. We propose a novel multi-task learning method that combines supervised learning and metric learning. We supplement the standard maximum likelihood objective with a loss function from Balntas et al. based on human annotation of triplet judgments: choosing which of the two candidates is more similar to a reference.
Simultaneously matching the best model on classification accuracy and achieving perfect approximation of human similarity might not be possible, but we hypothesize that a good trade-off between the two would benefit decision support. We propose a novel multi-task learning method that combines supervised learning and metric learning. We supplement the standard maximum likelihood objective with a loss function from Balntas et al. Our method learns from human annotations of similarity judgments among data instances in the triplet form.
[ { "text": "Simultaneously matching the best model on classification accuracy and achieving perfect approximation of human similarity might not be possible, but we hypothesize that a good trade-off between the two would benefit decision support." }, { "text": "We propose a novel multi-task learning method that combines supervised learning and metric learning." }, { "text": "We supplement the standard maximum likelihood objective with a loss function from Balntas et al." }, { "text": "based on human annotation of triplet judgments: choosing which of the two candidates is more similar to a reference." } ]
[ { "text": "Simultaneously matching the best model on classification accuracy and achieving perfect approximation of human similarity might not be possible, but we hypothesize that a good trade-off between the two would benefit decision support." }, { "text": "We propose a novel multi-task learning method that combines supervised learning and metric learning." }, { "text": "We supplement the standard maximum likelihood objective with a loss function from Balntas et al." }, { "text": "Our method learns from human annotations of similarity judgments among data instances in the triplet form." } ]
J05LrUaunL
G5IzR7XI7
1
J05LrUaunL.G5IzR7XI7.01
Filtering classification-inconsistent triplets. Human triplets may not always align with the goal of classification, i.e., annotator may pick a candidate from the incorrect class as closer than a candidate from the correct class, which we refer to as classification-inconsistent triplets . As our goal is to generate human-compatible decision-focused representations, such triplets may introduce signals in human visual similarity that is counterproductive to the specific classification problem. Wethus consider a variant of human-compatible representations, where we remove classification-inconsistent triplets in the training data; we refer to this condition as HC-filtered . Although filtering triplets may lose important human similarity information, it may also remove human noise andsteer humancompatible representations towards a more effective representations for classification.
Filtering classification-inconsistent triplets. Human triplets may not always align with classification: triplet annotators may choose the candidate from the incorrect class over the one from the correct class. We refer to these data points as classification-inconsistent triplets . We consider a variant of humancompatible representations where we isolate human intuition that’s compatible with classification and remove these classification-inconsistent triplets from the training set; we refer to this condition as HC-filtered . Filtering is yet another way to strike a balance between human intuition and classification. We leave further details on filtering in the appendix.
[ { "text": "Filtering classification-inconsistent triplets." }, { "text": "Human triplets may not always align with the goal of classification, i.e., annotator may pick a candidate from the incorrect class as closer than a candidate from the correct class, which we refer to as classification-inconsistent triplets ." }, { "text": "As our goal is to generate human-compatible decision-focused representations, such triplets may introduce signals in human visual similarity that is counterproductive to the specific classification problem." }, { "text": "Wethus consider a variant of human-compatible representations, where we remove classification-inconsistent triplets in the training data; we refer to this condition as HC-filtered . Although filtering triplets may lose important human similarity information, it may also remove human noise andsteer humancompatible representations towards a more effective representations for classification. " } ]
[ { "text": "Filtering classification-inconsistent triplets." }, { "text": "Human triplets may not always align with classification: triplet annotators may choose the candidate from the incorrect class over the one from the correct class." }, { "text": "We refer to these data points as classification-inconsistent triplets ." }, { "text": "We consider a variant of humancompatible representations where we isolate human intuition that’s compatible with classification and remove these classification-inconsistent triplets from the training set; we refer to this condition as HC-filtered . Filtering is yet another way to strike a balance between human intuition and classification. We leave further details on filtering in the appendix." } ]
J05LrUaunL
G5IzR7XI7
2
J05LrUaunL.G5IzR7XI7.02
• Head-to-head comparisons (“ H2H ”). To evaluate the quality of justification, we set up head-tohead comparisons between two justifications derived from two representations (denoted by R 1 vs. R 2 ) and examine the following question: given a test instance and two justifications from R 1 and R 2 , which justification (synthetic) human considers more similar. We report the fraction of rounds that R 1 is preferable. In addition to the typical justification for the predicted label, we also examine that for the other class as those examples will be used in decision support. We refer to the nearest example with the predicted label as NI , and nearest example in the other class as NO . • Neutral decision support . Following §2, we retrieve the nearest neighbors from each class. We use the accuracy of (synthetic) human as the measure of effective decision support. • Persuasive decision support . We retrieves the nearest example with the predicted label and the furthest example from the other class. If the representation is aligned with human similarity metric, this approach encourages people to follow the predicted label, which likely leads to over-reliance and may be unethical in practice. Here, we use this scenario as a surrogate to evaluate the quality of the learned representations.
• Head-to-head comparisons (“ H2H ”). To evaluate justification, we set up head-to-head comparisons between two representations ( R 1 vs. R 2 ) and ask: given a test instance and two justifications retrieved by R 1 and R 2 , which justification do humans consider as closer to the test instance? We report the fraction of rounds that R 1 is preferable. In addition to the typical justification for the predicted label, we also examine that for classes other than the predicted class, as those examples will be used in decision support for users to examine the plausibility of each class. We refer to the nearest example in the predicted class as NI , and the nearest example in the other class as NO . • Neutral decision support . Following §2, we retrieve the nearest neighbors from each class. We use the accuracy of humans as the measure of effective decision support. • Persuasive decision support . We retrieve the nearest example with the predicted label and the furthest example from the other class. If the representation is aligned with human similarity metric, this approach encourages people to follow the predicted label, which likely leads to over-reliance and may be unethical in practice. Here, we use this scenario as a surrogate to evaluate the quality of the learned representations.
[ { "text": "• Head-to-head comparisons (“ H2H ”)." }, { "text": "To evaluate the quality of justification, we set up head-tohead comparisons between two justifications derived from two representations (denoted by R 1 vs. R 2 ) and examine the following question: given a test instance and two justifications from R 1 and R 2 , which justification (synthetic) human considers more similar." }, { "text": "We report the fraction of rounds that R 1 is preferable." }, { "text": "In addition to the typical justification for the predicted label, we also examine that for the other class as those examples will be used in decision support." }, { "text": "We refer to the nearest example with the predicted label as NI , and nearest example in the other class as NO . • Neutral decision support ." }, { "text": "Following §2, we retrieve the nearest neighbors from each class." }, { "text": "We use the accuracy of (synthetic) human as the measure of effective decision support." }, { "text": "• Persuasive decision support ." }, { "text": "We retrieves the nearest example with the predicted label and the furthest example from the other class." }, { "text": "If the representation is aligned with human similarity metric, this approach encourages people to follow the predicted label, which likely leads to over-reliance and may be unethical in practice." }, { "text": "Here, we use this scenario as a surrogate to evaluate the quality of the learned representations." } ]
[ { "text": "• Head-to-head comparisons (“ H2H ”)." }, { "text": "To evaluate justification, we set up head-to-head comparisons between two representations ( R 1 vs. R 2 ) and ask: given a test instance and two justifications retrieved by R 1 and R 2 , which justification do humans consider as closer to the test instance?" }, { "text": "We report the fraction of rounds that R 1 is preferable." }, { "text": "In addition to the typical justification for the predicted label, we also examine that for classes other than the predicted class, as those examples will be used in decision support for users to examine the plausibility of each class." }, { "text": "We refer to the nearest example in the predicted class as NI , and the nearest example in the other class as NO . • Neutral decision support ." }, { "text": "Following §2, we retrieve the nearest neighbors from each class." }, { "text": "We use the accuracy of humans as the measure of effective decision support." }, { "text": "• Persuasive decision support ." }, { "text": "We retrieve the nearest example with the predicted label and the furthest example from the other class." }, { "text": "If the representation is aligned with human similarity metric, this approach encourages people to follow the predicted label, which likely leads to over-reliance and may be unethical in practice." }, { "text": "Here, we use this scenario as a surrogate to evaluate the quality of the learned representations." } ]
DC9K9Qc1xC
kBT3WMk5d
0
DC9K9Qc1xC.kBT3WMk5d.00
Decoupling) to do soft decoupling with theoretical guarantees. From this point forward, we assume that all of the parameters θ take the value from Θ ∗ , and omit θ of ˆ r ( · ; θ ) and ˆ o p ( · ; θ ) for ease of notation.
Decoupling) to do soft decoupling with theoretical guarantees. The proofs of the theorems in thissection are provided in Appendix § B. From this point forward, we assume that all of the parameters θ take the value from Θ ∗ , and omit θ of ˆ r ( · ; θ ) and ˆ o p ( · ; θ ) for ease of notation.
[ { "text": "Decoupling) to do soft decoupling with theoretical guarantees." }, { "text": "" }, { "text": "From this point forward, we assume that all of the parameters θ take the value from Θ ∗ , and omit θ of ˆ r ( · ; θ ) and ˆ o p ( · ; θ ) for ease of notation." } ]
[ { "text": "Decoupling) to do soft decoupling with theoretical guarantees." }, { "text": "The proofs of the theorems in thissection are provided in Appendix § B." }, { "text": "From this point forward, we assume that all of the parameters θ take the value from Θ ∗ , and omit θ of ˆ r ( · ; θ ) and ˆ o p ( · ; θ ) for ease of notation." } ]
DC9K9Qc1xC
kBT3WMk5d
1
DC9K9Qc1xC.kBT3WMk5d.01
Theorem 1 and Theorem 2 give the upper bound and lower bound of β , respectively. The proofs of these two theorems are provided in Appendix § B. In practice, we can tune the value of β to achieve the best global effect for soft decoupling. We refer to this method as Lipschitz Decoupling .
Theorem 1 and Theorem 2 give the upper bound and lower bound of β , respectively. In practice, we can tune the value of β to achieve the best effect for soft decoupling. We refer to this method as Lipschitz Decoupling .
[ { "text": "Theorem 1 and Theorem 2 give the upper bound and lower bound of β , respectively." }, { "text": "The proofs of these two theorems are provided in Appendix § B." }, { "text": "In practice, we can tune the value of β to achieve the best global effect for soft decoupling." }, { "text": "We refer to this method as Lipschitz Decoupling ." } ]
[ { "text": "Theorem 1 and Theorem 2 give the upper bound and lower bound of β , respectively." }, { "text": "" }, { "text": "In practice, we can tune the value of β to achieve the best effect for soft decoupling." }, { "text": "We refer to this method as Lipschitz Decoupling ." } ]
DC9K9Qc1xC
kBT3WMk5d
2
DC9K9Qc1xC.kBT3WMk5d.02
For convenience, we denote the expectation E (ˆ o ′ p x ; θ 2 , t )) as the estimation of the observation model after transformation, and Assumption 1 holds based on this expectation. We can prove that by choosing a sufficiently large t , BOM can achieve soft decoupling:
For convenience, we denote the expectation E (ˆ o ′ p x ; θ 2 , t )) as the estimation of the observation model after transformation, and Assumption 1 holds based on this expectation. It’s worth mentioningthat BOM can also achieve soft decoupling, as the following theorem shows:
[ { "text": "For convenience, we denote the expectation E (ˆ o ′ p" }, { "text": "x ; θ 2 , t )) as the estimation of the observation model after transformation, and Assumption 1 holds based on this expectation." }, { "text": "We can prove that by choosing a sufficiently large t , BOM can achieve soft decoupling:" } ]
[ { "text": "For convenience, we denote the expectation E (ˆ o ′ p" }, { "text": "x ; θ 2 , t )) as the estimation of the observation model after transformation, and Assumption 1 holds based on this expectation." }, { "text": "It’s worth mentioningthat BOM can also achieve soft decoupling, as the following theorem shows:" } ]
DC9K9Qc1xC
kBT3WMk5d
3
DC9K9Qc1xC.kBT3WMk5d.03
Data (uses the raw click data to train the ranker directly). Ranking models include DNN and Linear . We adopted the codes in ULTRA framework [5, 6] to implement RegressionEM, DLA, PairDebias and the ranking models, and kept the same hyperparameters. Note that Regression-EM, DLA and PairDebias are all group-level ULTR methods. HTE is individual-level ULTR, but it’s not a jointly learning algorithm, so we compared it in another scene.
Data (uses human-annotated relevance labels to train the ranker directly) and Click Data (uses theraw click data to train the ranker directly). Ranking models include DNN and Linear . We adopted the codes in ULTRA framework [5, 6] to implement RegressionEM, DLA, PairDebias and the ranking models, and kept the same hyperparameters. Note that Regression-EM, DLA and PairDebias are all group-level ULTR methods. HTE is individual-level ULTR, but it’s not a jointly learning algorithm, so we compared it in another scene.
[ { "text": "Data (uses the raw click data to train the ranker directly)." }, { "text": "Ranking models include DNN and Linear ." }, { "text": "We adopted the codes in ULTRA framework [5, 6] to implement RegressionEM, DLA, PairDebias and the ranking models, and kept the same hyperparameters." }, { "text": "Note that Regression-EM, DLA and PairDebias are all group-level ULTR methods." }, { "text": "HTE is individual-level ULTR, but it’s not a jointly learning algorithm, so we compared it in another scene." } ]
[ { "text": "Data (uses human-annotated relevance labels to train the ranker directly) and Click Data (uses theraw click data to train the ranker directly)." }, { "text": "Ranking models include DNN and Linear ." }, { "text": "We adopted the codes in ULTRA framework [5, 6] to implement RegressionEM, DLA, PairDebias and the ranking models, and kept the same hyperparameters." }, { "text": "Note that Regression-EM, DLA and PairDebias are all group-level ULTR methods." }, { "text": "HTE is individual-level ULTR, but it’s not a jointly learning algorithm, so we compared it in another scene." } ]
OfMrTfgiHV
VOSRdJqTB
0
OfMrTfgiHV.VOSRdJqTB.00
Learning directed causal relationships in temporal / spatial data is feasible as time and space both induce asymmetric dependencies. In the case of time-series data, a feature in the future cannot have effect on past values of other features. Fordata with spatial structure, a similar definition of causal dependency can be established based on the irreversibility of the spatial diffusion operator.
Learning directed causal relationships in temporal / spatial data is feasible as time and space both induce asymmetric dependencies. In the case of time-series data, a feature in the future cannot have effect on past values of other features. For spatial data, a similar definition of causal dependency can be established (Herrera Gómez et al., 2014).
[ { "text": "Learning directed causal relationships in temporal / spatial data is feasible as time and space both induce asymmetric dependencies." }, { "text": "In the case of time-series data, a feature in the future cannot have effect on past values of other features." }, { "text": "Fordata with spatial structure, a similar definition of causal dependency can be established based on the irreversibility of the spatial diffusion operator." } ]
[ { "text": "Learning directed causal relationships in temporal / spatial data is feasible as time and space both induce asymmetric dependencies." }, { "text": "In the case of time-series data, a feature in the future cannot have effect on past values of other features." }, { "text": "For spatial data, a similar definition of causal dependency can be established (Herrera Gómez et al., 2014)." } ]
OfMrTfgiHV
VOSRdJqTB
1
OfMrTfgiHV.VOSRdJqTB.01
• Although linear methods (LINGAM, linear Granger causality) have succeeded in various settings and can potentially scale to high feature numbers, these methods may completely fail when the feature dependency in data is highly complex and nonlinear. • As the number of conditional independence generally scales exponentially or at least polynomially with the feature size, applying CI-test based methods in data with high feature dimensionality is not realistic. Meanwhile, Granger-causality based methods build prediction models with the number equal to feature size. Therefore the time complexity of solving the stacked model is of polynomial level with respect to the feature size. • Edge sparsity isusually assumed by previous methods to maximize interpretability of the identified causal graph. However, In biological data, there exists a large proportion of nuisance features (e g. constitutively expressed genes), therefore the feature sparsity and edge sparsity need to be both included by an ideal model. This is of particular interest as false discoveryrate (FDR) control is favorable in biological data analysis. • While a large number of methods are designed for causal discovery in time-series data, only a limited number of present works aim for causal discovery in general graph-structured data. Time-series based methods cannot be directly adopted on data with multi-branch trajectory dynamics or spatial structures.
• Although linear methods (LINGAM, linear Granger causality) have succeeded in various settings and can potentially scale to high feature numbers, these methods may completely fail when the feature dependency in data is highly complex and nonlinear. • As the number of conditional independencies generally scales exponentially or at least polynomially with the feature size, applying causal discovery methods which are based on CI tests to high-dimensional data is not realistic. Distinctively, Granger-causality based methods are built with a prediction model for each feature in the data. The time complexity of solving the stacked prediction model for all features is of polynomial level with respect to the feature size. • In previous methods, the number of causal edges between features is assumed to be sparse (edge sparsity) to maximize interpretability of the identified causal graph. However, in biological data, there exists a large proportion of nuisance features. Also, one functional gene may activate a large number of downstream genes in neighboring cells. Sparsifying the number of interacting features (feature sparsity) has the potential to improve causal discovery in biological systems, which remains to be explored. • While a large number of methods are designed for causal discovery in time-series data, only a limited number of present works aim for causal discovery in general graph-structured data. Time-series based methods cannot be directly adopted on data with multi-branch trajectory dynamics or spatial structures.
[ { "text": "• Although linear methods (LINGAM, linear Granger causality) have succeeded in various settings and can potentially scale to high feature numbers, these methods may completely fail when the feature dependency in data is highly complex and nonlinear." }, { "text": "• As the number of conditional independence generally scales exponentially or at least polynomially with the feature size, applying CI-test based methods in data with high feature dimensionality is not realistic." }, { "text": "Meanwhile, Granger-causality based methods build prediction models with the number equal to feature size." }, { "text": "Therefore the time complexity of solving the stacked model is of polynomial level with respect to the feature size." }, { "text": "• Edge sparsity isusually assumed by previous methods to maximize interpretability of the identified causal graph." }, { "text": "However, In biological data, there exists a large proportion of nuisance features (e" }, { "text": "g. constitutively expressed genes), therefore the feature sparsity and edge sparsity need to be both included by an ideal model." }, { "text": "This is of particular interest as false discoveryrate (FDR) control is favorable in biological data analysis." }, { "text": "• While a large number of methods are designed for causal discovery in time-series data, only a limited number of present works aim for causal discovery in general graph-structured data." }, { "text": "Time-series based methods cannot be directly adopted on data with multi-branch trajectory dynamics or spatial structures." } ]
[ { "text": "• Although linear methods (LINGAM, linear Granger causality) have succeeded in various settings and can potentially scale to high feature numbers, these methods may completely fail when the feature dependency in data is highly complex and nonlinear." }, { "text": "• As the number of conditional independencies generally scales exponentially or at least polynomially with the feature size, applying causal discovery methods which are based on CI tests to high-dimensional data is not realistic." }, { "text": "Distinctively, Granger-causality based methods are built with a prediction model for each feature in the data." }, { "text": "The time complexity of solving the stacked prediction model for all features is of polynomial level with respect to the feature size." }, { "text": "• In previous methods, the number of causal edges between features is assumed to be sparse (edge sparsity) to maximize interpretability of the identified causal graph." }, { "text": "However, in biological data, there exists a large proportion of nuisance features." }, { "text": "Also, one functional gene may activate a large number of downstream genes in neighboring cells." }, { "text": "Sparsifying the number of interacting features (feature sparsity) has the potential to improve causal discovery in biological systems, which remains to be explored." }, { "text": "• While a large number of methods are designed for causal discovery in time-series data, only a limited number of present works aim for causal discovery in general graph-structured data." }, { "text": "Time-series based methods cannot be directly adopted on data with multi-branch trajectory dynamics or spatial structures." } ]
OfMrTfgiHV
VOSRdJqTB
2
OfMrTfgiHV.VOSRdJqTB.02
Our contribution. In this work, we present GEASS (Granger fEAture Selection of Spatiotemporal data), which identifies causally interacting features of high dimensional temporal / spatial data by a single neural network. GEASS considers sparsity of underlying causal mechanisms instead of link sparsity, thus selects most significant corresponding features for downstream causal discovery. Our contributions are three-folds.
Our contribution. In this work, we present GEASS (Granger fEAture Selection of Spatiotemporal data), which identifies causally interacting features of high dimensional temporal / spatial data by a single neural network. GEASS considers the aforementioned feature sparsity instead of edge sparsity, thus selects most significant interacting features for downstream causal discovery. Our contributions are three-folds.
[ { "text": "Our contribution." }, { "text": "In this work, we present GEASS (Granger fEAture Selection of Spatiotemporal data), which identifies causally interacting features of high dimensional temporal / spatial data by a single neural network." }, { "text": "GEASS considers sparsity of underlying causal mechanisms instead of link sparsity, thus selects most significant corresponding features for downstream causal discovery." }, { "text": "Our contributions are three-folds." } ]
[ { "text": "Our contribution." }, { "text": "In this work, we present GEASS (Granger fEAture Selection of Spatiotemporal data), which identifies causally interacting features of high dimensional temporal / spatial data by a single neural network." }, { "text": "GEASS considers the aforementioned feature sparsity instead of edge sparsity, thus selects most significant interacting features for downstream causal discovery." }, { "text": "Our contributions are three-folds." } ]
OfMrTfgiHV
VOSRdJqTB
3
OfMrTfgiHV.VOSRdJqTB.03
Let ( S ∗ 1 , S ∗ 2 ) be one of the maximizers with the smallest size of | S 1 ∪ S 2 | , and denote S ∗ := S ∗ 1 ∪ S ∗(note ( S ∗ 1 , S ∗ 2 ) may not be unique). Under some mild assumptions, we are able to provide the theoretical justification for mTE maximization in the generalized setting of graph-structured data (Theorem 2.4). A proof can be seen in Appendix A.3.
Let ( S ∗ 1 , S ∗ 2 ) be one of the maximizers with the smallest size of | S 1 ∪ S 2 | , and denote S ∗ := S ∗ 1 ∪ S ∗(note ( S ∗ 1 , S ∗ 2 ) may not be unique). Under some mild assumptions listed below, we are able to provide the theoretical justification for mTE maximization in the time-series setting (Theorem 2.4). A proof can be seen in Appendix A.3.
[ { "text": "Let ( S ∗ 1 , S ∗ 2 ) be one of the maximizers with the smallest size of | S 1 ∪ S 2 | , and denote S ∗ := S ∗ 1 ∪ S ∗(note ( S ∗ 1 , S ∗ 2 ) may not be unique)." }, { "text": "Under some mild assumptions, we are able to provide the theoretical justification for mTE maximization in the generalized setting of graph-structured data (Theorem 2.4)." }, { "text": "A proof can be seen in Appendix A.3." } ]
[ { "text": "Let ( S ∗ 1 , S ∗ 2 ) be one of the maximizers with the smallest size of | S 1 ∪ S 2 | , and denote S ∗ := S ∗ 1 ∪ S ∗(note ( S ∗ 1 , S ∗ 2 ) may not be unique)." }, { "text": "Under some mild assumptions listed below, we are able to provide the theoretical justification for mTE maximization in the time-series setting (Theorem 2.4)." }, { "text": "A proof can be seen in Appendix A.3." } ]
OfMrTfgiHV
VOSRdJqTB
4
OfMrTfgiHV.VOSRdJqTB.04
W is defined as the graph diffusion operator: W x i = x N ( i ) . In our construction, T 1 controls the sparsity of feature selection, while T 2 corresponds to overlapping level between S 1 and S 2 . Denoting the Gaussian error function as erf() , the regularization term for the first layer is of form:
W is defined as the graph diffusion operator: W x i = x N ( i ) . In our construction, T 1 controls the sparsity of feature selection, while T 2 controls the expectation of overlap between ˜ X S 1 and ˜ X S 2 . Denoting the Gaussian error function as erf() , the regularization term for the first layer is of form:
[ { "text": "W is defined as the graph diffusion operator:" }, { "text": "W x i =" }, { "text": "x N ( i ) ." }, { "text": "In our construction, T 1 controls the sparsity of feature selection, while T 2 corresponds to overlapping level between S 1 and S 2 ." }, { "text": "Denoting the Gaussian error function as erf() , the regularization term for the first layer is of form:" } ]
[ { "text": "W is defined as the graph diffusion operator:" }, { "text": "W x i =" }, { "text": "x N ( i ) ." }, { "text": "In our construction, T 1 controls the sparsity of feature selection, while T 2 controls the expectation of overlap between ˜ X S 1 and ˜ X S 2 ." }, { "text": "Denoting the Gaussian error function as erf() , the regularization term for the first layer is of form:" } ]
OfMrTfgiHV
VOSRdJqTB
5
OfMrTfgiHV.VOSRdJqTB.05
Upon the algorithm convergence, GEASS provides both outputs of active features and embeddings produced by causally interacting features. In this paper, we emphasize the use of the former as the latter embedding output may be complex and nonlinear, potentially requiring additional architectures to maximize its interpretability.
Upon the algorithm convergence, GEASS provides both outputs of active features ( B 0 ∪ B 1 ) and embeddings ( f, g, h ) produced by causally interacting features. In this paper, we emphasize the use of the identified interacting features B 0 ∪ B 1 . The output of embeddings ( f, g, h ) may be complex and nonlinear, potentially requiring additional architectures to maximize its interpretability.
[ { "text": "Upon the algorithm convergence, GEASS provides both outputs of active features and embeddings produced by causally interacting features." }, { "text": "In this paper, we emphasize the use of the former as the latter embedding output may be complex and nonlinear, potentially requiring additional architectures to maximize its interpretability." } ]
[ { "text": "Upon the algorithm convergence, GEASS provides both outputs of active features ( B 0 ∪ B 1 ) and embeddings ( f, g, h ) produced by causally interacting features." }, { "text": "In this paper, we emphasize the use of the identified interacting features B 0 ∪ B 1 . The output of embeddings ( f, g, h ) may be complex and nonlinear, potentially requiring additional architectures to maximize its interpretability." } ]
OfMrTfgiHV
VOSRdJqTB
6
OfMrTfgiHV.VOSRdJqTB.06
By the construction of GEASS, we are able to get two separate sparse feature subsets as source features and sink features. These features may be used as inputs to further proper causal analysis, such as LPCMCI (Gerhardus and Runge, 2020), which despite its statistical power in depicting possible lags, identifying latent confounders, and allowing nonlinear tests, can only work on data with intermediate feature sizes. Also, these feature themselves may be used in downstream unsupervised analysis to improve generalizations of prediction models.
By the construction of GEASS, we are able to get two separate sparse feature subsets as source features B 1 and sink features B 0 . These features may be used as inputs to further proper causal analysis, such as LPCMCI (Gerhardus and Runge, 2020) for time-series data, which despite its statistical power in depicting possible lags, identifying latent confounders, and allowing nonlinear tests, can only work on data with moderate feature sizes. Also, these features may be used in other machine learning models for improved model interpretability.
[ { "text": "By the construction of GEASS, we are able to get two separate sparse feature subsets as source features and sink features." }, { "text": "These features may be used as inputs to further proper causal analysis, such as LPCMCI (Gerhardus and Runge, 2020), which despite its statistical power in depicting possible lags, identifying latent confounders, and allowing nonlinear tests, can only work on data with intermediate feature sizes." }, { "text": "Also, these feature themselves may be used in downstream unsupervised analysis to improve generalizations of prediction models." } ]
[ { "text": "By the construction of GEASS, we are able to get two separate sparse feature subsets as source features B 1 and sink features B 0 ." }, { "text": "These features may be used as inputs to further proper causal analysis, such as LPCMCI (Gerhardus and Runge, 2020) for time-series data, which despite its statistical power in depicting possible lags, identifying latent confounders, and allowing nonlinear tests, can only work on data with moderate feature sizes." }, { "text": "Also, these features may be used in other machine learning models for improved model interpretability." } ]
OfMrTfgiHV
VOSRdJqTB
7
OfMrTfgiHV.VOSRdJqTB.07
GEASS identifies 50 causally-related features with high biological relevance. For example, the gene list includes the key transcriptional regulator Neurog3 , which is required for the specification of a common precursor of the 4 pancreatic terminal states (uni, 2021). For further quantitative validation, we take advantage of the additional information of RNA velocity, as significant RNA velocity is anecessary condition for causal gene dynamics along the trajectory. Therefore, we benchmark different gene selection methods’ performance on the mean velocity likelihood over selected genes. Our result suggests here GEASS achieves the best performance in selecting dynamical genes, compared with alternative gene selection schemes including high-expressed genes (HEG), highly-variable genes (HVG), and genes having high correlation with inferred latent time (HCG) (Figure 4).
GEASS identifies 50 causally-related features with high biological relevance. For example, the gene list includes the key transcriptional regulator NEUROG3 , which is required for the specification of a common precursor of the 4 pancreatic terminal states (uni, 2021). As the ground truth causal interactions here are unknown, for further quantitative validation, we assume the underlying biological process is driven by a causal cascade of gene interactions, meaning target genes activated in earlier phases of the trajectory further cause downstream gene activation at later phases. In this case, the higher a gene velocity is, the more likely the gene is associated with causal gene-gene relationships. Our benchmarking result here suggests GEASS achieves the best performance in selecting genes with high mean velocity likelihood, compared with alternative gene selection schemes with fixed gene number (50) including high-expressed genes (HEG), highly-variable genes (HVG), and genes having high correlation with inferred latent time (HCG) (Figure 4). Mean RNA velocity likelihood HEG 0. HVG 0. HCG 0.
[ { "text": "GEASS identifies 50 causally-related features with high biological relevance." }, { "text": "For example, the gene list includes the key transcriptional regulator Neurog3 , which is required for the specification of a common precursor of the 4 pancreatic terminal states (uni, 2021)." }, { "text": "For further quantitative validation, we take advantage of the additional information of RNA velocity, as significant RNA velocity is anecessary condition for causal gene dynamics along the trajectory." }, { "text": "Therefore, we benchmark different gene selection methods’ performance on the mean velocity likelihood over selected genes. Our result suggests here GEASS achieves the best performance in selecting dynamical genes, compared with alternative gene selection schemes including high-expressed genes (HEG), highly-variable genes (HVG), and " }, { "text": "genes having high correlation with inferred latent time (HCG) (Figure 4)." } ]
[ { "text": "GEASS identifies 50 causally-related features with high biological relevance." }, { "text": "For example, the gene list includes the key transcriptional regulator NEUROG3 , which is required for the specification of a common precursor of the 4 pancreatic terminal states (uni, 2021)." }, { "text": "As the ground truth causal interactions here are unknown, for further quantitative validation, we assume the underlying biological process is driven by a causal cascade of gene interactions, meaning target genes activated in earlier phases of the trajectory further cause downstream gene activation at later phases. In this case, the higher a gene velocity is, the more likely the gene is associated with causal gene-gene relationships." }, { "text": "Our benchmarking result here suggests GEASS achieves the best performance in selecting genes with high mean velocity likelihood, compared with alternative gene selection schemes with fixed gene number (50) including high-expressed genes (HEG), highly-variable genes (HVG), and genes having high correlation with inferred latent time (HCG) (Figure 4)." }, { "text": "Mean RNA velocity likelihood HEG 0. HVG 0. HCG 0." } ]
zVTBZNFW
SS6Ju2-8
0
zVTBZNFW.SS6Ju2-8.00
• We propose an effective and efficient algorithm for verifying the robustness of Transformers with self-attention layers. To our best knowledge, this is the first method for verifying Transformers. • We resolve key challenges in verifying Transformers, including cross-nonlinearity and cross-position dependency. Our bounds are significantly tighter than those by adapting Interval Bound Propagation (IBP) (Mirman et al., 2018; Gowal et al., 2018). • We quantitatively and qualitatively show that the certified lower bounds consistently reflect the importance of input words in sentiment analysis, which justifies that the computed bounds are meaningful in practice.
• We propose an effective and efficient algorithm for verifying the robustness of Transformers with self-attention layers. To our best knowledge, this is the first method for verifying Transformers. • We resolve key challenges in verifying Transformers, including cross-nonlinearity and cross-position dependency. Our bounds are significantly tighter than those by adapting Interval Bound Propagation (IBP) (Mirman et al., 2018; Gowal et al., 2018). • We quantitatively and qualitatively show that the certified bounds computed by our algorithm consistently reflect the importance of input words in sentiment analysis, which justifies that these bounds are meaningful in practice and they shed light on interpreting Transformers.
[ { "text": "• We propose an effective and efficient algorithm for verifying the robustness of Transformers with self-attention layers." }, { "text": "To our best knowledge, this is the first method for verifying Transformers." }, { "text": "• We resolve key challenges in verifying Transformers, including cross-nonlinearity and cross-position dependency." }, { "text": "Our bounds are significantly tighter than those by adapting Interval Bound Propagation (IBP) (Mirman et al., 2018; Gowal et al., 2018)." }, { "text": "• We quantitatively and qualitatively show that the certified lower bounds consistently reflect the importance of input words in sentiment analysis, which justifies that the computed bounds are meaningful in practice." } ]
[ { "text": "• We propose an effective and efficient algorithm for verifying the robustness of Transformers with self-attention layers." }, { "text": "To our best knowledge, this is the first method for verifying Transformers." }, { "text": "• We resolve key challenges in verifying Transformers, including cross-nonlinearity and cross-position dependency." }, { "text": "Our bounds are significantly tighter than those by adapting Interval Bound Propagation (IBP) (Mirman et al., 2018; Gowal et al., 2018)." }, { "text": "• We quantitatively and qualitatively show that the certified bounds computed by our algorithm consistently reflect the importance of input words in sentiment analysis, which justifies that these bounds are meaningful in practice and they shed light on interpreting Transformers." } ]
ExEV2Ndm5V
8bRDS2G2z2
0
ExEV2Ndm5V.8bRDS2G2z2.00
Figure 2 illustrates (6) with complementary checkerboard masks J c and J , where f and g use almost equal amount of information, and its variantthat computesthe MSE only on J ∈ J . The variant can be used fordenoiser learning with | J c | (cid:29) | J | , where f uses much more information than g. Our arguments based on Theorem 2 also apply to this variant, using the second result in Theorem 2.
Figure 2 illustrates (7) with complementary checkerboard masks J c and J , where f and g use almost equal amount of information, and its variant, where g uses much less information than f . The variant computes MSE only on J ∈ J ; in this setup, it is challenging for g to predict the entire image.
[ { "text": "Figure 2 illustrates (6) with complementary checkerboard masks J c and J , where f and g use almost equal amount of information, and its variantthat computesthe MSE only on J ∈ J . The variant can be used fordenoiser learning with | J c | (cid:29) | J | , where f uses much more information than g. Our arguments based on Theorem 2 also apply to this variant, using the second result in Theorem 2." } ]
[ { "text": "Figure 2 illustrates (7) with complementary checkerboard masks J c and J , where f and g use almost equal amount of information, and its variant, where g uses much less information than f . The variant computes MSE only on J ∈ J ; in this setup, it is challenging for g to predict the entire image." } ]
ExEV2Ndm5V
8bRDS2G2z2
1
ExEV2Ndm5V.8bRDS2G2z2.01
We evaluated proposed SSRL in two extreme imaging applications in Section 3 with both simulated and intrinsically noisy datasets. For these applications, we compared the performances of the following self-supervised denoising methods: Noise2Self (Batson & Royer, 2019), Noise2Noise-motivated methods that emulate pairs of two independent noisy images – Neighbor2Neighbor (Huang et al., 2021) or Noise2Inverse (Hendriksen et al., 2020) – Noise2Same (Xie et al., 2020), and corresponding SSRL to each aforementioned method. We also included the Noise2True (1) results as baseline.
We evaluated proposed SSRL in two practical imaging applications in Section 3 with both simulated and real-world datasets. For these applications, we mainly focuses on comparisons with selfsupervised denoising methods using single noisy input samples, particularly when statistical noise parameters are unavailable. We compared the performances of the following methods: Noise2Self (Batson & Royer, 2019), Noise2Noise-motivated methods that emulate pairs of two independent noisy images – Neighbor2Neighbor (Huang et al., 2021) or Noise2Inverse (Hendriksen et al., 2020) – Noise2Same (Xie et al., 2020), and corresponding SSRL to each aforementioned method. We also included Noise2True results as baseline.
[ { "text": "We evaluated proposed SSRL in two extreme imaging applications in Section 3 with both simulated and intrinsically noisy datasets." }, { "text": "For these applications, we compared the performances of the following self-supervised denoising methods: Noise2Self (Batson & Royer, 2019), Noise2Noise-motivated methods that emulate pairs of two independent noisy images – Neighbor2Neighbor (Huang et al., 2021) or Noise2Inverse (Hendriksen et al., 2020) –" }, { "text": " Noise2Same (Xie et al., 2020), and corresponding SSRL to each aforementioned method." }, { "text": "We also included the Noise2True (1) results as baseline." } ]
[ { "text": "We evaluated proposed SSRL in two practical imaging applications in Section 3 with both simulated and real-world datasets." }, { "text": "For these applications, we mainly focuses on comparisons with selfsupervised denoising methods using single noisy input samples, particularly when statistical noise parameters are unavailable. We compared the performances of the following methods: Noise2Self (Batson & Royer, 2019), Noise2Noise-motivated methods that emulate pairs of two independent noisy images – Neighbor2Neighbor (Huang et al., 2021) or Noise2Inverse (Hendriksen et al., 2020)" }, { "text": "– Noise2Same (Xie et al., 2020), and corresponding SSRL to each aforementioned method." }, { "text": "We also included Noise2True results as baseline." } ]
ExEV2Ndm5V
8bRDS2G2z2
2
ExEV2Ndm5V.8bRDS2G2z2.02
The ImageNet ILSVRC 2012 Val and BSD 300 datasets have the Custom license (research, non-commercial), the Set 5 data has the Unknown license, and the SIDD sRGB Data (Abdelhamed et al., 2018) has the MIT license. We obtained The 2016 Low Dose CT Grand Challenge data (McCollough, 2016) by signing a data sharing agreement, and Low Dose CT Image and Projection Data (Moen et al., 2021) from https://doi.org/10.7937/9npb-2637 .
The ImageNet ILSVRC 2012 Val and BSD 300 datasets have the Custom license (research, non-commercial), the Set 5 data has the Unknown license, and the SIDD sRGB Data (Abdelhamed et al., 2018) has the MIT license. We obtained The 2016 Low Dose CT Grand Challenge data (McCollough, 2016) from https://aapm.app.box.com/s/ eaw4jddb53keg1bptavvvd1sf4x3pe9h/file/856956352254 , and Low Dose CT Image and Projection Data (Moen et al., 2021) from https://doi.org/10.7937/9npb-2637 .
[ { "text": "The ImageNet ILSVRC 2012 Val and BSD 300 datasets have the Custom license (research, non-commercial), the Set 5 data has the Unknown license, and the SIDD sRGB Data (Abdelhamed et al., 2018) has the MIT license." }, { "text": "We obtained The 2016 Low Dose CT Grand Challenge data (McCollough, 2016) by signing a data sharing agreement, and Low Dose CT Image and Projection Data (Moen et al., 2021) from https://doi.org/10.7937/9npb-2637 ." } ]
[ { "text": "The ImageNet ILSVRC 2012 Val and BSD 300 datasets have the Custom license (research, non-commercial), the Set 5 data has the Unknown license, and the SIDD sRGB Data (Abdelhamed et al., 2018) has the MIT license." }, { "text": "We obtained The 2016 Low Dose CT Grand Challenge data (McCollough, 2016) from https://aapm.app.box.com/s/ eaw4jddb53keg1bptavvvd1sf4x3pe9h/file/856956352254 , and Low Dose CT Image and Projection Data (Moen et al., 2021) from https://doi.org/10.7937/9npb-2637 ." } ]
B5q46KKc3U
1vmYbZUb8
0
B5q46KKc3U.1vmYbZUb8.00
Cross-modal learning is widely studied in many areas, including cross-modal retrieval [60, 56, 25],transfer learning [40, 37], and domain adaptation [45, 29], in which different modalities/domains come from different distributions. The main challenge of cross-modal learning is to use given vision-and-language pairs to learn common representations shared between modalities. Most existing works of text-video retrieval [6, 24, 8, 7, 21] map text and video to the same latent space, where the similarity between them can be directly calculated [15, 20, 18, 4, 51, 14, 53, 54, 5, 12, 58, 55, 41, 23].
Cross-modal learning is widely studied in many areas, including cross-modal retrieval [77, 68, 24], transfer learning [51, 46], domain adaptation [57, 34], and captioning [39] in which different modalities/domains come from different distributions. The main challenge of cross-modal learning is to use given vision-and-language pairs to learn common representations shared between modalities [38]. Most existing works of text-video retrieval [9, 23, 22] map text and video to the same latent space, where the similarity between them can be directly calculated [20, 6, 64, 17, 66, 67, 7, 15, 74, 53].
[ { "text": "Cross-modal learning is widely studied in many areas, including cross-modal retrieval [60, 56, 25],transfer learning [40, 37], and domain adaptation [45, 29], in which different modalities/domains come from different distributions." }, { "text": "The main challenge of cross-modal learning is to use given vision-and-language pairs to learn common representations shared between modalities." }, { "text": "Most existing works of text-video retrieval [6, 24, 8, 7, 21] map text and video to the same latent space, where the similarity between them can be directly calculated [15, 20, 18, 4, 51, 14, 53, 54, 5, 12, 58, 55, 41, 23]." } ]
[ { "text": "Cross-modal learning is widely studied in many areas, including cross-modal retrieval [77, 68, 24], transfer learning [51, 46], domain adaptation [57, 34], and captioning [39] in which different modalities/domains come from different distributions." }, { "text": "The main challenge of cross-modal learning is to use given vision-and-language pairs to learn common representations shared between modalities [38]." }, { "text": "Most existing works of text-video retrieval [9, 23, 22] map text and video to the same latent space, where the similarity between them can be directly calculated [20, 6, 64, 17, 66, 67, 7, 15, 74, 53]." } ]
B5q46KKc3U
1vmYbZUb8
1
B5q46KKc3U.1vmYbZUb8.01
Since the input and output dimensions of the EMCL module are the same, it is model-agnostic and can be applied to features extracted from any language and video encoders. Therefore, we further equip our EMCL with three strong baseline models, i.e., MMT[19], CLIP4Clip [32], DCR [50], and evaluate the performance of these models on the MSR-VTT datasets. Specifically, we insert the EMCL module at the end of the video-text encoders. Table 5shows that our EMCL can be applied to successfully boost all baselines either as a jointly traininglayer (Table 5(a)) or an out-of-the-box inference module with no extra training (Table 5(b)). Thesignificant improvements demonstrate the generalization ability of EMCL. We perform the analysis on the Text->Video task. B is the sample size. D is the dimension of the original feature. K is the number of subspaces.
Since the input and output dimensions of the EMCL module are the same, it is model-agnostic and can be applied to features extracted from any language and video encoders. Therefore, we further equip our EMCL with three strong baseline models, i.e., MMT [21], CLIP4Clip [42], DCR [63], and evaluate the performance of these models on the MSR-VTT datasets. Specifically, we insert the EMCL module at the end of the video-text encoders. Tableshows that our EMCL can be applied to successfully boost all baselines either as a jointly training layer or an out-of-the-box inference module with no extra training. Overall, our approach can boost the baselines with the most significant improvement up to 3.5% and 4.2% for text-to-video task and video-to-text task in terms of R@1 metric, respectively.
[ { "text": "Since the input and output dimensions of the EMCL module are the same, it is model-agnostic and can be applied to features extracted from any language and video encoders." }, { "text": "Therefore, we further equip our EMCL with three strong baseline models, i.e., MMT[19], CLIP4Clip [32], DCR [50], and evaluate the performance of these models on the MSR-VTT datasets." }, { "text": "Specifically, we insert the EMCL module at the end of the video-text encoders." }, { "text": "Table 5shows that our EMCL can be applied to successfully boost all baselines either as a jointly traininglayer (Table 5(a)) or an out-of-the-box inference module with no extra training (Table 5(b)). Thesignificant improvements demonstrate the generalization ability of EMCL." }, { "text": "We perform the analysis on the Text->Video task. B is the sample size. D is the dimension of the original feature. K is the number of subspaces." } ]
[ { "text": "Since the input and output dimensions of the EMCL module are the same, it is model-agnostic and can be applied to features extracted from any language and video encoders." }, { "text": "Therefore, we further equip our EMCL with three strong baseline models, i.e., MMT [21], CLIP4Clip [42], DCR [63], and evaluate the performance of these models on the MSR-VTT datasets." }, { "text": "Specifically, we insert the EMCL module at the end of the video-text encoders." }, { "text": "Tableshows that our EMCL can be applied to successfully boost all baselines either as a jointly training layer or an out-of-the-box inference module with no extra training." }, { "text": "Overall, our approach can boost the baselines with the most significant improvement up to 3.5% and 4.2% for text-to-video task and video-to-text task in terms of R@1 metric, respectively." } ]
ZR42POFYw
McI__7Y94K
0
ZR42POFYw.McI__7Y94K.00
Eq. (5) captures the idea that CL algorithms should perform consistently across arbitrary schedules over the same dataset D .
Eq. (5) captures the idea that CL algorithms should perform identically for all data schedules. In practice, however, inherent randomness of learning algorithms often leads to similar but not identical performance. We thus further introduce schedule-robustness as a relaxation of schedule-invariance:
[ { "text": "Eq." }, { "text": "(5) captures the idea that CL algorithms should perform consistently across arbitrary schedules over the same dataset D ." } ]
[ { "text": "Eq." }, { "text": "(5) captures the idea that CL algorithms should perform identically for all data schedules. In practice, however, inherent randomness of learning algorithms often leads to similar but not identical performance. We thus further introduce schedule-robustness as a relaxation of schedule-invariance:" } ]
ZR42POFYw
McI__7Y94K
1
ZR42POFYw.McI__7Y94K.01
This process aims to keep each M t balanced class-wise, hence the final M T attains a similar class distribution after D is fully observed, irrespective of the specific schedule (we refer to Appendix B. for empirical evidence in support of this observation). In particular, when we present one entire class at a time (namely no X y is split across multiple batches) such as in few shot CL (Antoniou et al., 2020), exemplar buffering is invariant to the order in which classes are presented . Thus, adapting f T with M T yields a schedule-robust f ∗ . Below we consider two adaptation strategies.
This process aims to keep each M t balanced class-wise, hence the final M T attains a similar class distribution after D is fully observed, irrespective of the specific schedule. In particular, it is easy to show that M T is schedule-invariant if the data schedule present one entire class at a time (namely no X y is split across multiple batches), since the replay exemplars selected for each class is deterministic for such schedules. More generally, adapting f T with M T yields a schedule-robust f ∗ . We will further discuss schedule-robustness with respect to smaller batch sizes in Appendix B.4. Below we consider two adaptation strategies.
[ { "text": "This process aims to keep each M t balanced class-wise, hence the final M T attains a similar class distribution after D is fully observed, irrespective of the specific schedule (we refer to Appendix B. for empirical evidence in support of this observation)." }, { "text": "In particular, when we present one entire class at a time (namely no X y is split across multiple batches) such as in few shot CL (Antoniou et al., 2020), exemplar buffering is invariant to the order in which classes are presented ." }, { "text": "Thus, adapting f T with M T yields a schedule-robust f ∗ ." }, { "text": "" }, { "text": "Below we consider two adaptation strategies." } ]
[ { "text": "This process aims to keep each M t balanced class-wise, hence the final M T attains a similar class distribution after D is fully observed, irrespective of the specific schedule." }, { "text": "In particular, it is easy to show that M T is schedule-invariant if the data schedule present one entire class at a time (namely no X y is split across multiple batches), since the replay exemplars selected for each class is deterministic for such schedules." }, { "text": "More generally, adapting f T with M T yields a schedule-robust f ∗ ." }, { "text": "We will further discuss schedule-robustness with respect to smaller batch sizes in Appendix B.4." }, { "text": "Below we consider two adaptation strategies." } ]
ZR42POFYw
McI__7Y94K
2
ZR42POFYw.McI__7Y94K.02
SCROLL is schedule-robust by design and empirically outperforms existing methods across all evaluated schedules, even those for which they were originally optimized. We now focus on two key aspects that emerged from our analysis, which we believe might provide useful insights into future work on CL:
SCROLL is schedule-robust by design and outperforms existing methods across all evaluated schedules. We now focus on two key aspects that emerged from this work, which we believe provide useful insights for CL:
[ { "text": "SCROLL is schedule-robust by design and empirically outperforms existing methods across all evaluated schedules, even those for which they were originally optimized." }, { "text": "We now focus on two key aspects that emerged from our analysis, which we believe might provide useful insights into future work on CL:" } ]
[ { "text": "SCROLL is schedule-robust by design and outperforms existing methods across all evaluated schedules." }, { "text": "We now focus on two key aspects that emerged from this work, which we believe provide useful insights for CL:" } ]
ZR42POFYw
McI__7Y94K
3
ZR42POFYw.McI__7Y94K.03
• In Appendix A we comment on the schedule-robustness property of Ridge Regression; • In Appendix B we present additional empirical investigation on the behavior of SCROLL and its main components; • In Appendix C we report the implementation details of SCROLL used in our experiments; • Finally, in Appendix D we provide more information on model tuning and baseline methods.
• In Appendix A we comment on the schedule-robustness property of Ridge Regression; • In Appendix B we present additional empirical investigation on the behavior of SCROLL and its main components; • In Appendix C we report the implementation details of SCROLL used in our experiments; • In Appendix D we provide more information on model tuning and baseline methods. • In Appendix E we discuss in more detail the definition of schedule-robustness introduced in Sec. 2.2 providing a more formal definition of this notion.
[ { "text": "• In Appendix A we comment on the schedule-robustness property of Ridge Regression; •" }, { "text": "In Appendix B we present additional empirical investigation on the behavior of SCROLL and its main components; •" }, { "text": "In Appendix C we report the implementation details of SCROLL used in our experiments; • Finally, in Appendix D we provide more information on model tuning and baseline methods. " } ]
[ { "text": "• In Appendix A we comment on the schedule-robustness property of Ridge Regression; •" }, { "text": "In Appendix B we present additional empirical investigation on the behavior of SCROLL and its main components; •" }, { "text": "In Appendix C we report the implementation details of SCROLL used in our experiments; • In Appendix D we provide more information on model tuning and baseline methods. • In Appendix E we discuss in more detail the definition of schedule-robustness introduced in Sec. 2.2 providing a more formal definition of this notion." } ]
ZR42POFYw
McI__7Y94K
4
ZR42POFYw.McI__7Y94K.04
From (14), it is clear that both X ⊤ X and X ⊤ Y are invariant to the ordering of samples in D . Therefore computing W ∗ is also invariant to sample ordering in D . We thus conclude that Ridge Regression is schedule-robust. Since Ridge Regression learns each w i independently, it also mitigates catastrophic forgetting caused by class interference.
From (15), it is clear that both X ⊤ X and X ⊤ Y , as summations, are invariant to the ordering of samples in D . Therefore computing W ∗ is also invariant to sample ordering in D and may be performed online using (10). We thus conclude that Ridge Regression is schedule-invariant. Since Ridge Regression learns each w i independently, it also mitigates catastrophic forgetting caused by class interference.
[ { "text": "From (14), it is clear that both X ⊤ X and X ⊤ Y are invariant to the ordering of samples in D ." }, { "text": "Therefore computing W ∗ is also invariant to sample ordering in D ." }, { "text": "We thus conclude that Ridge Regression is schedule-robust." }, { "text": "Since Ridge Regression learns each w i independently, it also mitigates catastrophic forgetting caused by class interference." } ]
[ { "text": "From (15), it is clear that both X ⊤ X and X ⊤ Y , as summations, are invariant to the ordering of samples in D ." }, { "text": "Therefore computing W ∗ is also invariant to sample ordering in D and may be performed online using (10)." }, { "text": "We thus conclude that Ridge Regression is schedule-invariant." }, { "text": "Since Ridge Regression learns each w i independently, it also mitigates catastrophic forgetting caused by class interference." } ]
h4oxuOBuL
N3YakT0jH
0
h4oxuOBuL.N3YakT0jH.00
Dataset We construct a variant of the Taskonomy dataset (Zamir et al., 2018) to simulate fewshot learning of unseen dense prediction tasks. Taskonomy contains indoor images with various annotations, where we choose ten dense prediction tasks of diverse semantics and output dimensions: semantic segmentation (SS), surface normal (SN), Euclidean distance (ED), Z-buffer depth (ZD), texture edge (TE), occlusion edge (OE), 2D keypoints (K2), 3D keypoints (K3), reshading (RS), and principal curvature (PC), 2 . We partition the ten tasks to construct a 5 -fold split, in each of which two tasks are used for few-shot evaluation ( T test ) and the remaining eight are used for training ( T train ). To perform evaluation on tasks of novel semantics, we carefully construct the partition such that tasks for training and test are sufficiently different from each other e.g. , by grouping edge tasks (TE, OE) together as test tasks. The split is shown in Table 1. We process some single-channel tasks (ED, TE, OE) to multiple channels to increase taskdiversity, and standardize all labels to [0 , 1] . Additional details are in Appendix A.
Dataset We construct a variant of the Taskonomy dataset (Zamir et al., 2018) to simulate fewshot learning of unseen dense prediction tasks. Taskonomy contains indoor images with various annotations, where we choose ten dense prediction tasks of diverse semantics and output dimensions: semantic segmentation (SS), surface normal (SN), Euclidean distance (ED), Z-buffer depth (ZD), texture edge (TE), occlusion edge (OE), 2D keypoints (K2), 3D keypoints (K3), reshading (RS), and principal curvature (PC), 2 . We partition the ten tasks to construct a 5 -fold split, in each of which two tasks are used for few-shot evaluation ( T test ) and the remaining eight are used for training ( T train ). To perform evaluation on tasks of novel semantics, we carefully construct the partition such that tasks for training and test are sufficiently different from each other e.g. , by grouping edge tasks (TE, OE) together as test tasks. The split is shown in Table 1. We provide analysis on the number of training tasks in Appendix C.1.4. We also investigate an incomplete setting where images are not associated with whole training task labels in Appendix C.1.5. Additional details are in Appendix A.
[ { "text": "Dataset We construct a variant of the Taskonomy dataset (Zamir et al., 2018) to simulate fewshot learning of unseen dense prediction tasks." }, { "text": "Taskonomy contains indoor images with various annotations, where we choose ten dense prediction tasks of diverse semantics and output dimensions: semantic segmentation (SS), surface normal (SN), Euclidean distance (ED), Z-buffer depth (ZD), texture edge (TE), occlusion edge (OE), 2D keypoints (K2), 3D keypoints (K3), reshading (RS), and principal curvature (PC), 2 ." }, { "text": "We partition the ten tasks to construct a 5 -fold split, in each of which two tasks are used for few-shot evaluation ( T test ) and the remaining eight are used for training ( T train )." }, { "text": "To perform evaluation on tasks of novel semantics, we carefully construct the partition such that tasks for training and test are sufficiently different from each other e.g. , by grouping edge tasks (TE, OE) together as test tasks." }, { "text": "The split is shown in Table 1." }, { "text": "We process some single-channel tasks (ED, TE, OE) to multiple channels to increase taskdiversity, and standardize all labels to [0 , 1] ." }, { "text": "Additional details are in Appendix A." } ]
[ { "text": "Dataset We construct a variant of the Taskonomy dataset (Zamir et al., 2018) to simulate fewshot learning of unseen dense prediction tasks." }, { "text": "Taskonomy contains indoor images with various annotations, where we choose ten dense prediction tasks of diverse semantics and output dimensions: semantic segmentation (SS), surface normal (SN), Euclidean distance (ED), Z-buffer depth (ZD), texture edge (TE), occlusion edge (OE), 2D keypoints (K2), 3D keypoints (K3), reshading (RS), and principal curvature (PC), 2 ." }, { "text": "We partition the ten tasks to construct a 5 -fold split, in each of which two tasks are used for few-shot evaluation ( T test ) and the remaining eight are used for training ( T train )." }, { "text": "To perform evaluation on tasks of novel semantics, we carefully construct the partition such that tasks for training and test are sufficiently different from each other e.g. , by grouping edge tasks (TE, OE) together as test tasks." }, { "text": "The split is shown in Table 1." }, { "text": "We provide analysis on the number of training tasks in Appendix C.1.4. We also investigate an incomplete setting where images are not associated with whole training task labels in Appendix C.1.5." }, { "text": "Additional details are in Appendix A." } ]
Hyf9C44oB
xeE1TxVin
0
Hyf9C44oB.xeE1TxVin.00
Later, Rashid et al. (2018) relaxed the linear assumption in VDN by assuming that the Q-values of individual agents and the global one are also monotonic, and proposed QMIX employing a network that estimates joint action-values as a complex non-linear combination of per-agent values. Recently, Zambaldi et al. (2019) proposed the relational deep RL to learn environmental entities relations. However, they considered the entity relations on the pixel-level of raw visual data, which ignores the natural property of the influence of actions between agents. Tacchetti et al. (2019) proposed a novel network architecture called Relational Forward Model (RFM) for predictive modeling in multiagent learning. RFM takes a semantic description of the state of an environment as input, and outputs either an action prediction for each agent or a prediction of the cumulative reward of an episode. However, RFM does not consider from the perspective of the influence of each action on other agents. OpenAI (2018) designed network structures to address multiagent learning problems in a famous Multiplayer Online Battle Arena (MOBA), Dota2. They used a scaled-up version of PPO (Schulman et al. (2017)),divided the selection of the action type and the target unit to decrease the parameter dimensions of large-scale action space, while in many cases, the selection of action type and target unit should be considered simultaneously. They adopted the attention mechanism to compute the weight of choosing the target unit. However, the network input arbitrarily selects the first 64 units of information from each group, which does not consider the influence of each action on other agents. Recently, Iqbal & Sha (2019) proposed a multi-actor-attention-critic network (MAAC) for multiagent learning. MAAC modifies the centralized critic network using the attention mechanism to facilitate more effective and scalable learning in complex multiagent environments. There are also a number of works designing network structures for multiagent communication (Sukhbaatar et al., 2016; Singh et al., 2019).
Later, Rashid et al. (2018) relaxed the linear assumption in VDN by assuming that the Q-values of individual agents and the global one are also monotonic, and proposed QMIX employing a network that estimates joint action-values as a complex non-linear combination of per-agent values. Recently, Zambaldi et al. (2019) proposed the relational deep RL to learn environmental entities relations. However, they considered the entity relations on the pixel-level of raw visual data, which ignores the natural property of the influence of actions between agents. Tacchetti et al. (2019) proposed a novel network architecture called Relational Forward Model (RFM) for predictive modeling in multiagent learning. RFM takes a semantic description of the state of an environment as input, and outputs either an action prediction for each agent or a prediction of the cumulative reward of an episode. However, RFM does not consider from the perspective of the influence of each action on other agents. OpenAI designed network structures to address multiagent learning problems in a famous Multiplayer Online Battle Arena (MOBA), Dota2. They used a scaled-up version of PPO (Schulman et al. (2017)), adopted the attention mechanism to compute the weight of choosing the target unit, with some of information selected from all information as input. However, this selection is not considered from the influence of each action on other agents. There are also a number of works designing network structures for multiagent communication (Sukhbaatar et al., 2016; Singh et al., 2019).
[ { "text": "Later, Rashid et al." }, { "text": "(2018) relaxed the linear assumption in VDN by assuming that the Q-values of individual agents and the global one are also monotonic, and proposed QMIX employing a network that estimates joint action-values as a complex non-linear combination of per-agent values." }, { "text": "Recently, Zambaldi et al." }, { "text": "(2019) proposed the relational deep RL to learn environmental entities relations." }, { "text": "However, they considered the entity relations on the pixel-level of raw visual data, which ignores the natural property of the influence of actions between agents." }, { "text": "Tacchetti et al." }, { "text": "(2019) proposed a novel network architecture called Relational Forward Model (RFM) for predictive modeling in multiagent learning." }, { "text": "RFM takes a semantic description of the state of an environment as input, and outputs either an action prediction for each agent or a prediction of the cumulative reward of an episode." }, { "text": "However, RFM does not consider from the perspective of the influence of each action on other agents." }, { "text": "OpenAI (2018) designed network structures to address multiagent learning problems in a famous Multiplayer Online Battle Arena (MOBA), Dota2." }, { "text": "They used a scaled-up version of PPO (Schulman et al." }, { "text": "(2017)),divided the selection of the action type and the target unit to decrease the parameter dimensions of large-scale action space, while in many cases, the selection of action type and target unit should be considered simultaneously. They adopted the attention mechanism to compute the weight of choosing the target unit." }, { "text": "However, the network input arbitrarily selects the first 64 units of information from each group, which does not consider the influence of each action on other agents." }, { "text": "Recently, Iqbal & Sha (2019) proposed a multi-actor-attention-critic network (MAAC) for multiagent learning." }, { "text": "MAAC modifies the centralized critic network using the attention mechanism to facilitate more effective and scalable learning in complex multiagent environments." }, { "text": "There are also a number of works designing network structures for multiagent communication (Sukhbaatar et al., 2016; Singh et al., 2019)." } ]
[ { "text": "Later, Rashid et al." }, { "text": "(2018) relaxed the linear assumption in VDN by assuming that the Q-values of individual agents and the global one are also monotonic, and proposed QMIX employing a network that estimates joint action-values as a complex non-linear combination of per-agent values." }, { "text": "Recently, Zambaldi et al." }, { "text": "(2019) proposed the relational deep RL to learn environmental entities relations." }, { "text": "However, they considered the entity relations on the pixel-level of raw visual data, which ignores the natural property of the influence of actions between agents." }, { "text": "Tacchetti et al." }, { "text": "(2019) proposed a novel network architecture called Relational Forward Model (RFM) for predictive modeling in multiagent learning." }, { "text": "RFM takes a semantic description of the state of an environment as input, and outputs either an action prediction for each agent or a prediction of the cumulative reward of an episode." }, { "text": "However, RFM does not consider from the perspective of the influence of each action on other agents." }, { "text": "OpenAI designed network structures to address multiagent learning problems in a famous Multiplayer Online Battle Arena (MOBA), Dota2." }, { "text": "They used a scaled-up version of PPO (Schulman et al." }, { "text": "(2017)), adopted the attention mechanism to compute the weight of choosing the target unit, with some of information selected from all information as input." }, { "text": "However, this selection is not considered from the influence of each action on other agents." }, { "text": "" }, { "text": "" }, { "text": "There are also a number of works designing network structures for multiagent communication (Sukhbaatar et al., 2016; Singh et al., 2019)." } ]
MlsTroGskF
pITyXorUI
0
MlsTroGskF.pITyXorUI.00
Measuring effective gradient flow We conduct large scale experiments to evaluate the role of regularization, optimization and architecture choices on sparse models. We evaluate multiple datasets and architectures and propose a new measure of gradient flow, Effective Gradient Flow ( EGF ), that we show to be a stronger predictor of top-line metrics such as accuracy and loss than current gradient flow formulations. Not all optimizers and regulizers are created equal Weight decay and data augmentation can hurt sparse network optimization, particularly when used in conjunction with accelerating, adaptive optimization methods that use an exponentially decaying average of past squared gradients, such as Adam (Kingma & Ba, 2014) and RMSProp (Hinton et al., 2012). We show this is highly correlated to a high EGF (gradient flow). Batch normalization plays a disproportionate role in stabilizing sparse networks We show that batch normalization is more important for sparse networks than it is for dense networks, which suggests that gradient instability is a key obstacle to starting sparse. Changing activation functions can benefit sparse networks We benchmark a wide set of activation functions, specifically ReLU (Nair & Hinton, 2010) and non-saturating activation functions such as PReLU (He et al., 2015), ELU (Clevert et al., 2015), SReLU (Jin et al., 2015),Swish (Ramachandran et al., 2017) and Sigmoid (Neal, 1992). Our results show that when using adaptive optimization methods, Swish is a promising activation, while when using stochastic gradient descent, PReLU preforms better than the other activation functions.
Measuring effective gradient flow We conduct large scale experiments to evaluate the role of regularization, optimization and architecture choices on sparse models. We evaluate multiple datasets and architectures and propose a new measure of gradient flow, Effective Gradient Flow ( EGF ), that we show to be a stronger predictor of top-line metrics such as accuracy and loss than current gradient flow formulations. Batch normalization plays a disproportionate role in stabilizing sparse networks We show that batch normalization is more important for sparse networks than it is for dense networks, which suggests that gradient instability is a key obstacle to starting sparse. Not all optimizers and regulizers are created equal Weight decay and data augmentation can hurt sparse network optimization, particularly when used in conjunction with accelerating, adaptive optimization methods that use an exponentially decaying average of past squared gradients, such as Adam (Kingma & Ba, 2014) and RMSProp (Hinton et al., 2012). We show this is highly correlated to a high EGF (gradient flow) and how batch normalization helps stabilize EGF . 4. Changing activation functions can benefit sparse networks We benchmark a wide set of activation functions, specifically ReLU (Nair & Hinton, 2010) and non-saturating activation functions such as PReLU (He et al., 2015), ELU (Clevert et al., 2015), SReLU (Jin et al., 2015),Swish (Ramachandran et al., 2017) and Sigmoid (Neal, 1992). Our results show that when using adaptive optimization methods, Swish is a promising activation function, while when using stochastic gradient descent, PReLU preforms better than the other activation functions.
[ { "text": "Measuring effective gradient flow We conduct large scale experiments to evaluate the role of regularization, optimization and architecture choices on sparse models." }, { "text": "We evaluate multiple datasets and architectures and propose a new measure of gradient flow, Effective Gradient Flow ( EGF ), that we show to be a stronger predictor of top-line metrics such as accuracy and loss than current gradient flow formulations." }, { "text": "" }, { "text": "Not all optimizers and regulizers are created equal Weight decay and data augmentation can hurt sparse network optimization, particularly when used in conjunction with accelerating, adaptive optimization methods that use an exponentially decaying average of past squared gradients, such as Adam (Kingma & Ba, 2014) and" }, { "text": "RMSProp (Hinton et al., 2012)." }, { "text": "We show this is highly correlated to a high EGF (gradient flow)." }, { "text": "Batch normalization plays a disproportionate role in stabilizing sparse networks We show that batch normalization is more important for sparse networks than it is for dense networks, which suggests that gradient instability is a key obstacle to starting sparse." }, { "text": "Changing activation functions can benefit sparse networks We benchmark a wide set of activation functions, specifically ReLU (Nair & Hinton, 2010) and non-saturating activation functions such as PReLU (He et al., 2015), ELU (Clevert et al., 2015), SReLU (Jin et al., 2015),Swish (Ramachandran et al., 2017) and Sigmoid (Neal, 1992)." }, { "text": "Our results show that when using adaptive optimization methods, Swish is a promising activation, while when using stochastic gradient descent, PReLU preforms better than the other activation functions." } ]
[ { "text": "Measuring effective gradient flow We conduct large scale experiments to evaluate the role of regularization, optimization and architecture choices on sparse models." }, { "text": "We evaluate multiple datasets and architectures and propose a new measure of gradient flow, Effective Gradient Flow ( EGF ), that we show to be a stronger predictor of top-line metrics such as accuracy and loss than current gradient flow formulations." }, { "text": "Batch normalization plays a disproportionate role in stabilizing sparse networks We show that batch normalization is more important for sparse networks than it is for dense networks, which suggests that gradient instability is a key obstacle to starting sparse." }, { "text": "Not all optimizers and regulizers are created equal Weight decay and data augmentation can hurt sparse network optimization, particularly when used in conjunction with accelerating, adaptive optimization methods that use an exponentially decaying average of past squared gradients, such as Adam (Kingma & Ba, 2014) and" }, { "text": "RMSProp (Hinton et al., 2012)." }, { "text": "We show this is highly correlated to a high EGF (gradient flow) and how batch normalization helps stabilize EGF . 4." }, { "text": "" }, { "text": "Changing activation functions can benefit sparse networks We benchmark a wide set of activation functions, specifically ReLU (Nair & Hinton, 2010) and non-saturating activation functions such as PReLU (He et al., 2015), ELU (Clevert et al., 2015), SReLU (Jin et al., 2015),Swish (Ramachandran et al., 2017) and Sigmoid (Neal, 1992)." }, { "text": "Our results show that when using adaptive optimization methods, Swish is a promising activation function, while when using stochastic gradient descent, PReLU preforms better than the other activation functions." } ]
Syvs2BjjB
rkdauEnjB
0
Syvs2BjjB.rkdauEnjB.00
SVHN/Cifar10, respectively). As detailed in Sec. 3, we report ROC AUC (higher is better) as well as our confidence-thresholded test error (Err; lower is better) and robust test error (RErr; lower is better) ; Err is reported on the test sets, and RErr reported on the attacked test examples. More details and experimental results can be found in Appendix B.
SVHN/Cifar10, respectively). As detailed in Sec. 3, we report ROC AUC (higher is better) as well as our confidence-thresholded test error (Err; lower is better) and robust test error (RErr; lower is better) ; Err is reported on the full test sets, while ROC AUC and RErr are reported on the firstattacked test examples. More details and experimental results can be found in Appendix B.
[ { "text": "SVHN/Cifar10, respectively)." }, { "text": "As detailed in Sec. 3, we report ROC AUC (higher is better) as well as our confidence-thresholded test error (Err; lower is better) and robust test error (RErr; lower is" }, { "text": " better) ; Err is reported on the test sets, and RErr reported on the attacked test examples." }, { "text": "More details and experimental results can be found in Appendix B." } ]
[ { "text": "SVHN/Cifar10, respectively)." }, { "text": "As detailed in Sec. 3, we report ROC AUC (higher is better) as well as our confidence-thresholded test error (Err; lower is better) and" }, { "text": "robust test error (RErr; lower is better) ; Err is reported on the full test sets, while ROC AUC and RErr are reported on the firstattacked test examples." }, { "text": "More details and experimental results can be found in Appendix B." } ]
Syvs2BjjB
rkdauEnjB
1
Syvs2BjjB.rkdauEnjB.01
We give additional details on our experimental setup, specifically regarding attacks, training and the used evaluation metrics. Afterwards, we include additional experimental results, including ablation studies, results for 98% true positive rate (TPR), results per attack, results per corruption on MNISTC (Mu & Gilmer, 2019) and Cifar10-C (Hendrycks & Dietterich, 2019) and a comparison with (Pang et al., 2018) . B. A TTACKS Complementary to the description of the projected gradient descent (PGD) attack by Madry et al. (2018) and our adapted attack, we provide a detailed algorithm in Alg. 2.
We give additional details on our experimental setup, specifically regarding attacks, training and the used evaluation metrics. Afterwards, we include additional experimental results, including ablation studies, results for 98% true positive rate (TPR), results per attack, results per corruption on MNISTC (Mu & Gilmer, 2019) and Cifar10-C (Hendrycks & Dietterich, 2019) and a comparison with (Pang et al., 2018) .
[ { "text": "We give additional details on our experimental setup, specifically regarding attacks, training and the used evaluation metrics." }, { "text": "Afterwards, we include additional experimental results, including ablation studies, results for 98% true positive rate (TPR), results per attack, results per corruption on MNISTC (Mu & Gilmer, 2019) and Cifar10-C (Hendrycks & Dietterich, 2019) and a comparison with (Pang et al., 2018) . B. A TTACKS Complementary to the description of the projected gradient descent (PGD) attack by Madry et al. (2018) and our adapted attack, we provide a detailed algorithm in Alg. 2." } ]
[ { "text": "We give additional details on our experimental setup, specifically regarding attacks, training and the used evaluation metrics." }, { "text": "Afterwards, we include additional experimental results, including ablation studies, results for 98% true positive rate (TPR), results per attack, results per corruption on MNISTC (Mu & Gilmer, 2019) and Cifar10-C (Hendrycks & Dietterich, 2019) and a comparison with (Pang et al., 2018) ." } ]
2woIg-xaV
odX6k0T5Kv
0
2woIg-xaV.odX6k0T5Kv.00
The information theoretic framework promises to explain the predictive power of neural networks. In particular, the information plane analysis, which measures mutual information (MI) between input and representation as well as representation and output, should give rich insights into the training process. This approach, however, was shown to strongly depend on the choice of estimator of theMI: measuring discrete MI does not capture the nature of deterministic neural networks and continuous data distributions, and different approaches for discretization arbitrarily change results. On the other hand, measuring continuous MI for a deterministic network is not mathematically meaningful. In this work we show how the stochasticity induced by dropout layers can be utilized to estimate MI in a theoretically sound manner. We demonstrate in a range of experiments that this approach enables a meaningful information plane analysis for the large class of dropout neural networks that is widely used in practice.
The information-theoretic framework promises to explain the predictive power of neural networks. In particular, the information plane analysis, which measures mutual information (MI) between input and representation as well as representation and output, should give rich insights into the training process. This approach, however, was shown to strongly depend on the choice of estimator of the MI especially for deterministic networks, since in these the MI between input and representation is often infinite. Thus, the estimated values depend strongly on the different approaches for estimation, but do not adequately present the training process from an information-theoretic perspective. In this work, we show that dropout with continuously distributed multiplicative noise ensures that MI is finite. We demonstrate in a range of experiments that this subsequently enables a meaningful information plane analysis for a class of dropout neural networks that is widely used in practice.
[ { "text": "The information theoretic framework promises to explain the predictive power of neural networks." }, { "text": "In particular, the information plane analysis, which measures mutual information (MI) between input and representation as well as representation and output, should give rich insights into the training process." }, { "text": "This approach, however, was shown to strongly depend on the choice of estimator of theMI: measuring discrete MI does not capture the nature of deterministic neural networks and continuous data distributions, and different approaches for discretization arbitrarily change results. On the other hand, measuring continuous MI for a deterministic network is not mathematically meaningful." }, { "text": "In this work we show how the stochasticity induced by dropout layers can be utilized to estimate MI in a theoretically sound manner." }, { "text": "We demonstrate in a range of experiments that this approach enables a meaningful information plane analysis for the large class of dropout neural networks that is widely used in practice." } ]
[ { "text": "The information-theoretic framework promises to explain the predictive power of neural networks." }, { "text": "In particular, the information plane analysis, which measures mutual information (MI) between input and representation as well as representation and output, should give rich insights into the training process." }, { "text": "This approach, however, was shown to strongly depend on the choice of estimator of the MI especially for deterministic networks, since in these the MI between input and representation is often infinite. Thus, the estimated values depend strongly on the different approaches for estimation, but do not adequately present the training process from an information-theoretic perspective." }, { "text": "In this work, we show that dropout with continuously distributed multiplicative noise ensures that MI is finite." }, { "text": "We demonstrate in a range of experiments that this subsequently enables a meaningful information plane analysis for a class of dropout neural networks that is widely used in practice." } ]
2woIg-xaV
odX6k0T5Kv
1
2woIg-xaV.odX6k0T5Kv.01
The caveats of different approaches with respect to measuring MI were discussed widely in the literature (Saxe et al., 2019; Kolchinsky et al., 2019). In the following we shortly summarize the problems motivating the approach proposed in this paper. It should be noted that these problems do not appear for the MI measured between representations and targets, since they are not connected via a deterministic function. We therefore concentrate on I ( X ; Z ) , where we assume that Z does not contain additional stochasticity, i.e., we describe a deterministic neural network with Z = f ( X ) .
The caveats of different approaches with respect to measuring MI were discussed widely in the literature (Saxe et al., 2019; Geiger, 2021). These caveats do not appear for the MI measured between representations Z and targets Y , since the target is in most cases a discrete (class) RV, for which MI is always finite. (But see the work of Kolchinsky et al. for problems with the information bottleneck hypothesis in this setting. We therefore concentrate on I ( X ; Z ) , considering a deterministic neural network with Z = f ( X ) .
[ { "text": "The caveats of different approaches with respect to measuring MI were discussed widely in the literature (Saxe et al., 2019; Kolchinsky et al., 2019)." }, { "text": "In the following we shortly summarize the problems motivating the approach proposed in this paper." }, { "text": "It should be noted that these problems do not appear for the MI measured between representations and targets, since they are not connected via a deterministic function." }, { "text": "" }, { "text": " We therefore concentrate on I ( X ; Z ) , where we assume that Z does not contain additional stochasticity, i.e., we describe a deterministic neural network with Z = f ( X ) ." } ]
[ { "text": "The caveats of different approaches with respect to measuring MI were discussed widely in the literature (Saxe et al., 2019; Geiger, 2021)." }, { "text": "" }, { "text": "These caveats do not appear for the MI measured between representations Z and targets Y , since the target is in most cases a discrete (class) RV, for which MI is always finite." }, { "text": "(But see the work of Kolchinsky et al." }, { "text": "for problems with the information bottleneck hypothesis in this setting. We therefore concentrate on I ( X ; Z ) , considering a deterministic neural network with Z = f ( X ) ." } ]
2woIg-xaV
odX6k0T5Kv
2
2woIg-xaV.odX6k0T5Kv.02
The first option is to assume the input to be drawn from a discrete distribution. This view makes it easy to use a finite dataset S at hand to describe the distribution and is supported by the finiteness of the accuracy of the used computational resources (Lorenzen et al., 2021). More precisely,it describes theinput distribution as a uniform on the training data andfixes the discretization corresponding to the computer precision (or other selected bin size for ease of experimental setup). In this case Z is discrete as well and the MI between X and Z is computed like I ( X ; Z ) = H ( Z ) − H ( Z | X ) = H ( Z ) − H ( f ( X ) | X ) = H ( Z ) − = H ( Z ) , where we assume that the network forward function f ( · ) is deterministic. Thus, estimated MI between input and representation essentially corresponds to the entropy of the representation, which is equal to the entropy of the dataset log | S | unless the forward pass maps some of the different data points from the dataset to the same value in the latent space.
One option for estimating this MI is to assume the input to be drawn from a discrete distribution. This view makes it easy to use a finite dataset S to describe the distribution and is supported by the finiteness of the accuracy of the used computational resources (Lorenzen et al., 2021). More precisely, the distribution of X is assumed uniform on the dataset S , and the discretization of Z is performed at a fixed bin size (e.g., corresponding to the computer precision). In this case, Z is discrete as well and the MI between X and Z is computed as I ( X ; Z ) = H ( Z ) − H ( Z | X ) = H ( Z ) − H ( f ( X ) | X ) = H ( Z ) − = H ( Z ) , since the network forward function f ( · ) is deterministic. Thus, the estimated MI between input and representation essentially corresponds to the entropy of the representation, which is equal to the entropy H ( Z ) = H ( X ) = log | S | of the empirical distribution on the dataset, unless f maps some of the different data points from the dataset to the same value in latent space.
[ { "text": "The first option is to assume the input to be drawn from a discrete distribution." }, { "text": "This view makes it easy to use a finite dataset S at hand to describe the distribution and is supported by the finiteness of the accuracy of the used computational resources (Lorenzen et al., 2021)." }, { "text": "More precisely,it describes theinput distribution as a uniform on the training data andfixes the discretization corresponding to the computer precision (or other selected bin size for ease of experimental setup)." }, { "text": "In this case Z is discrete as well and the MI between X and Z is computed like I ( X ; Z ) =" }, { "text": "H ( Z ) − H" }, { "text": "( Z | X ) =" }, { "text": "H ( Z ) − H ( f ( X ) | X ) =" }, { "text": "H ( Z ) − \n" }, { "text": "= H ( Z ) , where we assume that the network forward function f ( · ) is deterministic." }, { "text": "Thus, estimated MI between input and representation essentially corresponds to the entropy of the representation, which is equal to the entropy of the dataset log | S | unless the forward pass maps some of the different data points from the dataset to the same value in the latent space." } ]
[ { "text": "One option for estimating this MI is to assume the input to be drawn from a discrete distribution." }, { "text": "This view makes it easy to use a finite dataset S to describe the distribution and is supported by the finiteness of the accuracy of the used computational resources (Lorenzen et al., 2021)." }, { "text": "More precisely, the distribution of X is assumed uniform on the dataset S , and the discretization of Z is performed at a fixed bin size (e.g., corresponding to the computer precision)." }, { "text": "In this case, Z is discrete as well and the MI between X and Z is computed as I ( X ; Z ) =" }, { "text": "H ( Z ) − H" }, { "text": "( Z | X ) =" }, { "text": "H ( Z ) − H ( f ( X ) | X ) =" }, { "text": "H ( Z ) − \n" }, { "text": "= H ( Z ) , since the network forward function f ( · ) is deterministic." }, { "text": "Thus, the estimated MI between input and representation essentially corresponds to the entropy of the representation, which is equal to the entropy H ( Z ) = H ( X ) = log | S | of the empirical distribution on the dataset, unless f maps some of the different data points from the dataset to the same value in latent space." } ]
2woIg-xaV
odX6k0T5Kv
3
2woIg-xaV.odX6k0T5Kv.03
The second option is to assume X to be drawn from a continuous distribution. This is more aligned to the common description of real world data where we assume that it is drawn from some data generating distribution D over the whole space of X × Y . In this case, if the network transformation f ( · ) results in a discrete distribution of representations Z one can still use the decomposition I ( X, Z ) = H ( Z ) − H ( Z | X ) to estimateand describe the MI based on Shannon entropy. As shown in Theorem 1 of Amjad and Geiger (2019) this is however not the case for neural networks with commonly used activation functions. If f ( · ) is deterministic and Z is not purely discrete, the MI between X and Z is infinite. This happens because the joint distribution is not absolutely continuous with respect to the product of the marginals. Thus, the approach to estimate the MI of continuous input and hidden representation in practice is to modify the space of inputs and/or representations to be discrete, e.g., by binning. For example, binning the representation Z to ˆ Z again yields I ( X ; ˆ Z ) = H ( ˆ Z ) , and the qualitative behavior of this entropy will be defined by properties of activation functions and selected bin size (Saxe et al., 2019).
A different option is to assume X to be drawn from a continuous distribution. This is more aligned to the common description of real-world data and allows us to cover the setting where data is drawn from a data generating distribution p X,Y over the whole space of X × Y . In this case, if the net- work transformation f ( · ) results in a discrete distribution of the representations Z , one can use the decomposition I ( X, Z ) = H ( Z ) − H ( Z | X ) = H ( Z ) to estimate MI based on Shannon entropy, provided that the sample size is sufficiently large (note that the dimension N of Z may be large, and that therefore the estimation of H ( Z ) may suffer from the curse of dimensionality). As shown in Theorem 1 of Amjad and Geiger (2019) this is however not the case for neural networks with commonly used activation functions. If f ( · ) is deterministic and Z is not purely discrete, the MI between X and Z is infinite. By binning, i.e., by quantizing Z to a discrete RV ˆ Z , the MI I ( X ; ˆ Z ) = H ( ˆ Z ) remains finite, but the qualitative behavior of this entropy will be defined by properties of activation functions and selected bin size (Saxe et al., 2019).
[ { "text": "The second option is to assume X to be drawn from a continuous distribution." }, { "text": "This is more aligned to the common description of real world data where we assume that it is drawn from some data generating distribution D over the whole space of X × Y ." }, { "text": "In this case, if the network transformation f ( · ) results in a discrete distribution of representations Z one can still use the decomposition" }, { "text": "I ( X, Z ) =" }, { "text": "H ( Z ) − H" }, { "text": "( Z | X ) to estimateand describe the MI based on Shannon entropy." }, { "text": "As shown in Theorem 1 of Amjad and Geiger (2019) this is however not the case for neural networks with commonly used activation functions." }, { "text": "If f ( · ) is deterministic and Z is not purely discrete, the MI between X and Z is infinite." }, { "text": "This happens because the joint distribution is not absolutely continuous with respect to the product of the marginals." }, { "text": "Thus, the approach to estimate the MI of continuous input and hidden representation in practice is to modify the space of inputs and/or representations to be discrete, e.g., by binning." }, { "text": "For example, binning the representation Z to ˆ Z again yields I ( X ; ˆ Z )" }, { "text": "=" }, { "text": "H ( ˆ Z ) , and the qualitative behavior of this entropy will be defined by properties of activation functions and selected bin size (Saxe et al., 2019)." } ]
[ { "text": "A different option is to assume X to be drawn from a continuous distribution." }, { "text": "This is more aligned to the common description of real-world data and allows us to cover the setting where data is drawn from a data generating distribution p X,Y over the whole space of X × Y ." }, { "text": "In this case, if the net- work transformation f ( · ) results in a discrete distribution of the representations Z , one can use the decomposition" }, { "text": "I ( X, Z ) =" }, { "text": "H ( Z ) − H" }, { "text": "( Z | X ) = H ( Z ) to estimate MI based on Shannon entropy, provided that the sample size is sufficiently large (note that the dimension N of Z may be large, and that therefore the estimation of H ( Z ) may suffer from the curse of dimensionality)." }, { "text": "As shown in Theorem 1 of Amjad and Geiger (2019) this is however not the case for neural networks with commonly used activation functions." }, { "text": "If f ( · ) is deterministic and Z is not purely discrete, the MI between X and Z is infinite." }, { "text": "" }, { "text": "" }, { "text": "By binning, i.e., by quantizing Z to a discrete RV ˆ Z , the MI I ( X ; ˆ Z )" }, { "text": "=" }, { "text": "H ( ˆ Z ) remains finite, but the qualitative behavior of this entropy will be defined by properties of activation functions and selected bin size (Saxe et al., 2019)." } ]
2woIg-xaV
odX6k0T5Kv
4
2woIg-xaV.odX6k0T5Kv.04
Overall, the existing research indicates the need for a method to measure MI for networks with continuous input distributions, since the known approaches to MI estimation lead to an analysis of the development of neural network representations that is limited to geometric interpretation (Geiger, 2021). Based on the knowledge about benefits of the stochasticity added to the training of the neural networks, e.g., in the form of dropout, we suggest to concentrate on the setup where representations are not deterministic, which allows for theoretically justified values of MI. While additive noise is a valuable step towards measuring such MI, it requires noise introduced onall the layers of a network and samples from multiple copies of the network, moreover, it is centered around geometrical meaning of the information compression. In this work, we analyse the stochastic representation obtained via a dropout layer, and show that such noise has a great potential for revealing theunderlying information flow of the neural network.
As discussed above, both the information planes of deterministic neural networks as well as of stochastic neural networks with additive noise at the representations show a geometric picture (and in the former case the geometric picture is the only valid one, since MI is infinite in this case). We therefore, in this work, study the estimation of MI in networks with dropout layers, i.e., in settings where the stochasticity is introduced by multiplicative, rather than additive noise. In what follows we will investigate the requirements on the multiplicative noise for MI to remain finite, and whether the resulting information planes confirm the information bottleneck hypothesis. Finally, we will compare the information planes with quantities reflecting the geometric effects in latent space, effectively determining whether the information-theoretic compression due to multiplicative noise is intrinsically different from geometric compression.
[ { "text": "Overall, the existing research indicates the need for a method to measure MI for networks with continuous input distributions, since the known approaches to MI estimation lead to an analysis of the development of neural network representations that is limited to geometric interpretation (Geiger, 2021). Based on the knowledge about benefits of the stochasticity added to the training of the neural networks, e.g., in the form of dropout, we suggest to concentrate on the setup where representations are not deterministic, which allows for theoretically justified values of MI." }, { "text": "While additive noise is a valuable step towards measuring such MI, it requires noise introduced onall the layers of a network and samples from multiple copies of the network, moreover, it is centered around geometrical meaning of the information compression. In this work, we analyse the stochastic representation obtained via a dropout layer, and show that such noise has a great potential for revealing theunderlying information flow of the neural network." } ]
[ { "text": "As discussed above, both the information planes of deterministic neural networks as well as of stochastic neural networks with additive noise at the representations show a geometric picture (and in the former case the geometric picture is the only valid one, since MI is infinite in this case). We therefore, in this work, study the estimation of MI in networks with dropout layers, i.e., in settings where the stochasticity is introduced by multiplicative, rather than additive noise." }, { "text": "In what follows we will investigate the requirements on the multiplicative noise for MI to remain finite, and whether the resulting information planes confirm the information bottleneck hypothesis. Finally, we will compare the information planes with quantities reflecting the geometric effects in latent space, effectively determining whether the information-theoretic compression due to multiplicative noise is intrinsically different from geometric compression." } ]
2woIg-xaV
odX6k0T5Kv
5
2woIg-xaV.odX6k0T5Kv.05
Instead of approximating MI for regularization, our goal is to compute MI accurately using dropout as a source of stochasticity. For that, we first show a negative result: binary dropout leads to MI being infinite in theory. We then show thatany continuous dropout prevents MIfrom becoming infinite and thus allows for its meaningful estimation.
In this section, we investigate whether neural networks with dropout have indeed finite MI between input X and representation Z . While we first show a negative result by proving that binary dropout still leads to I ( X ; Z ) = ∞ , our Theorem 3.3 shows that dropout with continuous multiplicative noise keeps MI finite. This fact will then allow us to estimate MI for such neural networks in Sections 4 and 5.
[ { "text": "Instead of approximating MI for regularization, our goal is to compute MI accurately using dropout as a source of stochasticity." }, { "text": "For that, we first show a negative result: binary dropout leads to MI being infinite in theory. We then show thatany continuous dropout prevents MIfrom becoming infinite and thus allows for its meaningful estimation." } ]
[ { "text": "In this section, we investigate whether neural networks with dropout have indeed finite MI between input X and representation Z ." }, { "text": "While we first show a negative result by proving that binary dropout still leads to I ( X ; Z ) = ∞ , our Theorem 3.3 shows that dropout with continuous multiplicative noise keeps MI finite. This fact will then allow us to estimate MI for such neural networks in Sections 4 and 5." } ]
2woIg-xaV
odX6k0T5Kv
6
2woIg-xaV.odX6k0T5Kv.06
Thus, binary dropout cannot be used to estimate MIreliably and is not helpful for the information plane analysis. The main obstacle is the finiteness of the combinatorial space induced bythe binary noise on the space of representations.
Thus, for neural networks with binary dropout, any finite estimate of MI is “infinitely wrong”, and the resulting information plane does not permit an information-theoretic interpretation. Essentially, the stochasticity added by binary dropout is combinatorial, and hence cannot mitigate the “continuous” stochasticity available in the input X .
[ { "text": "Thus, binary dropout cannot be used to estimate MIreliably and is not helpful for the information plane analysis." }, { "text": "The main obstacle is the finiteness of the combinatorial space induced bythe binary noise on the space of representations." } ]
[ { "text": "Thus, for neural networks with binary dropout, any finite estimate of MI is “infinitely wrong”, and the resulting information plane does not permit an information-theoretic interpretation." }, { "text": "Essentially, the stochasticity added by binary dropout is combinatorial, and hence cannot mitigate the “continuous” stochasticity available in the input X ." } ]
2woIg-xaV
odX6k0T5Kv
7
2woIg-xaV.odX6k0T5Kv.07
We consider a simple toy problem for validating our approach to computing MI where input X is generated from an n -dimensional standard normal distribution, then modified with a function f ( X ) = 2 X + 0 . 5 and Gaussian dropout distributed according to N (1 , σ 2 ) is applied. We investigate the convergence of our estimator for h ( Z | X ) for increasing number of samples. For each input sample we generate 10 noise masks, thus getting 10 samples of Z . Results are shown in Fig. As it can be seen the estimation stabilizes with larger amount of samples for different dimensionality of the data. We also compare the estimate to the upper bound for h ( Z ) in Fig 3.
We consider a simple toy problem for validating our approach to estimating MI where the input X is generated from an n -dimensional standard normal distribution, modified with a function f ( X ) = 2 X + 0 . 5 , and then subjected to Gaussian dropout distributed according to N (1 , σ 2 ) . We investigate the convergence of our estimator for h ( Z | X ) for increasing number of samples. For each input data point, we generate 10 noise masks, thus obtaining 10 samples of Z for each x ( j ) . The results in Fig.show that the estimation stabilizes with larger amount of samples for different dimensionality of the data. We also compare the estimate to the upper bound for h ( Z ) in Fig 3.
[ { "text": "We consider a simple toy problem for validating our approach to computing MI where input X is generated from an n -dimensional standard normal distribution, then modified with a function f ( X ) =" }, { "text": "2 X + 0 ." }, { "text": "5 and Gaussian dropout distributed according to N (1 , σ 2 ) is applied." }, { "text": "We investigate the convergence of our estimator for h ( Z | X ) for increasing number of samples." }, { "text": "For each input sample we generate 10 noise masks, thus getting 10 samples of Z ." }, { "text": "Results are shown in Fig." }, { "text": "As it can be seen the estimation stabilizes with larger amount of samples for different dimensionality of the data." }, { "text": "We also compare the estimate to the upper bound for h ( Z ) in Fig 3." } ]
[ { "text": "We consider a simple toy problem for validating our approach to estimating MI where the input X is generated from an n -dimensional standard normal distribution, modified with a function f ( X ) =" }, { "text": "2 X + 0 ." }, { "text": "5 , and then subjected to Gaussian dropout distributed according to N (1 , σ 2 ) ." }, { "text": "We investigate the convergence of our estimator for h ( Z | X ) for increasing number of samples." }, { "text": "For each input data point, we generate 10 noise masks, thus obtaining 10 samples of" }, { "text": "Z for each x ( j ) ." }, { "text": "The results in Fig.show that the estimation stabilizes with larger amount of samples for different dimensionality of the data." }, { "text": "We also compare the estimate to the upper bound for h ( Z ) in Fig 3." } ]
2woIg-xaV
odX6k0T5Kv
8
2woIg-xaV.odX6k0T5Kv.08
Moreover, we compare our estimator to the binning approach, the EDGE estimator (Noshad et al., 2019), and the lower bounds analyzed by McAllester and Stratos (2020). The results are shown in Fig. With binning we underestimate MI when bin size is small and overestimate with large bin size (Ross, 2014), which can be clearly seen in the plots where bins are organized both by size and by amount. Moreover, in with the high dimensional data, binning approach hits the maximal possible value of log ( | S | ) ( S being the dataset at hand) very fast, not being able to overestimate the MI value. According to McAllester and Stratos (2020) lower bound based MI estimators also need exponentially (in the true value of MI) many data points for correct value prediction, otherwise they will always heavily underestimate the value. Further computations with different noise level are demonstrated in Appendix A.3.
We finally compare our estimator to binning, the EDGE estimator (Noshad et al., 2019), and the lower bounds analyzed by McAllester and Stratos (2020). In the plot, doe stands for difference-ofentropies (DoE) estimator and doe l stands for DoE with logistic parametrization McAllester and Stratos (2020). The results are shown in Fig. With binning we underestimate MI when bin size is small and overestimate with large bin size (Ross, 2014), which can be clearly seen in the plots where bins are organized both by size and by amount. Moreover, with the high-dimensional data, binning hits the maximal possible value of log( | S | ) very fast, not being able to overestimate the MI value. According to McAllester and Stratos (2020), lower bound-based MI estimators also need exponentially (in the true value of MI) many data points for correct value prediction, otherwise they will always heavily underestimate the value. Further computations with different noise level are demonstrated in Appendix A.3.
[ { "text": "Moreover, we compare our estimator to the binning approach, the EDGE estimator (Noshad et al., 2019), and the lower bounds analyzed by McAllester and Stratos (2020). " }, { "text": "The results are shown in Fig." }, { "text": "With binning we underestimate MI when bin size is small and overestimate with large bin size (Ross, 2014), which can be clearly seen in the plots where bins are organized both by size and by amount." }, { "text": "Moreover, in with the high dimensional data, binning approach hits the maximal possible value of log ( | S | ) ( S being the dataset at hand) very fast, not being able to overestimate the MI value." }, { "text": "According to McAllester and Stratos (2020) lower bound based MI estimators also need exponentially (in the true value of MI) many data points for correct value prediction, otherwise they will always heavily underestimate the value." }, { "text": "Further computations with different noise level are demonstrated in Appendix A.3." } ]
[ { "text": "We finally compare our estimator to binning, the EDGE estimator (Noshad et al., 2019), and the lower bounds analyzed by McAllester and Stratos (2020). In the plot, doe stands for difference-ofentropies (DoE) estimator and doe l stands for DoE with logistic parametrization McAllester and Stratos (2020)." }, { "text": "The results are shown in Fig." }, { "text": "With binning we underestimate MI when bin size is small and overestimate with large bin size (Ross, 2014), which can be clearly seen in the plots where bins are organized both by size and by amount." }, { "text": "Moreover, with the high-dimensional data, binning hits the maximal possible value of log( | S | ) very fast, not being able to overestimate the MI value." }, { "text": "According to McAllester and Stratos (2020), lower bound-based MI estimators also need exponentially (in the true value of MI) many data points for correct value prediction, otherwise they will always heavily underestimate the value." }, { "text": "Further computations with different noise level are demonstrated in Appendix A.3." } ]
2woIg-xaV
odX6k0T5Kv
9
2woIg-xaV.odX6k0T5Kv.09
We use the estimators described in the previous section for an information plane analysis of dropout networks. In the a first set of experiments, we analysed information planes (IPs) for the training of Gaussian and information dropout networks.
We use the estimators described in the previous section for an information plane (IP) analysis of networks with Gaussian and information dropout. We always consider only the representation corresponding to the first dropout layer 3 and measure the MI in nats, e.g. use the natural logarithm.
[ { "text": "We use the estimators described in the previous section for an information plane analysis of dropout networks. In the a first set of experiments, we analysed information planes (IPs) for the training of Gaussian and information dropout networks." } ]
[ { "text": "We use the estimators described in the previous section for an information plane (IP) analysis of networks with Gaussian and information dropout. We always consider only the representation corresponding to the first dropout layer 3 and measure the MI in nats, e.g. use the natural logarithm." } ]
2woIg-xaV
odX6k0T5Kv
10
2woIg-xaV.odX6k0T5Kv.10
This indicates that the MI compression measured in dropout networks is different from purely geometrical compression, which would make it fundamentally different to all the discrete MI estimators and the additive noise approach (Basirat et al., 2021).
This indicates that either the MI compression measured in dropout networks is different from purely geometrical compression, or that the number of samples | S | is insufficient to reliably estimate I ( X ; ˆ Z ) by binning. While both options are interesting in their own regard and exhibit the limitations of binning-based estimators in the study of IPs, we tend to arguing for the former, since the binningbased estimates of I ( X ; Z ) are much lower than log | S | in Figs. 1 and 5.
[ { "text": "This indicates that the MI compression measured in dropout networks is different from purely geometrical compression, which would make it fundamentally different to all the discrete MI estimators and the additive noise approach (Basirat et al., 2021)." } ]
[ { "text": "This indicates that either the MI compression measured in dropout networks is different from purely geometrical compression, or that the number of samples | S | is insufficient to reliably estimate I ( X ; ˆ Z ) by binning. While both options are interesting in their own regard and exhibit the limitations of binning-based estimators in the study of IPs, we tend to arguing for the former, since the binningbased estimates of I ( X ; Z ) are much lower than log | S | in Figs. 1 and 5." } ]
2woIg-xaV
odX6k0T5Kv
11
2woIg-xaV.odX6k0T5Kv.11
KL-divergence) term the level of achieved MI with the label is higher and the resulting error is 5% lower for the test set and 10% for the train set. Thus, the larger the value of β the smaller is the amount of information between input and representation throughout the training, which leads to the higher error (both on training and test set). We can see larger information compression with smaller β and almost no compression with the larger β . We conjecture that information can only be compressed if enough of it is allowed to flow through the network. We repeated experiments with the same convolutional network architecture where we reduced the number of filters in the hidden layers. Fig. 7 (a) and (b) show IPs for the original fullCNN and (c), (d) show the IPs for the network that contains 25% of the filters on each of the layers. This indicates that the ability to compress MI between input and representation can be connected with the overall capacity of a neural network.
I ( X ; Z ) is effective (i.e., larger values of β lead to smaller I ( X ; Z ) ), and that regularizing too strongly ( β = 20 ) leads to worse performance (the error is 5% higher for the test set and 10% for the train set) We can further see stronger compression for smaller β and almost no compression for larger β . We conjecture that compressed can only become visible if sufficient information is permitted to flow through the network (which happens only for small β ). We repeated experiments with the same convolutional network architecture where we reduced the number of filters in the hidden layers. Fig. 7 (c) and (d) show the IPs for a fullCNN that contains 25% of the filters of the original fullCNN on each of the layers. There it can be seen that the smaller network appears not to compress at all, but that I ( X ; Z ) rather increases throughout training until it is at the same level as in Fig.(a). This indicates that β determines to which point in the information plane training converges, and that trajectory that is traversed during training depends on architectural properties and/or the overall capacity of the neural network.
[ { "text": "KL-divergence) term the level of achieved MI with the label is higher and the resulting error is 5% lower for the test set and 10% for the train set. Thus, the larger the value of β the smaller is the amount of information between input and representation throughout the training, which leads to the higher error (both on training and test set)." }, { "text": "We can see larger information compression with smaller β and almost no compression with the larger β ." }, { "text": "We conjecture that information can only be compressed if enough of it is allowed to flow through the network." }, { "text": "We repeated experiments with the same convolutional network architecture where we reduced the number of filters in the hidden layers." }, { "text": "Fig." }, { "text": "7 (a) and (b) show IPs for the original fullCNN and (c), (d) show the IPs for the network that contains 25% of the filters on each of the layers." }, { "text": "" }, { "text": "This indicates that the ability to compress MI between input and representation can be connected with the overall capacity of a neural network." } ]
[ { "text": "I ( X ; Z ) is effective (i.e., larger values of β lead to smaller I ( X ; Z ) ), and that regularizing too strongly ( β = 20 ) leads to worse performance (the error is 5% higher for the test set and 10% for the train set)" }, { "text": "We can further see stronger compression for smaller β and almost no compression for larger β ." }, { "text": "We conjecture that compressed can only become visible if sufficient information is permitted to flow through the network (which happens only for small β )." }, { "text": "We repeated experiments with the same convolutional network architecture where we reduced the number of filters in the hidden layers." }, { "text": "Fig." }, { "text": "7 (c) and (d) show the IPs for a fullCNN that contains 25% of the filters of the original fullCNN on each of the layers." }, { "text": "There it can be seen that the smaller network appears not to compress at all, but that I ( X ; Z ) rather increases throughout training until it is at the same level as in Fig.(a)." }, { "text": "This indicates that β determines to which point in the information plane training converges, and that trajectory that is traversed during training depends on architectural properties and/or the overall capacity of the neural network." } ]
0UOqYDbGj
OjFE1u7VK_
0
0UOqYDbGj.OjFE1u7VK_.00
MoE-NPs application to different meta learning tasks. Detailed computational diagrams in trainingand testingare attached in Appendix (C). For the sake of simplicity, we derive equations w.r.t. a task τ in the following section, but a batch of tasks are considered in training in implementation.
MoE-NPs application to different meta learning tasks. We have attached concepts of variational priors and posteriors and detailed computational diagrams in training and testing in Appendix (C). For the sake of simplicity, we derive equations w.r.t. a task τ in the following section, but a batch of tasks are considered in training in implementation.
[ { "text": "MoE-NPs application to different meta learning tasks." }, { "text": "Detailed computational diagrams in trainingand testingare attached in Appendix (C)." }, { "text": "For the sake of simplicity, we derive equations w.r.t." }, { "text": "a task τ in the following section, but a batch of tasks are considered in training in implementation." } ]
[ { "text": "MoE-NPs application to different meta learning tasks." }, { "text": "We have attached concepts of variational priors and posteriors and detailed computational diagrams in training and testing in Appendix (C)." }, { "text": "For the sake of simplicity, we derive equations w.r.t." }, { "text": "a task τ in the following section, but a batch of tasks are considered in training in implementation." } ]
kh9nHZ00_Q
HARjfYbDKg
0
kh9nHZ00_Q.HARjfYbDKg.00
M-IDPG-PHM performs 2.5pt better than the traditional prompt tuning method and 0.5pt better than the multi-layer P-Tuning v2 method. This improvement illustrates that our method has better generalization in few-shot settings. When K becomes larger, IDPG-PHM still maintains good results with 1.9pt, 0.2pt ( K =500) and 2.0pt, 0.2pt ( K =1000) better accuracy than traditional prompt tuning, P-tuning v2, respectively. We also observe that when K is small, our method sometimes has a high variance (e.g., 4.6 on MPQA when K = 100). We suspect that this may be due to bad initialization that leads the model to non-optimal parameters.
M-IDPG-PHM performs 2.5pt better than the traditional prompt tuning method and 0.5pt better than the multi-layer P-Tuning v2 method. This improvement illustrates that our method has better generalization in few-shot settings. When K becomes larger, IDPG-PHM still maintains good results with 1.9pt and 0.2pt improvement ( K =500); and 2.0pt and 0.2pt improvement ( K =1000) in accuracy with traditional prompt tuning and P-tuning v2 approaches, respectively. We also observe that sometimes when K is small, our method results have high variance (e.g., 4.6 on MPQA, when K = 100). We suspect that this may be due to poor initialization leading the model to non-optimal parameters.
[ { "text": "M-IDPG-PHM performs 2.5pt better than the traditional prompt tuning method and 0.5pt better than the multi-layer P-Tuning v2 method." }, { "text": "This improvement illustrates that our method has better generalization in few-shot settings." }, { "text": "When K becomes larger, IDPG-PHM still maintains good results with 1.9pt, 0.2pt ( K =500) and 2.0pt, 0.2pt ( K =1000) better accuracy than traditional prompt tuning, P-tuning v2, respectively." }, { "text": "We also observe that when K is small, our method sometimes has a high variance (e.g., 4.6 on MPQA" }, { "text": "when K = 100)." }, { "text": "We suspect that this may be due to bad initialization that leads the model to non-optimal parameters." } ]
[ { "text": "M-IDPG-PHM performs 2.5pt better than the traditional prompt tuning method and 0.5pt better than the multi-layer P-Tuning v2 method." }, { "text": "This improvement illustrates that our method has better generalization in few-shot settings." }, { "text": "When K becomes larger, IDPG-PHM still maintains good results with 1.9pt and 0.2pt improvement ( K =500); and 2.0pt and 0.2pt improvement ( K =1000) in accuracy with traditional prompt tuning and P-tuning v2 approaches, respectively." }, { "text": "We also observe that sometimes when K is small, our method results have high variance (e.g., 4.6 on MPQA, when" }, { "text": "K = 100)." }, { "text": "We suspect that this may be due to poor initialization leading the model to non-optimal parameters." } ]
SykOf7WCZ
Skbnpm6XM
0
SykOf7WCZ.Skbnpm6XM.00
Figure 3 shows the results. Training with the vanilla GAN and RegGAN leads to generators that place too much mass on some modes and ignore others, leading to a large Total Variation distance between the generated label distribution and the correct one. EBGAN and WGAN perform slightly worse than
Figure 3 shows the results. Training with the vanilla GAN, RegGAN or GMMN leads to generators that place too much mass on some modes and ignore others, resulting in larger TV distances between the generated label distribution and the uniform one. EBGAN and WGAN perform slightly worse than
[ { "text": "Figure 3 shows the results." }, { "text": "Training with the vanilla GAN and RegGAN leads to generators that place too much mass on some modes and ignore others, leading to a large Total Variation distance between the generated label distribution and the correct one." }, { "text": "EBGAN and WGAN perform slightly worse than" } ]
[ { "text": "Figure 3 shows the results." }, { "text": "Training with the vanilla GAN, RegGAN or GMMN leads to generators that place too much mass on some modes and ignore others, resulting in larger TV distances between the generated label distribution and the uniform one." }, { "text": "EBGAN and WGAN perform slightly worse than" } ]
BBm1BPLM13
F-hEZXMWk
0
BBm1BPLM13.F-hEZXMWk.00
Recent advances in adversarial machine learning have shown that defenses considered to be robust are actually susceptible to adversarial attacks which are specifically tailored to target their weaknesses. These defenses include Barrage of Random Transforms (BaRT), Friendly Adversarial Training (FAT), Trash is Treasure (TiT) and ensemble models made up of Vision Transformers (ViTs), Big Transfer models and Spiking Neural Networks (SNNs). It remains an open question, however, as to whether the adversarial examples designed to target one defense will be similarly misclassified by another defense. In this paper, we provide the first adversarial defense transferability study as well as a game theoretic framework for ensemble adversarial attacks and defenses. Our framework is called Game theoretic Mixed Experts (GaME) and is designed to find the Mixed-Nash strategy for an attacker that can employ compositional adversarial attacks. We show that this framework creates an ensemble of defenses with greater robustness than a combinational defense with a uniform or random probability distribution. Overall, our framework and analyses advance the field of adversarial machine learning by yielding new insights into compositional attack and defense formulations.
Recent advances in adversarial machine learning have shown that defenses considered to be robust are actually susceptible to adversarial attacks which are specifically tailored to target their weaknesses. These defenses include Barrage of Random Transforms (BaRT), Friendly Adversarial Training (FAT), Trash is Treasure (TiT) and ensemble models made up of Vision Transformers (ViTs) Big Transfer models and Spiking Neural Networks (SNNs). A natural question arises: how can one best leverage a combination of adversarial defenses to thwart such attacks? In this paper, we provide a game-theoretic framework for ensemble adversarial attacks and defenses which answers this question. In addition to our framework we produce the first adversarial defense transferability study to further motivate a need for combinational defenses utilizing a diverse set of defense architectures Our framework is called Game theoretic Mixed Experts (GaME) and is designed to find the Mixed-Nash strategy for a defender when facing an attacker employing compositional adversarial attacks. We show that this framework creates an ensemble of defenses with greater robustness than multiple state-of-the-art, singlemodel defenses in addition to combinational defenses with uniform probability distributions. Overall, our framework and analyses advance the field of adversarial machine learning by yielding new insights into compositional attack and defense formulations.
[ { "text": "Recent advances in adversarial machine learning have shown that defenses considered to be robust are actually susceptible to adversarial attacks which are specifically tailored to target their weaknesses." }, { "text": "These defenses include Barrage of Random Transforms (BaRT), Friendly Adversarial Training (FAT), Trash is Treasure (TiT) and ensemble models made up of Vision Transformers (ViTs), Big Transfer models and Spiking Neural Networks (SNNs)." }, { "text": "It remains an open question, however, as to whether the adversarial examples designed to target one defense will be similarly misclassified by another defense." }, { "text": "In this paper, we provide the first adversarial defense transferability study as well as a game theoretic framework for ensemble adversarial attacks and defenses." }, { "text": "Our framework is called Game theoretic Mixed Experts (GaME) and is designed to find the Mixed-Nash strategy for an attacker that can employ compositional adversarial attacks." }, { "text": "We show that this framework creates an ensemble of defenses with greater robustness than a combinational defense with a uniform or random probability distribution." }, { "text": "Overall, our framework and analyses advance the field of adversarial machine learning by yielding new insights into compositional attack and defense formulations." } ]
[ { "text": "Recent advances in adversarial machine learning have shown that defenses considered to be robust are actually susceptible to adversarial attacks which are specifically tailored to target their weaknesses." }, { "text": "These defenses include Barrage of Random Transforms (BaRT), Friendly Adversarial Training (FAT), Trash is Treasure (TiT) and ensemble models made up of Vision Transformers (ViTs) Big Transfer models and Spiking Neural Networks (SNNs)." }, { "text": "A natural question arises: how can one best leverage a combination of adversarial defenses to thwart such attacks?" }, { "text": "In this paper, we provide a game-theoretic framework for ensemble adversarial attacks and defenses which answers this question. In addition to our framework we produce the first adversarial defense transferability study to further motivate a need for combinational defenses utilizing a diverse set of defense architectures" }, { "text": "Our framework is called Game theoretic Mixed Experts (GaME) and is designed to find the Mixed-Nash strategy for a defender when facing an attacker employing compositional adversarial attacks." }, { "text": "We show that this framework creates an ensemble of defenses with greater robustness than multiple state-of-the-art, singlemodel defenses in addition to combinational defenses with uniform probability distributions." }, { "text": "Overall, our framework and analyses advance the field of adversarial machine learning by yielding new insights into compositional attack and defense formulations." } ]
BBm1BPLM13
F-hEZXMWk
1
BBm1BPLM13.F-hEZXMWk.01
These are precisely the questions our paper seeks to answer. We break from the traditional dynamic of adversarial machine learning which focuses on the single best attack and defense. We instead take a multi-faceted approach and develop a game theoretic framework to answer the above questions. Specifically, we provide the following contributions: On the attack side we develop two new white-box attacks called the Momentum Iterative Method over Expectation (MIME) and the Auto Expectation Self-Attention Gradient Attack (AE-SAGA). These attacks are necessary for targeting certain randomized defenses and for adapting to multi-defense strategies. We analyze the adversarial transferability of current defenses like Trash is Treasure Xiao & Zheng (2020), Barrage of Random Transforms Raff et al. Friendly Adversarial Training Zhang et al. (2020) and other new architectures like SNNs Rathi & Roy (2021b); Fang et al. and ViTs Dosovitskiy et al. Lastly, and most importantly, we formulate a practical, game-theoretic framework for finding the optimal strategies foran attacker and defender who each employ a set of state-of-the-art adversarial attacks and defenses.
These are precisely the questions our paper seeks to answer. We break from the traditional dynamic of adversarial machine learning which focuses on the single best attack and defense. We instead take a multi-faceted approach and develop a game theoretic framework to answer the above questions. Specifically, we provide the following contributions: Most importantly, we formulate a practical, game-theoretic framework for finding the optimal strategies for an attacker and defender who each employ a set of state-of-the-art adversarial attacks and defenses. Motivated by this framework, we develop two new white-box attacks called the Momentum Iterative Method over Expectation (MIME) and the Auto Expectation Self-Attention Gradient Attack (AE-SAGA) in order to create a stronger adversary. These attacks are necessary for targeting certain randomized defenses and for adapting to multi-defense strategies. Lastly, we analyze the adversarial transferability of current defenses like Trash is Treasure Xiao & Zheng (2020), Barrage of Random Transforms Raff et al. Friendly Adversarial Training Zhang et al. (2020) and other new architectures like SNNs Rathi & Roy (2021b); Fang et al. and ViTs Dosovitskiy et al. We further leverage the low transferability between these classifiers to find those which are best suited for a combined, ensemble defense such as the one developed in our game-theoretic framework.
[ { "text": "These are precisely the questions our paper seeks to answer." }, { "text": "We break from the traditional dynamic of adversarial machine learning which focuses on the single best attack and defense." }, { "text": "We instead take a multi-faceted approach and develop a game theoretic framework to answer the above questions." }, { "text": "Specifically, we provide the following contributions: " }, { "text": "On the attack side we develop two new white-box attacks called the Momentum Iterative Method over Expectation (MIME) and the Auto Expectation Self-Attention Gradient Attack (AE-SAGA)." }, { "text": "These attacks are necessary for targeting certain randomized defenses and for adapting to multi-defense strategies." }, { "text": "We analyze the adversarial transferability of current defenses like Trash is Treasure Xiao & Zheng (2020), Barrage of Random Transforms Raff et al." }, { "text": "Friendly Adversarial Training Zhang et al." }, { "text": "(2020) and other new architectures like SNNs Rathi & Roy (2021b); Fang et al." }, { "text": "and ViTs Dosovitskiy et al." }, { "text": "Lastly, and most importantly, we formulate a practical, game-theoretic framework for finding the optimal strategies foran attacker and defender who each employ a set of state-of-the-art adversarial attacks and defenses." } ]
[ { "text": "These are precisely the questions our paper seeks to answer." }, { "text": "We break from the traditional dynamic of adversarial machine learning which focuses on the single best attack and defense." }, { "text": "We instead take a multi-faceted approach and develop a game theoretic framework to answer the above questions." }, { "text": "Specifically, we provide the following contributions: Most importantly, we formulate a practical, game-theoretic framework for finding the optimal strategies for an attacker and defender who each employ a set of state-of-the-art adversarial attacks and defenses." }, { "text": "Motivated by this framework, we develop two new white-box attacks called the Momentum Iterative Method over Expectation (MIME) and the Auto Expectation Self-Attention Gradient Attack (AE-SAGA) in order to create a stronger adversary." }, { "text": "These attacks are necessary for targeting certain randomized defenses and for adapting to multi-defense strategies." }, { "text": "Lastly, we analyze the adversarial transferability of current defenses like Trash is Treasure Xiao & Zheng (2020), Barrage of Random Transforms Raff et al." }, { "text": "Friendly Adversarial Training Zhang et al." }, { "text": "(2020) and other new architectures like SNNs Rathi & Roy (2021b); Fang et al." }, { "text": "and ViTs Dosovitskiy et al." }, { "text": "We further leverage the low transferability between these classifiers to find those which are best suited for a combined, ensemble defense such as the one developed in our game-theoretic framework." } ]
BBm1BPLM13
F-hEZXMWk
2
BBm1BPLM13.F-hEZXMWk.02
In this section we derive our framework, Game theoretic Mixed Experts (GaME), for playing the adversarial examples game. In comparison to other works Meunier et al. Pinot et al. Pal & Vidal (2020) we take a more discretized approach to playing and solving the adversarial examples game, which ultimately leads to the creation of a finite, tabular, zero-sum game that can be solved using linear programming techniques. In particular, we leverage the state-of-the-art attacks and defenses established in Sections 3 and 4 as “experts” in the adversarial examples game.
In this section we derive our framework, Game theoretic Mixed Experts (GaME), for approximating a Nash equilibrium in the adversarial examples game. In comparison to other works Meunier et al. Pinot et al. Pal & Vidal (2020) Balcan et al. Le et al. we take a more discretized approach and solve the approximate version of the adversarial examples game. This ultimately leads to the creation of a finite, tabular, zero-sum game that can be solved in polynomial time using linear programming techniques. In relation to our work, a similar adversarial game framework was proposed in Sengupta et al. but did not include comprehensive defender and attacker threat models. Specifically, we develop our framework under an adversary that can employ state-of-the-art single model and multi-model attacks and a defender that can utilize both randomization and voting schemes.
[ { "text": "In this section we derive our framework, Game theoretic Mixed Experts (GaME), for playing the adversarial examples game." }, { "text": "In comparison to other works Meunier et al." }, { "text": "Pinot et al." }, { "text": "" }, { "text": "" }, { "text": "Pal & Vidal (2020) we take a more discretized approach to playing and solving the adversarial examples game, which ultimately leads to the creation of a finite, tabular, zero-sum game that can be solved using linear programming techniques." }, { "text": "In particular, we leverage the state-of-the-art attacks and defenses established in Sections 3 and 4 as “experts” in the adversarial examples game." } ]
[ { "text": "In this section we derive our framework, Game theoretic Mixed Experts (GaME), for approximating a Nash equilibrium in the adversarial examples game." }, { "text": "In comparison to other works Meunier et al." }, { "text": "Pinot et al." }, { "text": "Pal & Vidal (2020)" }, { "text": "Balcan et al." }, { "text": "Le et al. we take a more discretized approach and solve the approximate version of the adversarial examples game. This ultimately leads to the creation of a finite, tabular, zero-sum game that can be solved in polynomial time using linear programming techniques." }, { "text": "In relation to our work, a similar adversarial game framework was proposed in Sengupta et al. but did not include comprehensive defender and attacker threat models. Specifically, we develop our framework under an adversary that can employ state-of-the-art single model and multi-model attacks and a defender that can utilize both randomization and voting schemes." } ]
BBm1BPLM13
F-hEZXMWk
3
BBm1BPLM13.F-hEZXMWk.03
This optimization problem is a linear program, the explicit form of which we provide in the supplemental material. All linear programs have a dual problem, in this case the dual problem finds a mixed Nash strategy for p a . This can be done by changing the problem to a minimization problem and trasposing R . In the interest of space we give the explicit form of the dual problem in the supplemental material. These linear programs can be solved usingweakly polynomial time algorithms like the interior point method Karmarkar (1984), or those developed in Vaidya (1989).
This optimization problem is a linear program, the explicit form of which we provide in the supplemental material. All linear programs have a dual problem, in this case the dual problem finds a mixed Nash strategy for p A . This can be done by changing the problem to a minimization problem and transposing R . In the interest of space we give the explicit form of the dual problem in the supplemental material as well. These linear programs can be solved using polynomial time algorithms.
[ { "text": "This optimization problem is a linear program, the explicit form of which we provide in the supplemental material." }, { "text": "All linear programs have a dual problem, in this case the dual problem finds a mixed Nash strategy for p a ." }, { "text": "This can be done by changing the problem to a minimization problem and trasposing R ." }, { "text": "In the interest of space we give the explicit form of the dual problem in the supplemental material." }, { "text": "These linear programs can be solved usingweakly polynomial time algorithms like the interior point method Karmarkar (1984), or those developed in Vaidya (1989)." } ]
[ { "text": "This optimization problem is a linear program, the explicit form of which we provide in the supplemental material." }, { "text": "All linear programs have a dual problem, in this case the dual problem finds a mixed Nash strategy for p A ." }, { "text": "This can be done by changing the problem to a minimization problem and transposing R ." }, { "text": "In the interest of space we give the explicit form of the dual problem in the supplemental material as well." }, { "text": "These linear programs can be solved using polynomial time algorithms." } ]
BBm1BPLM13
F-hEZXMWk
4
BBm1BPLM13.F-hEZXMWk.04
For our experimental results, we test on two datasets, CIFAR-10 Krizhevsky et al. and TinyImageNet Le & Yang (2015). For CIFAR-10 we solved an instance of GaME 1 using the following defenses: BaRT-1 (B1), BaRT-5 (B5), ResNet-164-FAT (RF), ViT-L 16-FAT (VF), SNN Transfer (ST), Backprop SNN (SB), and TiT using ViT and BiT (BVT). Table 3 shows the Mixed Nash Strategy found for the defender in this instance of GaME 1 . For CIFAR-10 instance r ∗ = .
For our experimental results, we test on two datasets, CIFAR-10 Krizhevsky et al. and TinyImageNet Le & Yang (2015). For CIFAR-10 we solved instances of GaME n using the following defenses: BaRT-1 (B1), BaRT-5 (B5), ResNet-164-FAT (RF), ViT-L-16-FAT (VF), SNN Transfer (ST), Backprop SNN (SB), and TiT using ViT and BiT (BVT). For Tiny ImageNet we solved instances of GaME n utilizing: BaRT-1, BaRT-5, ViT-L-16-FAT, and TiT using ViT and BiT. Explicit details for our experimental setup are given in the supplementary material.
[ { "text": "For our experimental results, we test on two datasets, CIFAR-10 Krizhevsky et al." }, { "text": "and TinyImageNet Le & Yang (2015)." }, { "text": "For CIFAR-10 we solved an instance of GaME 1 using the following defenses: BaRT-1 (B1), BaRT-5 (B5), ResNet-164-FAT (RF), ViT-L 16-FAT (VF), SNN Transfer (ST), Backprop SNN (SB), and TiT using ViT and BiT (BVT)." }, { "text": "Table 3 shows the Mixed Nash Strategy found for the defender in this instance of GaME 1 . For CIFAR-10 instance r ∗ = ." } ]
[ { "text": "For our experimental results, we test on two datasets, CIFAR-10 Krizhevsky et al." }, { "text": "and TinyImageNet Le & Yang (2015)." }, { "text": "For CIFAR-10 we solved instances of GaME n using the following defenses: BaRT-1 (B1), BaRT-5 (B5), ResNet-164-FAT (RF), ViT-L-16-FAT (VF), SNN Transfer (ST), Backprop SNN (SB), and TiT using ViT and BiT (BVT)." }, { "text": "For Tiny ImageNet we solved instances of GaME n utilizing: BaRT-1, BaRT-5, ViT-L-16-FAT, and TiT using ViT and BiT. Explicit details for our experimental setup are given in the supplementary material." } ]
k7XdRlHtjR
QyunsqSUh6
0
k7XdRlHtjR.QyunsqSUh6.00
Self-supervised pre -training in vision. Learning the representation of images from unlabeled data is an increasingly popular direction in computer vision. Mainstream approaches can be roughly categorized into two classes. One class is the contrastive learning approach which maximizes agreement between differently augmented views of an image via a contrastive loss (Chen et al., 2020; He et al., 2020). The other class is the generative learning approach, which randomly masks patches in an image and learns to generate the original one (Bao et al., 2021; He et al., 2022). Recently, there have been attempts to use pre-trained models to achieve certified robustness. The most relevant works are Salman et al. ; Carlini et al. Both works first leverage a pre-trained denoiser to purify the input, and then use a standard classifier to make predictions. We discuss these two works and ours in depth in Sec. 3.
Self-supervised pre -training in vision. Learning the representation of images from unlabeled data is an increasingly popular direction in computer vision. Mainstream approaches can be roughly categorized into two classes. One class is the contrastive learning approach which maximizes agreement between differently augmented views of an image via a contrastive loss (Chen et al., 2020; He et al., 2020). The other class is the generative learning approach, which randomly masks patches in an image and learns to generate the original one (Bao et al., 2021; He et al., 2022). Several works utilized self-supervised pre-training to improve image denoising (Joshua Batson, 2019; Yaochen Xie, 2020), and recently there have been attempts to use pre-trained denoisers to achieve certified robustness. The most relevant works are Salman et al. ; Carlini et al. Both works first leverage a pre-trained denoiser to purify the input, and then use a standard classifier to make predictions. We discuss these two works and ours in depth in Sec. 3.
[ { "text": "Self-supervised pre" }, { "text": "-training in vision." }, { "text": "Learning the representation of images from unlabeled data is an increasingly popular direction in computer vision." }, { "text": "Mainstream approaches can be roughly categorized into two classes." }, { "text": "One class is the contrastive learning approach which maximizes agreement between differently augmented views of an image via a contrastive loss (Chen et al., 2020; He et al., 2020)." }, { "text": "The other class is the generative learning approach, which randomly masks patches in an image and learns to generate the original one (Bao et al., 2021;" }, { "text": "He et al., 2022)." }, { "text": "Recently, there have been attempts to use pre-trained models to achieve certified robustness." }, { "text": "The most relevant works are Salman et al." }, { "text": "; Carlini et al." }, { "text": "Both works first leverage a pre-trained denoiser to purify the input, and then use a standard classifier to make predictions." }, { "text": "We discuss these two works and ours in depth in Sec. 3." } ]
[ { "text": "Self-supervised pre" }, { "text": "-training in vision." }, { "text": "Learning the representation of images from unlabeled data is an increasingly popular direction in computer vision." }, { "text": "Mainstream approaches can be roughly categorized into two classes." }, { "text": "One class is the contrastive learning approach which maximizes agreement between differently augmented views of an image via a contrastive loss (Chen et al., 2020; He et al., 2020)." }, { "text": "The other class is the generative learning approach, which randomly masks patches in an image and learns to generate the original one (Bao et al., 2021;" }, { "text": "He et al., 2022)." }, { "text": "Several works utilized self-supervised pre-training to improve image denoising (Joshua Batson, 2019; Yaochen Xie, 2020), and recently there have been attempts to use pre-trained denoisers to achieve certified robustness." }, { "text": "The most relevant works are Salman et al." }, { "text": "; Carlini et al." }, { "text": "Both works first leverage a pre-trained denoiser to purify the input, and then use a standard classifier to make predictions." }, { "text": "We discuss these two works and ours in depth in Sec. 3." } ]
ByPdC86-z
BJFChVamf
0
ByPdC86-z.BJFChVamf.00
DNA sequence, though the interaction is not yet fully understood. In this paper we develop a convolutional neural network that predicts key determinants of chromatin structure from primary DNA sequence. Our experiments show that the method outperforms several existing methods both in terms of prediction accuracy and training time.
DNA sequence, though the interaction is not yet fully understood. In this paper we develop a convolutional neural network that takes an image-representation of primary DNA sequence as its input, and predicts key determinants of chromatin structure. The method is developed such that it is capable of detecting interactions between distal elements in the DNA sequence, which are known to be highly relevant. Our experiments show that the method outperforms several existing methods both in terms of prediction accuracy and training time.
[ { "text": "DNA sequence, though the interaction is not yet fully understood." }, { "text": "In this paper we develop a convolutional neural network that predicts key determinants of chromatin structure from primary DNA sequence." }, { "text": "" }, { "text": "Our experiments show that the method outperforms several existing methods both in terms of prediction accuracy and training time." } ]
[ { "text": "DNA sequence, though the interaction is not yet fully understood." }, { "text": "In this paper we develop a convolutional neural network that takes an image-representation of primary DNA sequence as its input, and predicts key determinants of chromatin structure." }, { "text": "The method is developed such that it is capable of detecting interactions between distal elements in the DNA sequence, which are known to be highly relevant." }, { "text": "Our experiments show that the method outperforms several existing methods both in terms of prediction accuracy and training time." } ]
ByPdC86-z
BJFChVamf
1
ByPdC86-z.BJFChVamf.01
Second, the model architecture is different: the network in Nguyen et al. (2016) consists of two convolution layers followed by pooling layers, a fully connected layer and a sigmoid layer, while our model architecture is more complicated (see Methods). Residual connections are applied in our network to encourage the reuse of learned features, and the network is far deeper. The main difference is the format of theinput: we use images as input for the network while their model uses sequences. Apart from Elgin (2012), the only example we are aware of where Hilbert curves were used to map DNA sequence into two-dimensional space is from Anders (2009), who demonstrated the power of Hilbert curves for visualizing DNA. Beyond our theoretical considerations, these last two studies suggest there are practical benefits of mapping DNA using Hilbert curves.
First and foremost, the model architecture is different: the network in Nguyen et al. (2016) consists of two convolution layers followed by pooling layers, a fully connected layer and a sigmoid layer, while our model architecture is deeper, uses residual connections to reuse the learned features, and has no fully connected layers (see Methods). Second, while we use a spacefilling curve to transform the sequence data into an image-like tensor, Nguyen et al. (2016) keep the sequential form of the input data. Apart from Elgin (2012), the only example we are aware of where Hilbert curves were used to map DNA sequence into two-dimensional space is from Anders (2009), who demonstrated the power of Hilbert curves for visualizing DNA. Beyond our theoretical considerations, these last two studies suggest there are practical benefits of mapping DNA using Hilbert curves.
[ { "text": "Second, the model architecture is different: the network in Nguyen et al." }, { "text": "(2016) consists of two convolution layers followed by pooling layers, a fully connected layer and a sigmoid layer, while our model architecture is more complicated (see Methods). Residual connections are applied in our network to encourage the reuse of learned features, and the network is far deeper." }, { "text": "The main difference is the format of theinput: we use images as input for the network while their model uses sequences." }, { "text": "Apart from Elgin (2012), the only example we are aware of where Hilbert curves were used to map DNA sequence into two-dimensional space is from Anders (2009), who demonstrated the power of Hilbert curves for visualizing DNA." }, { "text": "Beyond our theoretical considerations, these last two studies suggest there are practical benefits of mapping DNA using Hilbert curves." } ]
[ { "text": "First and foremost, the model architecture is different: the network in Nguyen et al." }, { "text": "(2016) consists of two convolution layers followed by pooling layers, a fully connected layer and a sigmoid layer, while our model architecture is deeper, uses residual connections to reuse the learned features, and has no fully connected layers (see Methods)." }, { "text": "Second, while we use a spacefilling curve to transform the sequence data into an image-like tensor, Nguyen et al. (2016) keep the sequential form of the input data." }, { "text": "Apart from Elgin (2012), the only example we are aware of where Hilbert curves were used to map DNA sequence into two-dimensional space is from Anders (2009), who demonstrated the power of Hilbert curves for visualizing DNA." }, { "text": "Beyond our theoretical considerations, these last two studies suggest there are practical benefits of mapping DNA using Hilbert curves." } ]
ByPdC86-z
BJFChVamf
2
ByPdC86-z.BJFChVamf.02
Our contributions are twofold. First, we predict chromatin state using a CNN that, in terms of architecture, resembles conventional CNNs for image classification. Second, we propose a method to transform DNA sequence patches into two-dimensional image-like arrays to enhance the strengths of CNNs using space-filling curves, in particular the Hilbert curve. Our experiments demonstrate the benefits of our approach: the developed CNN outperforms all existing approaches for predicting the chromatin state in terms of both prediction accuracy and runtime, an improvement which is further enhanced by the convolution of DNA sequence to a 2D image. In summary, we present a novel, powerful way to harness the power of CNNs in image classification for predicting biologically relevant features from primary DNA sequence.
Our contributions are twofold. First, we predict chromatin state using a CNN that, in terms of architecture, resembles conventional CNNs for image classification and is designed for detecting distal relations. Second, we propose a method to transform DNA sequence patches into two-dimensional image-like arrays to enhance the strengths of CNNs using space-filling curves, in particular the Hilbert curve. Our experiments demonstrate the benefits of our approach: the developed CNN decisively outperforms all existing approaches for predicting the chromatin state in terms of prediction performance measures as well as runtime, an improvement which is further enhanced by the convolution of DNA sequence to a 2D image. In summary, we present a novel, powerful way to harness the power of CNNs in image classification for predicting biologically relevant features from primary DNA sequence.
[ { "text": "Our contributions are twofold." }, { "text": "First, we predict chromatin state using a CNN that, in terms of architecture, resembles conventional CNNs for image classification." }, { "text": "Second, we propose a method to transform DNA sequence patches into two-dimensional image-like arrays to enhance the strengths of CNNs using space-filling curves, in particular the Hilbert curve." }, { "text": "Our experiments demonstrate the benefits of our approach: the developed CNN outperforms all existing approaches for predicting the chromatin state in terms of both prediction accuracy and runtime, an improvement which is further enhanced by the convolution of DNA sequence to a 2D image." }, { "text": "In summary, we present a novel, powerful way to harness the power of CNNs in image classification for predicting biologically relevant features from primary DNA sequence." } ]
[ { "text": "Our contributions are twofold." }, { "text": "First, we predict chromatin state using a CNN that, in terms of architecture, resembles conventional CNNs for image classification and is designed for detecting distal relations." }, { "text": "Second, we propose a method to transform DNA sequence patches into two-dimensional image-like arrays to enhance the strengths of CNNs using space-filling curves, in particular the Hilbert curve." }, { "text": "Our experiments demonstrate the benefits of our approach: the developed CNN decisively outperforms all existing approaches for predicting the chromatin state in terms of prediction performance measures as well as runtime, an improvement which is further enhanced by the convolution of DNA sequence to a 2D image." }, { "text": "In summary, we present a novel, powerful way to harness the power of CNNs in image classification for predicting biologically relevant features from primary DNA sequence." } ]
H1zBhHYjH
rkwnW0isB
0
H1zBhHYjH.rkwnW0isB.00
Our Results We show that a heavy hitter oracle can greatly improve the complexity of a wide array of commonly studied problems in the data stream model, leading to the first optimal bounds for several important problems, and shattering lower bounds that have stood in the way of making further progress on important problems. Our algorithms not only give practically better, theoreticallygrounded improvements to these problems with an oracle, they also shed light on what exactly the difficulties are of making further progress on data stream problems without an oracle. We consider both perfect oracles and oracles that may sometimes make mistakes.
Our Results We show that a heavy hitter oracle can greatly improve the complexity of a wide array of commonly studied problems in the data stream model, leading to the first optimal bounds for several important problems, and shattering lower bounds that have stood in the way of making further progress on important problems. We note that not only do we give new algorithms, we also give several lower bounds for algorithms equipped with the oracle, that show optimality of our algorithms even with an oracle. Our algorithms not only give practically better, theoreticallygrounded improvements to these problems with an oracle, they also shed light on what exactly the difficulties are of making further progress on data stream problems without an oracle. We consider both perfect oracles and oracles that may sometimes make mistakes.
[ { "text": "Our Results We show that a heavy hitter oracle can greatly improve the complexity of a wide array of commonly studied problems in the data stream model, leading to the first optimal bounds for several important problems, and shattering lower bounds that have stood in the way of making further progress on important problems." }, { "text": "" }, { "text": "Our algorithms not only give practically better, theoreticallygrounded improvements to these problems with an oracle, they also shed light on what exactly the difficulties are of making further progress on data stream problems without an oracle." }, { "text": "We consider both perfect oracles and oracles that may sometimes make mistakes." } ]
[ { "text": "Our Results We show that a heavy hitter oracle can greatly improve the complexity of a wide array of commonly studied problems in the data stream model, leading to the first optimal bounds for several important problems, and shattering lower bounds that have stood in the way of making further progress on important problems." }, { "text": "We note that not only do we give new algorithms, we also give several lower bounds for algorithms equipped with the oracle, that show optimality of our algorithms even with an oracle." }, { "text": "Our algorithms not only give practically better, theoreticallygrounded improvements to these problems with an oracle, they also shed light on what exactly the difficulties are of making further progress on data stream problems without an oracle." }, { "text": "We consider both perfect oracles and oracles that may sometimes make mistakes." } ]
9GKjcdYYiw
roS0JdYXMT
0
9GKjcdYYiw.roS0JdYXMT.00
This was first seen in Wu and Goodman (2018), as the MVAE model with f defined as a product of experts (P O E), i.e. q Φ ( z | x , y ) = q φ x ( z | x ) q φ y ( z | y ) p ( z ) , allowing for cross-modality generation without extra modelling components. Particularly, the MVAE was constructed to cater to multimodal settings where data was not guaranteed to be organised into related sets, and where additional modalities were taken to be, in terms of information content, subsets of a primary data source—such as images and their class labels.
This was first seen in Wu and Goodman (2018), as the MVAE model with f defined as a product of experts (P O E), i.e. q Φ ( z | x , y ) = q φ x ( z | x ) q φ y ( z | y ) p ( z ) , allowing for cross-modality generation without extra modelling components. Particularly, the MVAE caters to settings where data was not guaranteed to be always related, and where additional modalities were, in terms of information content, subsets of a primary data source—such as images and their class labels.
[ { "text": "This was first seen in Wu and Goodman (2018), as the MVAE model with f defined as a product of experts (P O E), i.e. q Φ ( z | x , y )" }, { "text": "=" }, { "text": "q φ x ( z | x ) q φ y ( z | y ) p ( z ) , allowing for cross-modality generation without extra modelling components." }, { "text": "Particularly, the MVAE was constructed to cater to multimodal settings where data was not guaranteed to be organised into related sets, and where additional modalities were taken to be, in terms of information content, subsets of a primary data source—such as images and their class labels." } ]
[ { "text": "This was first seen in Wu and Goodman (2018), as the MVAE model with f defined as a product of experts (P O E), i.e. q Φ ( z | x , y )" }, { "text": "=" }, { "text": "q φ x ( z | x ) q φ y ( z | y ) p ( z ) , allowing for cross-modality generation without extra modelling components." }, { "text": "Particularly, the MVAE caters to settings where data was not guaranteed to be always related, and where additional modalities were, in terms of information content, subsets of a primary data source—such as images and their class labels." } ]
1oTXeWE60
tjALvt9x5x
0
1oTXeWE60.tjALvt9x5x.00
Neural network storage, inference, and training are computationally intensive due to the massive parameter size of neural networks. Therefore, developing a compression algorithm for machine learning models is necessary. Model quantization, based on the robustness of computational noise, is one of the most important compression techniques. The computational noise robustness measures the algorithm’s performance when noise is added during the computation process. The primary sources of noise are truncation and data type conversion mistakes. In the quantization process, the initial high-precision data type used for a model’s parameters is replaced with a lower-precision data type during model quantization. It is typical to replace FP32 with FP16, and both PyTorch and TensorFlow have quantization techniques that translate floats to integers. Various quantization techniques share the same theoretical foundation, which is the substitution of approximation data for the original data in the storage and inference processes. A lower-precision data format requires less memory, and using lower-precision data requires fewer computer resources and less time. In quantization, the precision loss in different quantization level conversions and data type conversions is the source of the noise.
Neural network storage, inference, and training are computationally intensive due to the massive parameter sizes of neural networks. Therefore, developing a compression algorithm for machine learning models is necessary. Model quantization, based on the robustness of computational noise, is one of the most important compression techniques. The primary sources of noise are truncation and data type conversion errors. In the quantization process, the initial high-precision data type used for a model’s parameters is replaced with a lower-precision data type. Both PyTorch and TensorFlow have quantization techniques that translate floats to integers. Various quantization techniques share the same theoretical foundation, which is the substitution of approximation data for the original data in the storage and inference processes. A lower-precision data format requires less memory, and using lower-precision data requires fewer computer resources and less time. In quantization, the precision loss in different quantization level conversions and data type conversions is the source of the noise.
[ { "text": "Neural network storage, inference, and training are computationally intensive due to the massive parameter size of neural networks." }, { "text": "Therefore, developing a compression algorithm for machine learning models is necessary." }, { "text": "Model quantization, based on the robustness of computational noise, is one of the most important compression techniques." }, { "text": "The computational noise robustness measures the algorithm’s performance when noise is added during the computation process." }, { "text": "The primary sources of noise are truncation and data type conversion mistakes." }, { "text": "In the quantization process, the initial high-precision data type used for a model’s parameters is replaced with a lower-precision data type during model quantization." }, { "text": "It is typical to replace FP32 with FP16, and both PyTorch and TensorFlow have quantization techniques that translate floats to integers." }, { "text": "Various quantization techniques share the same theoretical foundation, which is the substitution of approximation data for the original data in the storage and inference processes." }, { "text": "A lower-precision data format requires less memory, and using lower-precision data requires fewer computer resources and less time." }, { "text": "In quantization, the precision loss in different quantization level conversions and data type conversions is the source of the noise." } ]
[ { "text": "Neural network storage, inference, and training are computationally intensive due to the massive parameter sizes of neural networks." }, { "text": "Therefore, developing a compression algorithm for machine learning models is necessary." }, { "text": "Model quantization, based on the robustness of computational noise, is one of the most important compression techniques." }, { "text": "" }, { "text": "The primary sources of noise are truncation and data type conversion errors." }, { "text": "In the quantization process, the initial high-precision data type used for a model’s parameters is replaced with a lower-precision data type." }, { "text": "Both PyTorch and TensorFlow have quantization techniques that translate floats to integers." }, { "text": "Various quantization techniques share the same theoretical foundation, which is the substitution of approximation data for the original data in the storage and inference processes." }, { "text": "A lower-precision data format requires less memory, and using lower-precision data requires fewer computer resources and less time." }, { "text": "In quantization, the precision loss in different quantization level conversions and data type conversions is the source of the noise." } ]
1oTXeWE60
tjALvt9x5x
1
1oTXeWE60.tjALvt9x5x.01
The model quantization problem setting for inference processes will be established. Moreover, by focusing on layerwise post-training static model quantization, we present a method for acquiring a quantized model with a lower loss than the model with full precision. In addition, based on our analysis, we also prove that it is the nature of thecomputational noise robustness to be strong for neural networks that mainly consist of massive identity mapping, like ResNet or DenseNet.
Furthermore, we present a method for acquiring a quantized model with a lower loss than the model with full precision by using the floor and ceiling functions in different layers, with a focus on layerwise post-training static model quantization. As an added benefit in algorithm analysis, we give the theoretical result to answer the question that which types of models are stable in the quantization process and why when the noise introduced by quantization process can be covered by the neighborhood concept.
[ { "text": "The model quantization problem setting for inference processes will be established. Moreover, by focusing on layerwise post-training static model quantization, we present a method for acquiring a quantized model with a lower loss than the model with full precision. In addition, based on our analysis, we also prove that it is the nature of thecomputational noise robustness to be strong for neural networks that mainly consist of massive identity mapping, like ResNet or DenseNet." } ]
[ { "text": "Furthermore, we present a method for acquiring a quantized model with a lower loss than the model with full precision by using the floor and ceiling functions in different layers, with a focus on layerwise post-training static model quantization. As an added benefit in algorithm analysis, we give the theoretical result to answer the question that which types of models are stable in the quantization process and why when the noise introduced by quantization process can be covered by the neighborhood concept." } ]
Bk1n3K9KQ
BJEWgxtRX
0
Bk1n3K9KQ.BJEWgxtRX.00
EMA converges to limit cycles around the equilibrium with vanishing amplitude as the discount parameter approaches one. We establish experimentally that both techniques are strikingly effective in the non convex-concave GAN setting as well. Both improve inception and FID scores on different architectures and for different GAN objectives. We provide comprehensive experimental results across a range of datasets – mixture of Gaussians, CIFAR-10, STL-10, CelebA and ImageNet – to demonstrate its effectiveness. We achieve state-of-the-art results on CIFAR-10 and produce clean CelebA face images.
EMA converges to limit cycles around the equilibrium with vanishing amplitude as the discount parameter approaches one for simple bilinear games and moreover enhance the stability of general GAN training. We establish experimentally that both techniques are strikingly effective in the non convex-concave GAN setting as well. Both improve inception and FID scores on different architectures and for different GAN objectives. We provide comprehensive experimental results across a range of datasets – mixture of Gaussians, CIFAR-10, STL-10, CelebA and ImageNet – to demonstrate its effectiveness. We achieve state-of-the-art results on CIFAR-10 and produce clean CelebA face images.
[ { "text": "EMA converges to limit cycles around the equilibrium with vanishing amplitude as the discount parameter approaches one." }, { "text": "We establish experimentally that both techniques are strikingly effective in the non convex-concave GAN setting as well." }, { "text": "Both improve inception and FID scores on different architectures and for different GAN objectives." }, { "text": "We provide comprehensive experimental results across a range of datasets – mixture of Gaussians, CIFAR-10, STL-10, CelebA and ImageNet – to demonstrate its effectiveness." }, { "text": "We achieve state-of-the-art results on CIFAR-10 and produce clean CelebA face images." } ]
[ { "text": "EMA converges to limit cycles around the equilibrium with vanishing amplitude as the discount parameter approaches one for simple bilinear games and moreover enhance the stability of general GAN training." }, { "text": "We establish experimentally that both techniques are strikingly effective in the non convex-concave GAN setting as well." }, { "text": "Both improve inception and FID scores on different architectures and for different GAN objectives." }, { "text": "We provide comprehensive experimental results across a range of datasets – mixture of Gaussians, CIFAR-10, STL-10, CelebA and ImageNet – to demonstrate its effectiveness." }, { "text": "We achieve state-of-the-art results on CIFAR-10 and produce clean CelebA face images." } ]
End of preview.