The flop_sp_efficiency metric in nvprof was used to generate the plot. One GPU was used for this experiment.

Smoothness criteria and a smaller number of interpolation nodes $r$ improve speed.
[Figure: figures/number_sorting_bigger/number_sorting_1000.pdf — (45,48) marked OOM]
In the iWildCam dataset, which contains camera-trap images of animal species, the domains correspond to the different camera traps which captured the images. (b) We relate training and test domains as draws from the same underlying (and often unknown) meta-distribution over domains $\bbQ$. (c) We consider a predictor's estimated risk distribution over training domains, naturally induced by $\bbQ$. By minimizing the $\alpha$-quantile of this distribution, we learn predictors that perform well with high probability ($\approx \alpha$) rather than on average or in the worst case.
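The $\alpha$-quantile objective in (c) can be sketched in a few lines of numpy. This is a minimal illustration, not the paper's training procedure: the helper name `alpha_quantile_risk` and the per-domain risk values are assumptions made here for the example.

```python
import numpy as np

def alpha_quantile_risk(domain_risks, alpha):
    """Empirical alpha-quantile of a predictor's per-domain risk distribution.

    domain_risks: estimated risk of one predictor on each training domain.
    Minimizing this quantity (rather than the mean or the max) targets
    predictors that do well with probability ~alpha over domains drawn
    from the meta-distribution.
    """
    return float(np.quantile(np.asarray(domain_risks), alpha))

# Illustrative risks of one predictor across 5 training domains.
risks = [0.12, 0.30, 0.08, 0.55, 0.21]
median_risk = alpha_quantile_risk(risks, 0.5)   # average-case flavour
tail_risk = alpha_quantile_risk(risks, 0.9)     # closer to worst case
```

Sweeping $\alpha$ from 0.5 toward 1 interpolates between average-case and worst-case behaviour, which is the knob the caption describes.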
|  | Initial state | Update rule | Output rule | Cost |
|---|---|---|---|---|
| Naive RNN | $s_0 = \mathrm{vector}()$ | $s_t = \sigma(\theta_{ss} s_{t-1} + \theta_{sx} x_t)$ | $z_t = \theta_{zs} s_t + \theta_{zx} x_t$ | $O(1)$ |
| Self-attention | $s_0 = \mathrm{list}()$ | $s_t = s_{t-1}.\mathrm{append}(k_t, v_t)$ | $z_t = V_t\,\mathrm{softmax}(K_t^\top q_t)$ | $O(t)$ |
| Naive TTT | $W_0 = f.\mathrm{params}()$ | $W_t = W_{t-1} - \eta \nabla \ell(W_{t-1}; x_t)$ | $z_t = f(x_t; W_t)$ | $O(1)$ |
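The three update rules in the table can be sketched side by side in numpy. Everything concrete below is an illustrative assumption rather than the actual models: the dimension `d`, the choice of tanh for $\sigma$, and the reconstruction loss $\ell(W; x) = \|Wx - x\|^2$ used for the TTT gradient step.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 4  # toy feature dimension (assumption for this sketch)

# --- Naive RNN: fixed-size state, O(1) per step ---
theta_ss, theta_sx = 0.1 * rng.normal(size=(d, d)), 0.1 * rng.normal(size=(d, d))
theta_zs, theta_zx = 0.1 * rng.normal(size=(d, d)), 0.1 * rng.normal(size=(d, d))

def rnn_step(s, x):
    s = np.tanh(theta_ss @ s + theta_sx @ x)   # s_t = sigma(theta_ss s_{t-1} + theta_sx x_t)
    z = theta_zs @ s + theta_zx @ x            # z_t = theta_zs s_t + theta_zx x_t
    return s, z

# --- Self-attention: the state is a growing list, O(t) per step ---
K, V = [], []                                  # s_0 = list()

def attn_step(q, k, v):
    K.append(k); V.append(v)                   # s_t = s_{t-1}.append(k_t, v_t)
    Kt, Vt = np.stack(K), np.stack(V)
    a = np.exp(Kt @ q); a /= a.sum()           # softmax(K_t^T q_t)
    return Vt.T @ a                            # z_t = V_t softmax(K_t^T q_t)

# --- Naive TTT: the state is model weights W, updated by one gradient step ---
eta = 0.1

def ttt_step(W, x):
    # Illustrative self-supervised loss l(W; x) = ||W x - x||^2, with
    # gradient 2 (W x - x) x^T; f(x; W) = W x is the toy model here.
    grad = 2.0 * np.outer(W @ x - x, x)        # grad of l(W_{t-1}; x_t)
    W = W - eta * grad                         # W_t = W_{t-1} - eta * grad
    return W, W @ x                            # z_t = f(x_t; W_t)
```

The contrast the table draws is visible in the code: the RNN and TTT states (`s`, `W`) stay the same size at every step, while self-attention's lists `K`, `V` grow with `t`, so its per-step cost does too.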