Columns: question (string, lengths 6–3.53k), text (string, lengths 17–2.05k), source (string, 1 class).

| question | text | source |
|---|---|---|
(Infinite Data) Assume that your training data $\mathcal{S}=\left\{\left(\mathbf{x}_{n}, y_{n}\right)\right\}$ is iid and comes from a fixed distribution $\mathcal{D}$ that is unknown but is known to have bounded support. Assume that your family of models contains a finite number of elements and that you choose the bes... | In statistical learning models, the training sample ( x i , y i ) {\displaystyle (x_{i},y_{i})} are assumed to have been drawn from the true distribution p ( x , y ) {\displaystyle p(x,y)} and the objective is to minimize the expected "risk" I = E = ∫ V ( f ( x ) , y ) d p ( x , y ) . {\displaystyle I=\mathbb {E} =\i... | https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus |
(Infinite Data) Assume that your training data $\mathcal{S}=\left\{\left(\mathbf{x}_{n}, y_{n}\right)\right\}$ is iid and comes from a fixed distribution $\mathcal{D}$ that is unknown but is known to have bounded support. Assume that your family of models contains a finite number of elements and that you choose the bes... | However, intrinsic constraints (whether physical, theoretical, computational, etc.) will always play a limiting role. The limiting case where only a finite number of data points are selected over a broad sample space may result in improved precision and lower variance overall, but may also result in an overreliance on ... | https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus |
The purpose of this first exercise part is to ensure that the predictions produced by minimizing the true $\phi$-risk are optimal. As for the $0-1$ loss, it can be shown that the true $\phi$-risk is minimized at a predictor $g^\star:\mathcal X \to \R$ satisfying for all $\xv\in\mathcal X$:
Let $b: \R \to \R$ be a f... | Fix a loss function L: Y × Y → R ≥ 0 {\displaystyle {\mathcal {L}}\colon Y\times Y\to \mathbb {R} _{\geq 0}} , for example, the square loss L ( y , y ′ ) = ( y − y ′ ) 2 {\displaystyle {\mathcal {L}}(y,y')=(y-y')^{2}} , where h ( x ) = y ′ {\displaystyle h(x)=y'} . For a given distribution ρ {\displaystyle \rho } on X ... | https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus |
Consider the following matrix-factorization problem. For the observed ratings $r_{u m}$ for a given pair $(u, m)$ of a user $u$ and a movie $m$, one typically tries to estimate the score by $$ f_{u m}=\left\langle\mathbf{v}_{u}, \mathbf{w}_{m}\right\rangle+b_{u}+b_{m} $$ Here $\mathbf{v}_{u}$ and $\mathbf{w}_{m}$ are v... | After the most like-minded users are found, their corresponding ratings are aggregated to identify the set of items to be recommended to the target user. The most important disadvantage of taking context into recommendation model is to be able to deal with larger dataset that contains much more missing values in compar... | https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus |
Consider the following matrix-factorization problem. For the observed ratings $r_{u m}$ for a given pair $(u, m)$ of a user $u$ and a movie $m$, one typically tries to estimate the score by $$ f_{u m}=\left\langle\mathbf{v}_{u}, \mathbf{w}_{m}\right\rangle+b_{u}+b_{m} $$ Here $\mathbf{v}_{u}$ and $\mathbf{w}_{m}$ are v... | Specifically, the predicted rating user u will give to item i is computed as: r ~ u i = ∑ f = 0 n f a c t o r s H u , f W f , i {\displaystyle {\tilde {r}}_{ui}=\sum _{f=0}^{nfactors}H_{u,f}W_{f,i}} It is possible to tune the expressive power of the model by changing the number of latent factors. It has been demonstrat... | https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus |
Consider a binary classification problem with classifier $f(\mathbf{x})$ given by $$ f(\mathbf{x})= \begin{cases}1, & g(\mathbf{x}) \geq 0 \\ -1, & g(\mathbf{x})<0\end{cases} $$ and $\mathbf{x} \in \mathbb{R}^{6}$. Consider a specific pair $(\mathbf{x}, y=1)$ and assume that $g(\mathbf{x})=8$. In particular this means ... | This addresses the question whether there is a systematic way to find a positive number β ( x , p ) {\displaystyle \beta (\mathbf {x} ,\mathbf {p} )} - depending on the function f, the point x {\displaystyle \mathbf {x} } and the descent direction p {\displaystyle \mathbf {p} } - so that all learning rates α ≤ β ( x , ... | https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus |
Consider a binary classification problem with classifier $f(\mathbf{x})$ given by $$ f(\mathbf{x})= \begin{cases}1, & g(\mathbf{x}) \geq 0 \\ -1, & g(\mathbf{x})<0\end{cases} $$ and $\mathbf{x} \in \mathbb{R}^{6}$. Consider a specific pair $(\mathbf{x}, y=1)$ and assume that $g(\mathbf{x})=8$. In particular this means ... | Often f is a threshold function, which maps all values of w → ⋅ x → {\displaystyle {\vec {w}}\cdot {\vec {x}}} above a certain threshold to the first class and all other values to the second class; e.g., f ( x ) = { 1 if w T ⋅ x > θ , 0 otherwise {\displaystyle f(\mathbf {x} )={\begin{cases}1&{\text{if }}\ \mathbf {w} ... | https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus |
We are given a data set $S=\left\{\left(\boldsymbol{x}_{n}, y_{n}\right)\right\}$ for a binary classification task where $\boldsymbol{x}_{n}$ in $\mathbb{R}^{D}$. We want to use a nearest-neighbor classifier. In which of the following situations do we have a reasonable chance of success with this approach? [Ignore the i... | The most intuitive nearest neighbour type classifier is the one nearest neighbour classifier that assigns a point x to the class of its closest neighbour in the feature space, that is C n 1 n n ( x ) = Y ( 1 ) {\displaystyle C_{n}^{1nn}(x)=Y_{(1)}} . As the size of training data set approaches infinity, the one nearest... | https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus |
We are given a data set $S=\left\{\left(\boldsymbol{x}_{n}, y_{n}\right)\right\}$ for a binary classification task where $\boldsymbol{x}_{n}$ in $\mathbb{R}^{D}$. We want to use a nearest-neighbor classifier. In which of the following situations do we have a reasonable chance of success with this approach? [Ignore the i... | The k-nearest neighbour classifier can be viewed as assigning the k nearest neighbours a weight 1 / k {\displaystyle 1/k} and all others 0 weight. This can be generalised to weighted nearest neighbour classifiers. That is, where the ith nearest neighbour is assigned a weight w n i {\displaystyle w_{ni}} , with ∑ i = 1 ... | https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus |
Consider a linear model $\hat{y} = \xv^\top \wv$ with the squared loss under an $\ell_\infty$-bounded adversarial perturbation. For a single point $(\xv, y)$, it corresponds to the following objective:
\begin{align}
\max_{\tilde{\xv}:\ \|\xv-\tilde{\xv}\|_\infty\leq \epsilon} \left(y... | and simply write ℓ ( θ ∣ X , Y ) = ∑ i = 1 m ( y i θ ′ x i − e θ ′ x i ) . {\displaystyle \ell (\theta \mid X,Y)=\sum _{i=1}^{m}\left(y_{i}\theta 'x_{i}-e^{\theta 'x_{i}}\right).} To find a maximum, we need to solve an equation ∂ ℓ ( θ ∣ X , Y ) ∂ θ = 0 {\displaystyle {\frac {\partial \ell (\theta \mid X,Y)}{\partial \... | https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus |
Consider a linear model $\hat{y} = \xv^\top \wv$ with the squared loss under an $\ell_\infty$-bounded adversarial perturbation. For a single point $(\xv, y)$, it corresponds to the following objective:
\begin{align}
\max_{\tilde{\xv}:\ \|\xv-\tilde{\xv}\|_\infty\leq \epsilon} \left(y... | The subproblem considers the suggested solution y ¯ {\displaystyle \mathbf {\bar {y}} } to the master problem and solves the inner maximization problem from the minimax formulation. The inner problem is formulated using the dual representation maximize ( b − B y ¯ ) T u + d T y ¯ subject to A T u ≤ c u ≥ 0 {\displaysty... | https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus |
In the following let $\kappa_{1}\left(\mathbf{x}, \mathbf{x}^{\prime}\right)$ and $\kappa_{2}\left(\mathbf{x}, \mathbf{x}^{\prime}\right)$ be two valid kernels. Show that the following is a valid kernel: $\kappa\left(\mathbf{x}, \mathbf{x}^{\prime}\right)=a \kappa_{1}\left(\mathbf{x}, \mathbf{x}^{\prime}\right)+b \... | Let κ 1 {\displaystyle \kappa ^{1}} be a s-finite kernel from S {\displaystyle S} to T {\displaystyle T} and κ 2 {\displaystyle \kappa ^{2}} a s-finite kernel from S × T {\displaystyle S\times T} to U {\displaystyle U} . Then the composition κ 1 ⋅ κ 2 {\displaystyle \kappa ^{1}\cdot \kappa ^{2}} of the two kernels is d... | https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus |
In the following let $\kappa_{1}\left(\mathbf{x}, \mathbf{x}^{\prime}\right)$ and $\kappa_{2}\left(\mathbf{x}, \mathbf{x}^{\prime}\right)$ be two valid kernels. Show that the following is a valid kernel: $\kappa\left(\mathbf{x}, \mathbf{x}^{\prime}\right)=a \kappa_{1}\left(\mathbf{x}, \mathbf{x}^{\prime}\right)+b \... | For N {\displaystyle N} even, we define the Dirichlet kernel as D ( x , N ) = 1 N + 1 N cos 1 2 N x + 2 N ∑ k = 1 ( N − 1 ) / 2 cos ( k x ) = sin 1 2 N x N tan 1 2 x . {\displaystyle D(x,N)={\frac {1}{N}}+{\frac {1}{N}}\cos {\tfrac {1}{2}}Nx+{\frac {2}{N}}\sum _{k=1}^{(N-1)/2}\cos(kx)={\frac {\sin {\tfrac {1}{2... | https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus |
Let $\mathbf{A}, \mathbf{B} \in \mathbb{R}^{n \times n}$ be two symmetric matrices. Assume that $\mathbf{v} \in \mathbb{R}^{n}$ is an eigenvector for both matrices with associated eigenvalues $\lambda_{A}$ and $\lambda_{B}$ respectively. Show that $\mathbf{v}$ is an eigenvector of the matrix $\mathbf{A}+\mathbf{B}$. Wh... | The matrix A = ( 3 2 0 2 0 0 1 0 2 ) {\displaystyle A={\begin{pmatrix}3&2&0\\2&0&0\\1&0&2\end{pmatrix}}} has eigenvalues and corresponding eigenvectors λ 1 = − 1 , b 1 = ( − 3 , 6 , 1 ) , {\displaystyle \lambda _{1}=-1,\quad \,\mathbf {b} _{1}=\left(-3,6,1\right),} λ 2 = 2 , b 2 = ( 0 , 0 , 1 ) , {\displaystyle \lambda... | https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus |
Let $\mathbf{A}, \mathbf{B} \in \mathbb{R}^{n \times n}$ be two symmetric matrices. Assume that $\mathbf{v} \in \mathbb{R}^{n}$ is an eigenvector for both matrices with associated eigenvalues $\lambda_{A}$ and $\lambda_{B}$ respectively. Show that $\mathbf{v}$ is an eigenvector of the matrix $\mathbf{A}+\mathbf{B}$. Wh... | Given an n × n square matrix A of real or complex numbers, an eigenvalue λ and its associated generalized eigenvector v are a pair obeying the relation ( A − λ I ) k v = 0 , {\displaystyle \left(A-\lambda I\right)^{k}{\mathbf {v} }=0,} where v is a nonzero n × 1 column vector, I is the n × n identity matrix, k is a pos... | https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus |
You are given your $D \times N$ data matrix $\boldsymbol{X}$, where $D$ represents the dimension of the input space and $N$ is the number of samples. We discussed in the course the singular value decomposition (SVD). Recall that the SVD is not invariant to scaling and that empirically it is a good idea to remove the me... | They form two sets of orthonormal bases u1, ..., um and v1, ..., vn , and if they are sorted so that the singular values σ i {\displaystyle \ \sigma _{i}\ } with value zero are all in the highest-numbered columns (or rows), the singular value decomposition can be written as M = ∑ i = 1 r σ i u i v i ∗ , {\displaystyle ... | https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus |
You are given your $D \times N$ data matrix $\boldsymbol{X}$, where $D$ represents the dimension of the input space and $N$ is the number of samples. We discussed in the course the singular value decomposition (SVD). Recall that the SVD is not invariant to scaling and that empirically it is a good idea to remove the me... | Let X denote the d × n {\displaystyle d\times n} data matrix with column x i {\displaystyle x_{i}} as the image vector with mean subtracted. Then, c o v a r i a n c e ( X ) = X X T n {\displaystyle \mathrm {covariance} (X)={\frac {XX^{T}}{n}}} Let the singular value decomposition (SVD) of X be: X = U Σ V T {\displaysty... | https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus |
Let us assume that a kernel $K: \mathcal{X} \times \mathcal{X} \rightarrow \mathbb{R}$ is said to be valid if there exists $k \in \mathbb{N}$ and $\Phi: \mathcal{X} \rightarrow \mathbb{R}^{k}$ such that for all $\left(x, x^{\prime}\right) \in \mathcal{X} \times \mathcal{X}, K\left(x, x^{\prime}\right)=\Phi(x)^{\top} \P... | kernels, then both and are p.d. kernels on X = X 1 × ⋯ × X n {\displaystyle {\mathcal {X}}={\mathcal {X}}_{1}\times \dots \times {\mathcal {X}}_{n}} . Let X 0 ⊂ X {\displaystyle {\mathcal {X}}_{0}\subset {\mathcal {X}}} . Then the restriction K 0 {\displaystyle K_{0}} of K {\displaystyle K} to X 0 × X 0 {\displaystyle ... | https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus |
Let us assume that a kernel $K: \mathcal{X} \times \mathcal{X} \rightarrow \mathbb{R}$ is said to be valid if there exists $k \in \mathbb{N}$ and $\Phi: \mathcal{X} \rightarrow \mathbb{R}^{k}$ such that for all $\left(x, x^{\prime}\right) \in \mathcal{X} \times \mathcal{X}, K\left(x, x^{\prime}\right)=\Phi(x)^{\top} \P... | Moore initiated the study of a very general kind of p.d. kernel. If E {\displaystyle E} is an abstract set, he calls functions K ( x , y ) {\displaystyle K(x,y)} defined on E × E {\displaystyle E\times E} “positive Hermitian matrices” if they satisfy (1.1) for all x i ∈ E {\displaystyle x_{i}\in E} . | https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus |
Mark any of the following functions that have unique maximizers: | Often the functions to be minimized are not f i {\displaystyle f_{i}} but | f i − z i ∗ | {\displaystyle |f_{i}-z_{i}^{*}|} for some scalars z i ∗ {\displaystyle z_{i}^{*}} . Then f T c h b ( x , w ) = max i w i | f i ( x ) − z i ∗ | . {\displaystyle f_{Tchb}(x,w)=\max _{i}w_{i}|f_{i}(x)-z_{i}^{*}|.} All three function... | https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus |
Mark any of the following functions that have unique maximizers: | If x ¯ ( s ) {\displaystyle {\bar {x}}(s)} is the unique maximizer of f ( ⋅ ; s ) {\displaystyle f(\cdot ;s)} , it suffices to show that f ′ ( x ¯ ( s ) ; s ′ ) ≥ 0 {\displaystyle f'({\bar {x}}(s);s')\geq 0} for any s ′ > s {\displaystyle s'>s} , which guarantees that x ¯ ( s ) {\displaystyle {\bar {x}}(s)} is increasi... | https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus |
In the following let $\kappa_{1}\left(\mathbf{x}, \mathbf{x}^{\prime}\right)$ and $\kappa_{2}\left(\mathbf{x}, \mathbf{x}^{\prime}\right)$ be two valid kernels. Show that the following is also a valid kernel: $\kappa\left(\mathbf{x}, \mathbf{x}^{\prime}\right)=\kappa_{1}\left(f(\mathbf{x}), f\left(\mathbf{x}^{\prime}\rig... | For N {\displaystyle N} even, we define the Dirichlet kernel as D ( x , N ) = 1 N + 1 N cos 1 2 N x + 2 N ∑ k = 1 ( N − 1 ) / 2 cos ( k x ) = sin 1 2 N x N tan 1 2 x . {\displaystyle D(x,N)={\frac {1}{N}}+{\frac {1}{N}}\cos {\tfrac {1}{2}}Nx+{\frac {2}{N}}\sum _{k=1}^{(N-1)/2}\cos(kx)={\frac {\sin {\tfrac {1}{2... | https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus |
In the following let $\kappa_{1}\left(\mathbf{x}, \mathbf{x}^{\prime}\right)$ and $\kappa_{2}\left(\mathbf{x}, \mathbf{x}^{\prime}\right)$ be two valid kernels. Show that the following is also a valid kernel: $\kappa\left(\mathbf{x}, \mathbf{x}^{\prime}\right)=\kappa_{1}\left(f(\mathbf{x}), f\left(\mathbf{x}^{\prime}\rig... | Let κ 1 {\displaystyle \kappa ^{1}} be a s-finite kernel from S {\displaystyle S} to T {\displaystyle T} and κ 2 {\displaystyle \kappa ^{2}} a s-finite kernel from S × T {\displaystyle S\times T} to U {\displaystyle U} . Then the composition κ 1 ⋅ κ 2 {\displaystyle \kappa ^{1}\cdot \kappa ^{2}} of the two kernels is d... | https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus |
Consider a Generative Adversarial Network (GAN) which successfully produces images of goats. Which of the following statements is false?
| For example, a GAN trained on photographs can generate new photographs that look at least superficially authentic to human observers, having many realistic characteristics. Though originally proposed as a form of generative model for unsupervised learning, GANs have also proved useful for semi-supervised learning, full... | https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus |
Which of the following probability distributions are members of the exponential family: | In probability and statistics, an exponential family is a parametric set of probability distributions of a certain form, specified below. This special form is chosen for mathematical convenience, including the enabling of the user to calculate expectations, covariances using differentiation based on some useful algebra... | https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus |
Which of the following probability distributions are members of the exponential family: | Despite the analytical tractability of such distributions, they are in themselves usually not members of the exponential family. For example, the three-parameter Student's t distribution, beta-binomial distribution and Dirichlet-multinomial distribution are all predictive distributions of exponential-family distributio... | https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus |
How does the bias-variance decomposition of a ridge regression estimator compare with that of the ordinary least-squares estimator in general? | The bias–variance decomposition forms the conceptual basis for regression regularization methods such as Lasso and ridge regression. Regularization methods introduce bias into the regression solution that can reduce variance considerably relative to the ordinary least squares (OLS) solution. Although the OLS solution p... | https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus |
(Alternating Least Squares \& Matrix Factorization) For optimizing a matrix factorization problem in the recommender systems setting, as the number of observed entries increases but all $K, N, D$ are kept constant, the computational cost of the matrix inversion in Alternating Least-Squares increases. | Many standard NMF algorithms analyze all the data together; i.e., the whole matrix is available from the start. This may be unsatisfactory in applications where there are too many data to fit into memory or where the data are provided in streaming fashion. One such use is for collaborative filtering in recommendation s... | https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus |
(Alternating Least Squares \& Matrix Factorization) For optimizing a matrix factorization problem in the recommender systems setting, as the number of observed entries increases but all $K, N, D$ are kept constant, the computational cost of the matrix inversion in Alternating Least-Squares increases. | Matrix factorization is a class of collaborative filtering algorithms used in recommender systems. Matrix factorization algorithms work by decomposing the user-item interaction matrix into the product of two lower dimensionality rectangular matrices. This family of methods became widely known during the Netflix prize c... | https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus |
In the lecture on bias-variance decomposition we have seen that the true error can be decomposed into noise, bias and variance terms. What happens to the three terms for ridge regression when the regularization parameter $\lambda$ grows? Explain your answer. | The bias–variance decomposition forms the conceptual basis for regression regularization methods such as Lasso and ridge regression. Regularization methods introduce bias into the regression solution that can reduce variance considerably relative to the ordinary least squares (OLS) solution. Although the OLS solution p... | https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus |
(Convex III) Let $f, g: \mathbb{R} \rightarrow \mathbb{R}$ be two convex functions. Then $h=f \circ g$ is always convex. | Let f be a function from an interval I ⊆ R {\displaystyle I\subseteq \mathbb {R} } to R {\displaystyle \mathbb {R} } . If f is convex, then for any three points x, y, z in I, f ( x ) + f ( y ) + f ( z ) 3 + f ( x + y + z 3 ) ≥ 2 3 . {\displaystyle {\frac {f(x)+f(y)+f(z)}{3}}+f\left({\frac {x+y+z}{3}}\right)\geq {\frac... | https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus |
(Convex III) Let $f, g: \mathbb{R} \rightarrow \mathbb{R}$ be two convex functions. Then $h=f \circ g$ is always convex. | It can be generalized to any finite number n of points instead of 3, taken on the right-hand side k at a time instead of 2 at a time: Let f be a continuous function from an interval I ⊆ R {\displaystyle I\subseteq \mathbb {R} } to R {\displaystyle \mathbb {R} } . Then f is convex if and only if, for any integers n and ... | https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus |
Let us consider a binary classification problem with a training set $S=\{ (\xv_n,y_n)\}_{n=1}^N$ such that:
\xv_n\in\R^D, \text{ and } y_n\in\{-1,1\}, \text{ for all } n=1,\cdots,N,
where $N,D$ are integers such that $N,D\geq1$.
We consider the Percep... | Consider the problem of binary classification: for inputs x, we want to determine whether they belong to one of two classes, arbitrarily labeled +1 and −1. We assume that the classification problem will be solved by a real-valued function f, by predicting a class label y = sign(f(x)). For many problems, it is convenien... | https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus |
Let us consider a binary classification problem with a training set $S=\{ (\xv_n,y_n)\}_{n=1}^N$ such that:
\xv_n\in\R^D, \text{ and } y_n\in\{-1,1\}, \text{ for all } n=1,\cdots,N,
where $N,D$ are integers such that $N,D\geq1$.
We consider the Percep... | The output y of this transfer function is binary, depending on whether the input meets a specified threshold, θ. The "signal" is sent, i.e. the output is set to one, if the activation meets the threshold. y = { 1 if u ≥ θ 0 if u < θ {\displaystyle y={\begin{cases}1&{\text{if }}u\geq \theta \\0&{\text{if }}u<\theta \end... | https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus |
What is the gradient of $\boldsymbol{x}^{\top} \boldsymbol{W}^{\top} \boldsymbol{W} \boldsymbol{x}$ with respect to $\boldsymbol{x}$ (written as a vector)? | {\displaystyle {\boldsymbol {F}}={\begin{bmatrix}1&\gamma &0\\0&1&0\\0&0&1\end{bmatrix}}.} We can also write the deformation gradient as F = 1 + γ e 1 ⊗ e 2 . {\displaystyle {\boldsymbol {F}}={\boldsymbol {\mathit {1}}}+\gamma \mathbf {e} _{1}\otimes \mathbf {e} _{2}.} | https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus |
What is the gradient of $\boldsymbol{x}^{\top} \boldsymbol{W}^{\top} \boldsymbol{W} \boldsymbol{x}$ with respect to $\boldsymbol{x}$ (written as a vector)? | By definition, the gradient of a scalar function f is ∇ f = ∑ i e i ∂ f ∂ q i = ∂ f ∂ x e 1 + ∂ f ∂ y e 2 + ∂ f ∂ z e 3 {\displaystyle \nabla f=\sum _{i}\mathbf {e} ^{i}{\frac {\partial f}{\partial q^{i}}}={\frac {\partial f}{\partial x}}\mathbf {e} ^{1}+{\frac {\partial f}{\partial y}}\mathbf {e} ^{2}+{\frac {\partial... | https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus |
(Minima) Convex functions over a convex set have a unique global minimum. | The following are useful properties of convex optimization problems: every local minimum is a global minimum; the optimal set is convex; if the objective function is strictly convex, then the problem has at most one optimal point.These results are used by the theory of convex minimization along with geometric notions f... | https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus |
(Minima) Convex functions over a convex set have a unique global minimum. | Suppose each subset has its own cost function. The minima of each of these cost functions can be found, as can the minima of the global cost function, restricted to the same subsets. If these minima match for each subset, then it's almost obvious that a global minimum can be picked not out of the full set of alternativ... | https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus |
Which statement is true for linear regression? | This follows directly from the result quoted immediately above, and the fact that the regression coefficient relating the y t {\displaystyle y_{t}} ′s to the actually observed x t {\displaystyle x_{t}} ′s, in a simple linear regression, is given by β x = Cov Var . {\displaystyle \beta _{x}={\frac {\operatorname {... | https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus |
Which statement is true for linear regression? | In statistics, linear regression is a linear approach for modelling the relationship between a scalar response and one or more explanatory variables (also known as dependent and independent variables). The case of one explanatory variable is called simple linear regression; for more than one, the process is called mult... | https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus |
(Nearest Neighbor) The training error of the 1-nearest neighbor classifier is zero. | There are many results on the error rate of the k nearest neighbour classifiers. The k-nearest neighbour classifier is strongly (that is for any joint distribution on ( X , Y ) {\displaystyle (X,Y)} ) consistent provided k := k n {\displaystyle k:=k_{n}} diverges and k n / n {\displaystyle k_{n}/n} converges to zero as... | https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus |
(Nearest Neighbor) The training error of the 1-nearest neighbor classifier is zero. | The most intuitive nearest neighbour type classifier is the one nearest neighbour classifier that assigns a point x to the class of its closest neighbour in the feature space, that is C n 1 n n ( x ) = Y ( 1 ) {\displaystyle C_{n}^{1nn}(x)=Y_{(1)}} . As the size of training data set approaches infinity, the one nearest... | https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus |
Now let $\xv$ be a random vector distributed according to the uniform distribution over the finite centered dataset $\xv_1, . . . , \xv_N$ from above. %
Consider the problem of finding a unit vector, $\wv \in \R^D$, such that the random variable $\wv^\top \xv$ has \emph{maximal} variance. What is the variance of the ra... | Suppose Tn is a uniformly (locally) regular estimator of the parameter q. Then There exist independent random m-vectors Z θ ∼ N ( 0 , I q ( θ ) − 1 ) {\displaystyle \scriptstyle Z_{\theta }\,\sim \,{\mathcal {N}}(0,\,I_{q(\theta )}^{-1})} and Δθ such that n ( T n − q ( θ ) ) → d Z θ + Δ θ , {\displaystyle {\sqrt {n}}(T... | https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus |
Now let $\xv$ be a random vector distributed according to the uniform distribution over the finite centered dataset $\xv_1, . . . , \xv_N$ from above. %
Consider the problem of finding a unit vector, $\wv \in \R^D$, such that the random variable $\wv^\top \xv$ has \emph{maximal} variance. What is the variance of the ra... | The probability density function of a CURV X ∼ U {\displaystyle X\sim \operatorname {U} } is given by the indicator function of its interval of support normalized by the interval's length: Of particular interest is the uniform distribution on the unit interval {\displaystyle } . Samples of any desired probability d... | https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus |
Consider a linear regression model on a dataset which we split into a training set and a test set. After training, our model gives a mean-squared error of 0.1 on the training set and a mean-squared error of 5.3 on the test set. Recall that the mean-squared error (MSE) is given by:
$$MSE_{\textbf{w}}(\tex... | In linear regression, there exist real response values y 1 , … , y n {\textstyle y_{1},\ldots ,y_{n}} , and n p-dimensional vector covariates x1, ..., xn. The components of the vector xi are denoted xi1, ..., xip. If least squares is used to fit a function in the form of a hyperplane ŷ = a + βTx to the data (xi, yi) 1 ... | https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus |
You are given two distributions over $\mathbb{R}$ : Uniform on the interval $[a, b]$ and Gaussian with mean $\mu$ and variance $\sigma^{2}$. Their respective probability density functions are $$ p_{\mathcal{U}}(y \mid a, b):=\left\{\begin{array}{ll} \frac{1}{b-a}, & \text { for } a \leq y \leq b, \\ 0 & \text { otherwi... | Next, consider the case of a normal distribution with unknown mean and unknown variance. The probability density function is then f ( y ; μ , σ ) = 1 2 π σ 2 e − ( y − μ ) 2 / 2 σ 2 . {\displaystyle f(y;\mu ,\sigma )={\frac {1}{\sqrt {2\pi \sigma ^{2}}}}e^{-(y-\mu )^{2}/2\sigma ^{2}}.} This is an exponential family whi... | https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus |
You are given two distributions over $\mathbb{R}$ : Uniform on the interval $[a, b]$ and Gaussian with mean $\mu$ and variance $\sigma^{2}$. Their respective probability density functions are $$ p_{\mathcal{U}}(y \mid a, b):=\left\{\begin{array}{ll} \frac{1}{b-a}, & \text { for } a \leq y \leq b, \\ 0 & \text { otherwi... | An overdispersed exponential family of distributions is a generalization of an exponential family and the exponential dispersion model of distributions and includes those families of probability distributions, parameterized by θ {\displaystyle {\boldsymbol {\theta }}} and τ {\displaystyle \tau } , whose density functio... | https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus |
Which statement is true for the Mean Squared Error (MSE) loss MSE( $\mathbf{x}, y):=\left(f_{\mathbf{w}}(\mathbf{x})-y\right)^{2}$, with $f_{\mathrm{w}}$ a model parametrized by the weights $\mathbf{w}$ ? | }}\\&=\operatorname {E} _{\theta }\left\right)^{2}\right]+\left(\operatorname {E} _{\theta }-\theta \right)^{2}\\&=\operatorname {Var} _{\theta }({\hat {\theta }})+\operatorname {Bias} _{\theta }({\hat {\theta }},\theta )^{2}\end{aligned}}} An even shorter proof can be achieved using the well-known formula that for a r... | https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus |
Which statement is true for the Mean Squared Error (MSE) loss MSE( $\mathbf{x}, y):=\left(f_{\mathbf{w}}(\mathbf{x})-y\right)^{2}$, with $f_{\mathrm{w}}$ a model parametrized by the weights $\mathbf{w}$ ? | A popular example for a loss function is the squared error loss L ( θ , δ ) = ‖ θ − δ ‖ 2 {\displaystyle L(\theta ,\delta )=\|\theta -\delta \|^{2}\,\!} , and the risk function for this loss is the mean squared error (MSE). Unfortunately, in general, the risk cannot be minimized since it depends on the unknown paramete... | https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus |
Recall that we define the max-margin $M_\star$ as
\begin{align*}
M_\star = \max_{\wv\in\mathbb R^D, \| \wv\|_2=1} M \text{ such that } y_n \xv_n^\top \wv \geq M \text{ for } n=1,\cdots, N
\end{align*}
and a max-margin separating hyperplane $\bar \wv$ as a solution... | We want to find the maximum-margin hyperplane that divides the points having y i = 1 {\displaystyle y_{i}=1} from those having y i = − 1 {\displaystyle y_{i}=-1} . Any hyperplane can be written as the set of points x {\displaystyle \mathbf {x} } satisfying w ⋅ x − b = 0 , {\displaystyle \mathbf {w} \cdot \mathbf {x} -b... | https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus
Recall that we define the max-margin $M_\star$ as
\begin{align*}
M_\star = \max_{\wv\in\mathbb R^D, \| \wv\|_2=1} M \text{ such that } y_n \xv_n^\top \wv \geq M \text{ for } n=1,\cdots, N
\end{align*}
and a max-margin separating hyperplane $\bar \wv$ as a solution... | If such a hyperplane exists, it is known as the maximum-margin hyperplane and the linear classifier it defines is known as a maximum margin classifier. More formally, given some training data D {\displaystyle {\mathcal {D}}} , a set of n points of the form D = { ( x i , y i ) ∣ x i ∈ R p , y i ∈ { − 1 , 1 } } i = 1 n {... | https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus
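The margin $M$ attained by a given unit-norm direction is just $\min_n y_n \xv_n^\top \wv$; the max-margin $M_\star$ maximizes this over all unit vectors. A sketch of the inner quantity on toy separable data (the data points are an illustrative assumption, not from the source):

```python
import math

def margin(w, points):
    """Margin of direction w: min_n y_n * <x_n, w>, after normalizing
    w so that ||w||_2 = 1, as required by the definition above."""
    norm = math.sqrt(sum(c * c for c in w))
    w = [c / norm for c in w]
    return min(y * sum(xc * wc for xc, wc in zip(x, w)) for x, y in points)

# Toy linearly separable data (hypothetical).
points = [((1.0, 1.0), 1), ((2.0, 0.5), 1),
          ((-1.0, -1.0), -1), ((-0.5, -2.0), -1)]
print(margin((1.0, 1.0), points))  # margin achieved by this particular direction
```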
An expression is referentially transparent if it always returns the same value, no matter
the global state of the program. A referentially transparent expression can be replaced by its value without
changing the result of the program.
Say we have a value representing a class of students and their GPAs. Given the follow... | Clearly, replacing x=x * 10 with either 10 or 100 gives a program a different meaning, and so the expression is not referentially transparent. In fact, assignment statements are never referentially transparent. Now, consider another function such as int plusone(int x) {return x+1;} is transparent, as it does not implic... | https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus |
An expression is referentially transparent if it always returns the same value, no matter
the global state of the program. A referentially transparent expression can be replaced by its value without
changing the result of the program.
Say we have a value representing a class of students and their GPAs. Given the follow... | I call a mode of containment φ referentially transparent if, whenever an occurrence of a singular term t is purely referential in a term or sentence ψ(t), it is purely referential also in the containing term or sentence φ(ψ(t)). The term appeared in its contemporary computer science usage in the discussion of variables... | https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus |
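The definition above can be illustrated in a few lines: a referentially transparent expression can be replaced by its value, while an expression reading mutable global state cannot. A Python sketch (the two functions are hypothetical examples, not from the source):

```python
# Referentially transparent: the result depends only on the argument,
# so plus_one(1) can always be replaced by its value 2.
def plus_one(x):
    return x + 1

# Not referentially transparent: the result depends on mutable global state,
# so next_value() cannot be replaced by any fixed value.
counter = 0
def next_value():
    global counter
    counter += 1
    return counter

assert plus_one(1) == plus_one(1) == 2
print(next_value(), next_value())  # same call, different values
```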
How many times is "call compute" printed when running the following code? def compute(n: Int) = { printf("call compute"); n + 1 } LazyList.from(0).drop(2).take(3).map(compute) | Its running time is O ( r ) {\displaystyle O(r)} , but, since lazy evaluation is used, the computation is delayed until the results is forced by the computation. The list s in the data structure has two purposes. This list serves as a counter for | f | − | r | {\displaystyle |f|-|r|} , indeed, | f | = | r | {\displayst... | https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus
How many times is "call compute" printed when running the following code? def compute(n: Int) = { printf("call compute"); n + 1 } LazyList.from(0).drop(2).take(3).map(compute) | Almost all calling conventions—the ways in which subroutines receive their parameters and return results—use a special stack (the "call stack") to hold information about procedure/function calling and nesting in order to switch to the context of the called function and restore to the caller function when the callin... | https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus
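The point of the question is laziness: mapping over a lazy sequence does not by itself evaluate anything. A Python analogue of the Scala pipeline using generators (Python's `map` and `itertools.islice` are lazy in the same spirit; `calls` stands in for the `printf` side effect):

```python
from itertools import count, islice

calls = []

def compute(n):
    # Records each invocation, mirroring the printf in the Scala snippet.
    calls.append(n)
    return n + 1

# Analogue of LazyList.from(0).drop(2).take(3).map(compute):
# nothing runs until the pipeline is actually consumed.
pipeline = map(compute, islice(count(0), 2, 5))
print("before forcing:", len(calls))  # 0 -- lazy, no calls yet
result = list(pipeline)               # forcing evaluates exactly 3 elements
print(result, "calls:", len(calls))   # [3, 4, 5] calls: 3
```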
Consider the following algorithm \textsc{Random-Check} that takes as input two subsets $S\subseteq E$ and $T\subseteq E$ of the same ground set $E$. \begin{center} \begin{boxedminipage}[t]{0.85\textwidth} \textsc{Random-Check}$(S,T)$ \\[2mm] 1. For each element $e\in E$, independently of other elements randomly set \b... | Obviously the result of the comparison always has a probability of error. So the task is similar with finding the minimum in a set of element using noisy comparisons. There are a lot of classical algorithms in order to achieve this goal. The most recent one which achieves the best guarantees was proposed by Daskalakis ... | https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus |
Consider the following algorithm \textsc{Random-Check} that takes as input two subsets $S\subseteq E$ and $T\subseteq E$ of the same ground set $E$. \begin{center} \begin{boxedminipage}[t]{0.85\textwidth} \textsc{Random-Check}$(S,T)$ \\[2mm] 1. For each element $e\in E$, independently of other elements randomly set \b... | By Yao's principle, it also applies to the expected number of comparisons for a randomized algorithm on its worst-case input. For deterministic algorithms, it has been shown that selecting the k {\displaystyle k} th element requires ( 1 + H ( k / n ) ) n + Ω ( n ) {\displaystyle {\bigl (}1+H(k/n){\bigr )}n+\Omega ({\sq... | https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus |
A multiset is an unordered collection where elements can appear multiple times. We will represent a multiset of Char elements as a function from Char to Int: the function returns 0 for any Char argument that is not in the multiset, and the (positive) number of times it appears otherwise: type Multiset = Char => Int Wha... | A multiset may be formally defined as an ordered pair (A, m) where A is the underlying set of the multiset, formed from its distinct elements, and m: A → Z + {\displaystyle m\colon A\to \mathbb {Z} ^{+}} is a function from A to the set of positive integers, giving the multiplicity – that is, the number of occurrences –... | https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus |
A multiset is an unordered collection where elements can appear multiple times. We will represent a multiset of Char elements as a function from Char to Int: the function returns 0 for any Char argument that is not in the multiset, and the (positive) number of times it appears otherwise: type Multiset = Char => Int Wha... | The multiset construction, denoted A = M { B } {\displaystyle {\mathcal {A}}={\mathfrak {M}}\{{\mathcal {B}}\}} is a generalization of the set construction. In the set construction, each element can occur zero or one times. In a multiset, each element can appear an arbitrary number of times. | https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus |
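The function-as-multiset representation described above carries over directly to Python; here is a minimal sketch with one natural combining operation (the `add` helper is my own illustration, not necessarily the operation the truncated question asks about):

```python
# A multiset of characters as a function Char -> Int, as in the question's
# `type Multiset = Char => Int`: 0 for absent elements, multiplicity otherwise.
def multiset(s):
    return lambda c: s.count(c)

# Pointwise sum of two such multisets (multiset union-with-multiplicities).
def add(m1, m2):
    return lambda c: m1(c) + m2(c)

m = multiset("hello")
print(m("l"), m("z"))                # 2 0
print(add(m, multiset("low"))("l"))  # 3
```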
Ignoring their different evaluation characteristics in this exercise, we consider here
that filter and withFilter are equivalent. To which expression is the following for-loop translated?
1 def mystery7(xs : List[Int], ys : List[Int]) : List[Int] =
2 for
3 y <- ys if y < 100
4 x <- xs if x < 20
5 yield
6 if y < x then... | Instead of the Java "foreach" loops for looping through an iterator, Scala has for-expressions, which are similar to list comprehensions in languages such as Haskell, or a combination of list comprehensions and generator expressions in Python. For-expressions using the yield keyword allow a new collection to be generat... | https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus |
Ignoring their different evaluation characteristics in this exercise, we consider here
that filter and withFilter are equivalent. To which expression is the following for-loop translated?
1 def mystery7(xs : List[Int], ys : List[Int]) : List[Int] =
2 for
3 y <- ys if y < 100
4 x <- xs if x < 20
5 yield
6 if y < x then... | A for-loop is generally equivalent to a while-loop: factorial := 1 for counter from 2 to 5 factorial := factorial * counter counter := counter - 1 print counter + "! equals " + factorial is equivalent to: factorial := 1 counter := 1 while counter < 5 counter := counter + 1 factorial := factorial * counter print counter... | https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus |
Consider the LP-rounding algorithm for Set Cover that works as follows: \begin{enumerate} \item Solve the LP relaxation to obtain an optimal solution $x^*$. \item Return the solution $\{S: x^*_S >0\}$, i.e., containing all sets with a positive value in the fractional solution. \end{enumerate} Use the complementarity sl... | One can turn the linear programming relaxation for this problem into an approximate solution of the original unrelaxed set cover instance via the technique of randomized rounding (Raghavan & Tompson 1987). Given a fractional cover, in which each set Si has weight wi, choose randomly the value of each 0–1 indicator vari... | https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus |
Consider the LP-rounding algorithm for Set Cover that works as follows: \begin{enumerate} \item Solve the LP relaxation to obtain an optimal solution $x^*$. \item Return the solution $\{S: x^*_S >0\}$, i.e., containing all sets with a positive value in the fractional solution. \end{enumerate} Use the complementarity sl... | The cover generated by this technique has total size, with high probability, (1+o(1))(ln n)W, where W is the total weight of the fractional solution. Thus, this technique leads to a randomized approximation algorithm that finds a set cover within a logarithmic factor of the optimum. As Young (1995) showed, both the ran... | https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus |
Show that, given a matroid $\mathcal{M} = (E, \mathcal{I})$ and a weight function $w: E \rightarrow \mathbb{R}$,~\textsc{Greedy} (as defined in the lecture notes) always returns a base of the matroid. | In combinatorics, a branch of mathematics, a weighted matroid is a matroid endowed with function with respect to which one can perform a greedy algorithm. A weight function w: E → R + {\displaystyle w:E\rightarrow \mathbb {R} ^{+}} for a matroid M = ( E , I ) {\displaystyle M=(E,I)} assigns a strictly positive weight t... | https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus |
Show that, given a matroid $\mathcal{M} = (E, \mathcal{I})$ and a weight function $w: E \rightarrow \mathbb{R}$,~\textsc{Greedy} (as defined in the lecture notes) always returns a base of the matroid. | A weighted matroid is a matroid together with a function from its elements to the nonnegative real numbers. The weight of a subset of elements is defined to be the sum of the weights of the elements in the subset. The greedy algorithm can be used to find a maximum-weight basis of the matroid, by starting from the empty... | https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus |
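The excerpt describes the matroid greedy algorithm: scan elements by decreasing weight and keep each one whose addition preserves independence. A sketch with a deliberately simple independence oracle (a uniform matroid, an illustrative assumption; the question concerns general matroids):

```python
def greedy_max_weight(E, w, independent):
    """Matroid greedy: consider elements in order of decreasing weight,
    keeping each element whose addition preserves independence.
    For a matroid, the result is a base (a maximal independent set)."""
    S = set()
    for e in sorted(E, key=w.get, reverse=True):
        if independent(S | {e}):
            S.add(e)
    return S

# Toy oracle: uniform matroid U(4, 2), where a set is independent iff |X| <= 2.
E = {"a", "b", "c", "d"}
w = {"a": 3, "b": 1, "c": 2, "d": 5}

def indep(X):
    return len(X) <= 2

print(sorted(greedy_max_weight(E, w, indep)))  # ['a', 'd']: the two heaviest
```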
Consider the following code snippet:
1 type Logger[T] = T => Unit
2 def log[T](s: T)(using log: Logger[T]): Unit = log(s)
3 var count = 0
4 given countingLogger: Logger[String] = s... | Principal value forms: Log ( 1 ) = 0 {\displaystyle \operatorname {Log} (1)=0} Log ( e ) = 1 {\displaystyle \operatorname {Log} (e)=1} Multiple value forms, for any k an integer: log ( 1 ) = 0 + 2 π i k {\displaystyle \log(1)=0+2\pi ik} log ( e ) = 1 + 2 π i k {\displaystyle \log(e)=1+2\pi ik} | https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus |
Consider the following code snippet:
1 type Logger[T] = T => Unit
2 def log[T](s: T)(using log: Logger[T]): Unit = log(s)
3 var count = 0
4 given countingLogger: Logger[String] = s... | log ( x + y ) = log ( x + x ⋅ y / x ) = log ( x + x ⋅ exp ( log ( y / x ) ) ) = log ( x ⋅ ( 1 + exp ( log ( y ) − log ( x ) ) ) ) = log ( x ) + log ( 1 + exp ( log ( y ) − log ( x ) ) ) = x ′ + log ( 1 + exp ( y ′ − x ′ ) ) {\displaystyle {\begin{aligned}&\log(x+y)\\={}&\log(x+x\cdot y/x... | https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus |
Consider an array $A[1,\ldots, n]$ consisting of the $n$ distinct numbers $1,2, \ldots, n$. We are further guaranteed that $A$ is almost sorted in the following sense: $A[i] \neq i$ for at most $\sqrt{n}$ values of $i$. What are tight asymptotic worst-case running times for Insertion Sort and Merge Sort on such instan... | Consider performing insertion sort on n {\displaystyle n} numbers on a random access machine. The best-case for the algorithm is when the numbers are already sorted, which takes O ( n ) {\displaystyle O(n)} steps to perform the task. However, the input in the worst-case for the algorithm is when the numbers are reverse... | https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus |
Consider an array $A[1,\ldots, n]$ consisting of the $n$ distinct numbers $1,2, \ldots, n$. We are further guaranteed that $A$ is almost sorted in the following sense: $A[i] \neq i$ for at most $\sqrt{n}$ values of $i$. What are tight asymptotic worst-case running times for Insertion Sort and Merge Sort on such instan... | This results in a worst case of O(n²) time for this sorting algorithm. This worst case occurs when the algorithm operates on an already sorted set, or one that is nearly sorted, reversed or nearly reversed. Expected O(n log n) time can however be achieved by shuffling the array, but this does not help for equal items. | https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus |
Is “type-directed programming” a language mechanism that infers types from values? | It is through recognition of the eventual reduction of expressions to implicitly typed atomic values that the compiler for a type inferring language is able to compile a program completely without type annotations. In complex forms of higher-order programming and polymorphism, it is not always possible for the compiler... | https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus |
Is “type-directed programming” a language mechanism that infers types from values? | The majority of them use a simple form of type inference; the Hindley-Milner type system can provide more complete type inference. The ability to infer types automatically makes many programming tasks easier, leaving the programmer free to omit type annotations while still permitting type checking. In some programming ... | https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus |
Consider the following linear program for finding a maximum-weight matching: \begin{align*} \text{Maximize} \quad &\sum_{e\in E} x_e w_e\\ \text{Subject to} \quad &\sum_{e \in \delta(v)} x_e \leq 1 \quad \forall v \in V \\ &x_e \geq 0 \quad \forall e \in E \end{align*} (This is similar to the perfect matching problem... | Suppose each edge on the graph has a weight. A fractional matching of maximum weight in a graph can be found by linear programming. In a bipartite graph, it is possible to convert a maximum-weight fractional matching to a maximum-weight integral matching of the same size, in the following way: Let f be the fractional m... | https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus |
Consider the following linear program for finding a maximum-weight matching: \begin{align*} \text{Maximize} \quad &\sum_{e\in E} x_e w_e\\ \text{Subject to} \quad &\sum_{e \in \delta(v)} x_e \leq 1 \quad \forall v \in V \\ &x_e \geq 0 \quad \forall e \in E \end{align*} (This is similar to the perfect matching problem... | In bipartite graphs, if a single maximum-cardinality matching is known, it is possible to find all maximally matchable edges in linear time - O ( V + E ) {\displaystyle O(V+E)} .If a maximum matching is not known, it can be found by existing algorithms. In this case, the resulting overall runtime is O ( V 1 / 2 E ) {\d... | https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus |
Recall the online bin-packing problem that we saw in Exercise Set $10$: We are given an unlimited number of bins, each of capacity $1$. We get a sequence of items one by one each having a size of at most $1$, and are required to place them into bins as we receive them. Our goal is to minimize the number of bins we use,... | Several approximation algorithms for the general bin-packing problem use the following scheme: Separate the items to "small" (smaller than eB, for some fraction e in (0,1)) and "large" (at least eB). Handle the large items first: Round the item sizes in some way, such that the number of different sizes is at most some ... | https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus |
Recall the online bin-packing problem that we saw in Exercise Set $10$: We are given an unlimited number of bins, each of capacity $1$. We get a sequence of items one by one each having a size of at most $1$, and are required to place them into bins as we receive them. Our goal is to minimize the number of bins we use,... | However, if space sharing fits into a hierarchy, as is the case with memory sharing in virtual machines, the bin packing problem can be efficiently approximated. Another variant of bin packing of interest in practice is the so-called online bin packing. Here the items of different volume are supposed to arrive sequenti... | https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus |
In class we saw that Karger's min-cut algorithm implies that an undirected graph has at most $n \choose 2$ minimum cuts. Show that this result is tight by giving a graph with $n$ vertices and $n \choose 2$ minimum cuts. | The minimum cut problem in undirected, weighted graphs limited to non-negative weights can be solved in polynomial time by the Stoer-Wagner algorithm. In the special case when the graph is unweighted, Karger's algorithm provides an efficient randomized method for finding the cut. In this case, the minimum cut equals th... | https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus |
In class we saw that Karger's min-cut algorithm implies that an undirected graph has at most $n \choose 2$ minimum cuts. Show that this result is tight by giving a graph with $n$ vertices and $n \choose 2$ minimum cuts. | This reduces the complexity to O ( n 4 ) {\displaystyle O(n^{4})} and is sound since, if a cut of capacity less than k exists, it is bound to separate u from some other vertex. It can be further improved by an algorithm of Gabow that runs in worst case O ( n 3 ) {\displaystyle O(n^{3})} time. The Karger–Stein variant o... | https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus |
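The standard tight example is the $n$-cycle: removing any two of its $n$ edges disconnects it, so every pair of edges yields a distinct minimum cut, $\binom{n}{2}$ in total. A brute-force check for small $n$ (my own verification sketch, not part of the source):

```python
from itertools import combinations

def cut_size(n, side):
    """Number of edges (i, i+1 mod n) of the n-cycle crossing the cut."""
    return sum((i in side) != ((i + 1) % n in side) for i in range(n))

def count_min_cuts(n):
    """Brute-force count of minimum cuts of the n-cycle, counting each
    bipartition (S, V \\ S) exactly once."""
    sizes = {}
    for r in range(1, n // 2 + 1):
        for side in combinations(range(n), r):
            s = frozenset(side)
            if r < n - r or 0 in s:  # one representative per bipartition
                sizes[s] = cut_size(n, s)
    best = min(sizes.values())
    return sum(v == best for v in sizes.values())

# The n-cycle attains Karger's bound of binomial(n, 2) minimum cuts.
for n in range(3, 7):
    print(n, count_min_cuts(n), n * (n - 1) // 2)
```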
In this problem, we give a $2$-approximation algorithm for the submodular vertex cover problem which is a generalization of the classic vertex cover problem seen in class. We first, in subproblem~\textbf{(a)}, give a new rounding for the classic vertex cover problem and then give the algorithm for the more general pr... | Assume that every vertex has an associated cost of c ( v ) ≥ 0 {\displaystyle c(v)\geq 0} . The (weighted) minimum vertex cover problem can be formulated as the following integer linear program (ILP). This ILP belongs to the more general class of ILPs for covering problems. The integrality gap of this ILP is 2 {\displa... | https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus |
In this problem, we give a $2$-approximation algorithm for the submodular vertex cover problem which is a generalization of the classic vertex cover problem seen in class. We first, in subproblem~\textbf{(a)}, give a new rounding for the classic vertex cover problem and then give the algorithm for the more general pr... | The vertex cover problem involves finding a set of vertices that touches every edge of the graph. It is NP-hard but can be approximated to within an approximation ratio of two, for instance by taking the endpoints of the matched edges in any maximal matching. Evidence that this is the best possible approximation ratio ... | https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus |
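The excerpt mentions the classic combinatorial 2-approximation: take both endpoints of every edge in a maximal matching (note this is distinct from the LP-rounding approach the question itself develops). A sketch on a hypothetical edge list:

```python
def vertex_cover_2approx(edges):
    """2-approximation for vertex cover via a greedy maximal matching:
    whenever an edge has both endpoints uncovered, add both endpoints.
    Each matched edge forces at least one of its endpoints into any
    optimal cover, giving the factor-2 guarantee."""
    cover = set()
    for u, v in edges:
        if u not in cover and v not in cover:
            cover.update((u, v))
    return cover

edges = [(1, 2), (2, 3), (3, 4)]  # a path graph, purely illustrative
c = vertex_cover_2approx(edges)
print(sorted(c), all(u in c or v in c for u, v in edges))
```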
Given the following classes:
• class Pair[+U, +V]
• class Iterable[+U]
• class Map[U, +V] extends Iterable[Pair[U, V]]
Recall that + means covariance, - means contravariance and no annotation means invariance (i.e. neither
covariance nor contravariance).
Consider also the following typing relationships for A, B, X, and... | Subtyping and inheritance are independent (orthogonal) relationships. They may coincide, but none is a special case of the other. In other words, between two types S and T, all combinations of subtyping and inheritance are possible: S is neither a subtype nor a derived type of T S is a subtype but is not a derived type... | https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus |
Given the following classes:
• class Pair[+U, +V]
• class Iterable[+U]
• class Map[U, +V] extends Iterable[Pair[U, V]]
Recall that + means covariance, - means contravariance and no annotation means invariance (i.e. neither
covariance nor contravariance).
Consider also the following typing relationships for A, B, X, and... | Sound structural subtyping rules for types other than object types are also well known.Implementations of programming languages with subtyping fall into two general classes: inclusive implementations, in which the representation of any value of type A also represents the same value at type B if A <: B, and coercive imp... | https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus |
In the following problem Alice holds a string $x = \langle x_1, x_2, \ldots, x_n \rangle$ and Bob holds a string $y = \langle y_1, y_2, \ldots, y_n\rangle$. Both strings are of length $n$ and $x_i, y_i \in \{1,2,\ldots, n\}$ for $i=1,2, \ldots, n$. The goal is for Alice and Bob to use little communication to estimate ... | The algorithm can be repeated many times to increase its accuracy. This fits the requirements for a randomized communication algorithm. This shows that if Alice and Bob share a random string of length n, they can send one bit to each other to compute E Q ( x , y ) {\displaystyle EQ(x,y)} . In the next section, it is sh... | https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus |
In the following problem Alice holds a string $x = \langle x_1, x_2, \ldots, x_n \rangle$ and Bob holds a string $y = \langle y_1, y_2, \ldots, y_n\rangle$. Both strings are of length $n$ and $x_i, y_i \in \{1,2,\ldots, n\}$ for $i=1,2, \ldots, n$. The goal is for Alice and Bob to use little communication to estimate ... | The starting point is a bipartite communication scenario where one of the parts (Alice) is handed a random string x {\displaystyle x} of n {\displaystyle n} bits. The second part, Bob, receives a random number k ∈ { 1 , . . | https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus |
Professor Ueli von Gruy\`{e}res worked hard last year to calculate the yearly cheese consumption of each individual in Switzerland. Specifically, let $U$ be the set of all persons in Switzerland. For each person $i\in U$, Ueli calculated the amount $w_i \in \mathbb{R}_{\geq 0}$ (in grams) of the yearly cheese consumpti... | A way simpler possibility comes to mind and it is just drawing a straight line between two points and coming up with all the relevant data graphically. However, even though it is clearly seen in the paper that the income perceived is rising by 100 francs per sample family, the food expenditure is definitely not decreas... | https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus |
You have just started your prestigious and important job as the Swiss Cheese Minister. As it turns out, different fondues and raclettes have different nutritional values and different prices: \begin{center} \begin{tabular}{|l|l|l|l||l|} \hline Food & Fondue moitie moitie & Fondue a la tomate & Raclette & Requirement pe... | Swiss cheese model == References == | https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus |
You have just started your prestigious and important job as the Swiss Cheese Minister. As it turns out, different fondues and raclettes have different nutritional values and different prices: \begin{center} \begin{tabular}{|l|l|l|l||l|} \hline Food & Fondue moitie moitie & Fondue a la tomate & Raclette & Requirement pe... | The nutritional value of cheese varies widely. Cottage cheese may consist of 4% fat and 11% protein while some whey cheeses are 15% fat and 11% protein, and triple-crème cheeses are 36% fat and 7% protein. In general, cheese is a rich source (20% or more of the Daily Value, DV) of calcium, protein, phosphorus, sodium a... | https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus |
Suppose we have a universe $U$ of elements. For $A,B\subseteq U$, the Jaccard distance of $A,B$ is defined as $$ J(A,B)=\frac{|A\cap B|}{|A\cup B|}.$$ This definition is used in practice to calculate a notion of similarity of documents, webpages, etc. For example, suppose $U$ is the set of English words, and any set $A... | {\displaystyle J(A,B)={{|A\cap B|} \over {|A\uplus B|}}={{|A\cap B|} \over {|A|+|B|}}.} The Jaccard distance, which measures dissimilarity between sample sets, is complementary to the Jaccard coefficient and is obtained by subtracting the Jaccard coefficient from 1, or, equivalently, by dividing the difference of the s... | https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus |
Suppose we have a universe $U$ of elements. For $A,B\subseteq U$, the Jaccard distance of $A,B$ is defined as $$ J(A,B)=\frac{|A\cap B|}{|A\cup B|}.$$ This definition is used in practice to calculate a notion of similarity of documents, webpages, etc. For example, suppose $U$ is the set of English words, and any set $A... | If μ {\displaystyle \mu } is a measure on a measurable space X {\displaystyle X} , then we define the Jaccard coefficient by J μ ( A , B ) = μ ( A ∩ B ) μ ( A ∪ B ) , {\displaystyle J_{\mu }(A,B)={{\mu (A\cap B)} \over {\mu (A\cup B)}},} and the Jaccard distance by d μ ( A , B ) = 1 − J μ ( A , B ) = μ ( A △ B ) μ ( A ... | https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus |
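The ratio $|A\cap B|/|A\cup B|$ defined above is straightforward to compute over word sets; as the excerpt notes, this quantity is the Jaccard coefficient, and subtracting it from 1 gives the distance. A sketch on hypothetical documents:

```python
def jaccard(A, B):
    """J(A, B) = |A ∩ B| / |A ∪ B| as defined in the question.
    (This is the Jaccard coefficient; 1 - J(A, B) is the Jaccard distance.)
    Returns 1.0 for two empty sets by convention."""
    A, B = set(A), set(B)
    return len(A & B) / len(A | B) if A | B else 1.0

doc1 = "the quick brown fox".split()
doc2 = "the lazy brown dog".split()
print(jaccard(doc1, doc2))  # 2 shared words of 6 distinct -> 1/3
```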
Last year Professor Ueli von Gruy\`{e}res worked hard to obtain an estimator $\Alg$ to estimate the total cheese consumption of fondue lovers in Switzerland. For a small $\epsilon >0$, his estimator \Alg only asks $3/\epsilon^2$ random persons and has the following guarantee: if we let $W$ denote the true answer... | This approach allows for more natural study of the asymptotic properties of the estimators. In the other interpretation (fixed design), the regressors X are treated as known constants set by a design, and y is sampled conditionally on the values of X as in an experiment. For practical purposes, this distinction is ofte... | https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus
Last year Professor Ueli von Gruy\`{e}res worked hard to obtain an estimator $\Alg$ to estimate the total cheese consumption of fondue lovers in Switzerland. For a small $\epsilon >0$, his estimator \Alg only asks $3/\epsilon^2$ random persons and has the following guarantee: if we let $W$ denote the true answer... | Further, the standard error of the estimate is σ = α ^ − 1 n + O ( n − 1 ) {\displaystyle \sigma ={\frac {{\hat {\alpha }}-1}{\sqrt {n}}}+O(n^{-1})} . This estimator is equivalent to the popular Hill estimator from quantitative finance and extreme value theory. For a set of n integer-valued data points { x i } {\display... | https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus
Recall that a matroid $\mathcal{M} =(E, \mathcal{I} )$ is a partition matroid if $E$ is partitioned into \emph{disjoint} sets $E_1, E_2, ..., E_\ell$ and \[ \mathcal{I} = \lbrace X \subseteq E : |E_i \cap X | \leq k_i \mbox{ for } i=1,2,..., \ell \rbrace\,. \] Verify that this is indeed a matroid. | In mathematics, a partition matroid or partitional matroid is a matroid that is a direct sum of uniform matroids. It is defined over a base set in which the elements are partitioned into different categories. For each category, there is a capacity constraint - a maximum number of allowed elements from this category. Th... | https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus |
Recall that a matroid $\mathcal{M} =(E, \mathcal{I} )$ is a partition matroid if $E$ is partitioned into \emph{disjoint} sets $E_1, E_2, ..., E_\ell$ and \[ \mathcal{I} = \lbrace X \subseteq E : |E_i \cap X | \leq k_i \mbox{ for } i=1,2,..., \ell \rbrace\,. \] Verify that this is indeed a matroid. | A matroid sum ∑ i M i {\displaystyle \sum _{i}M_{i}} (where each M i {\displaystyle M_{i}} is a matroid) is itself a matroid, having as its elements the union of the elements of the summands. A set is independent in the sum if it can be partitioned into sets that are independent within each summand. The matroid partiti... | https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus |
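The independence condition $|E_i \cap X| \leq k_i$ for every block is easy to check directly; a sketch with a hypothetical two-block partition:

```python
def is_independent(X, partition, caps):
    """Independence test for a partition matroid: X is independent iff
    |X ∩ E_i| <= k_i for every block E_i of the partition."""
    return all(len(X & E) <= k for E, k in zip(partition, caps))

# Hypothetical partition E = E1 ∪ E2 with capacities k1 = 2, k2 = 1.
E1, E2 = {"a", "b", "c"}, {"d", "e"}
caps = [2, 1]
print(is_independent({"a", "b", "d"}, [E1, E2], caps))  # True
print(is_independent({"a", "b", "c"}, [E1, E2], caps))  # False: |X ∩ E1| = 3 > 2
```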
In class, we saw Karger's beautiful randomized algorithm for finding a minimum cut in an undirected graph $G=(V,E)$. Recall that his algorithm works by repeatedly contracting a randomly selected edge until the graph only consists of two vertices which define the returned cut. For general graphs, we showed that the retu... | All other edges connecting either u {\displaystyle u} or v {\displaystyle v} are "reattached" to the merged node, effectively producing a multigraph. Karger's basic algorithm iteratively contracts randomly chosen edges until only two nodes remain; those nodes represent a cut in the original graph. By iterating this bas... | https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus |
In class, we saw Karger's beautiful randomized algorithm for finding a minimum cut in an undirected graph $G=(V,E)$. Recall that his algorithm works by repeatedly contracting a randomly selected edge until the graph only consists of two vertices which define the returned cut. For general graphs, we showed that the retu... | The minimum cut problem in undirected, weighted graphs limited to non-negative weights can be solved in polynomial time by the Stoer-Wagner algorithm. In the special case when the graph is unweighted, Karger's algorithm provides an efficient randomized method for finding the cut. In this case, the minimum cut equals th... | https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus |