Columns: question (string, lengths 6–3.53k), text (string, lengths 17–2.05k), source (string, 1 value)
Let $f(x, y)$ be a general function over $\mathbb{R}^{2}$. Mark any of the following statements that are always correct (independent of the function).
Suppose f is a strong one-way function. Define g(x, r) = (f(x), r) where |r| = 2|x|.
https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus
Let $f(x, y)$ be a general function over $\mathbb{R}^{2}$. Mark any of the following statements that are always correct (independent of the function).
If f and g are functions, then $\left(\frac{f}{g}\right)' = \frac{f'g - g'f}{g^{2}}$ wherever g is nonzero. This can be derived from the product rule and the reciprocal rule.
https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus
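The quotient rule quoted above can be checked numerically; a minimal sketch, where the sample pair f = sin and g(x) = x² + 1 is an arbitrary (hypothetical) choice:

```python
import math

# Numerically verify (f/g)' = (f'g - g'f) / g^2 at a sample point.
def f(x): return math.sin(x)
def fp(x): return math.cos(x)          # f'
def g(x): return x * x + 1.0
def gp(x): return 2.0 * x              # g'

def quotient(x): return f(x) / g(x)

def numeric_derivative(h, x, eps=1e-6):
    # central difference approximation
    return (h(x + eps) - h(x - eps)) / (2 * eps)

x = 0.7
lhs = numeric_derivative(quotient, x)
rhs = (fp(x) * g(x) - gp(x) * f(x)) / g(x) ** 2
assert abs(lhs - rhs) < 1e-6
```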
[Gradient for convolutional neural nets] Let $f(x, y, z, u, v, w)=3 x y z u v w+x^{2} y^{2} w^{2}-7 x z^{5}+3 y v w^{4}$. What is $$ \left.\left[\frac{\partial f}{\partial x}+\frac{\partial f}{\partial y}+\frac{\partial f}{\partial z}+\frac{\partial f}{\partial u}+\frac{\partial f}{\partial v}+\frac{\partial f}{\partia...
In a rectangular coordinate system, the gradient is given by $\nabla f = \frac{\partial f}{\partial x}\mathbf{i} + \frac{\partial f}{\partial y}\mathbf{j}$.
https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus
[Gradient for convolutional neural nets] Let $f(x, y, z, u, v, w)=3 x y z u v w+x^{2} y^{2} w^{2}-7 x z^{5}+3 y v w^{4}$. What is $$ \left.\left[\frac{\partial f}{\partial x}+\frac{\partial f}{\partial y}+\frac{\partial f}{\partial z}+\frac{\partial f}{\partial u}+\frac{\partial f}{\partial v}+\frac{\partial f}{\partia...
By definition, the gradient of a scalar function f is $\nabla f = \sum_{i}\mathbf{e}^{i}\frac{\partial f}{\partial q^{i}} = \frac{\partial f}{\partial x}\mathbf{e}^{1} + \frac{\partial f}{\partial y}\mathbf{e}^{2} + \frac{\partial f}{\partial z}\mathbf{e}^{3}$ ...
https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus
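The sum of partial derivatives asked for above can be sanity-checked with finite differences against the hand-computed partials of the given polynomial; the evaluation point is an arbitrary choice:

```python
# f = 3xyzuvw + x^2 y^2 w^2 - 7x z^5 + 3y v w^4
def f(x, y, z, u, v, w):
    return 3*x*y*z*u*v*w + x**2 * y**2 * w**2 - 7*x*z**5 + 3*y*v*w**4

def analytic_sum(x, y, z, u, v, w):
    # hand-derived partial derivatives
    fx = 3*y*z*u*v*w + 2*x*y**2*w**2 - 7*z**5
    fy = 3*x*z*u*v*w + 2*x**2*y*w**2 + 3*v*w**4
    fz = 3*x*y*u*v*w - 35*x*z**4
    fu = 3*x*y*z*v*w
    fv = 3*x*y*z*u*w + 3*y*w**4
    fw = 3*x*y*z*u*v + 2*x**2*y**2*w + 12*y*v*w**3
    return fx + fy + fz + fu + fv + fw

p = dict(x=0.5, y=-1.0, z=0.8, u=1.2, v=0.3, w=-0.6)  # arbitrary sample point
eps = 1e-6
num = 0.0
for name in p:
    hi = dict(p); hi[name] += eps
    lo = dict(p); lo[name] -= eps
    num += (f(**hi) - f(**lo)) / (2 * eps)  # central difference per variable

assert abs(num - analytic_sum(**p)) < 1e-5
```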
Let $\xv, \wv, \deltav \in \R^d$, $y \in \{-1, 1\}$, and $\varepsilon \in \R_{>0}$ be an arbitrary positive value. Which of the following is NOT true in general:
It is therefore enough to prove positivity of the Jacobian when a = 0. In that case $J_{f}(0) = |a_{1}|^{2} - |a_{-1}|^{2}$, where the $a_{n}$ are the Fourier coefficients of f: $a_{n} = \frac{1}{2\pi}\int_{0}^{2\pi} f(e^{i\theta})\,e^{-in\theta}\,d\theta$.
https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus
Let $\xv, \wv, \deltav \in \R^d$, $y \in \{-1, 1\}$, and $\varepsilon \in \R_{>0}$ be an arbitrary positive value. Which of the following is NOT true in general:
Hence $\frac{f(x)-f(rx)}{1-r} = \frac{-f(rx)}{1-r} \geq -\frac{1}{(1+r)^{n-1}}f(0) > -\frac{f(0)}{2^{n-1}} > 0$. Hence the directional derivative at x is bounded below by the strictly positive con...
https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus
You have data with lots of outliers. Everything else being equal, and assuming that you do not do any pre-processing, which cost function will be less affected by these outliers?
One common approach to handle outliers in data analysis is to perform outlier detection first, followed by an efficient estimation method (e.g., the least squares). While this approach is often useful, one must keep in mind two challenges. First, an outlier detection method that relies on a non-robust initial fit can s...
https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus
You have data with lots of outliers. Everything else being equal, and assuming that you do not do any pre-processing, which cost function will be less affected by these outliers?
While PCA finds the mathematically optimal method (as in minimizing the squared error), it is still sensitive to outliers in the data that produce large errors, something that the method tries to avoid in the first place. It is therefore common practice to remove outliers before computing PCA. However, in some contexts...
https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus
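The robustness contrast discussed above can be made concrete: the constant minimizing squared error is the mean, while the constant minimizing absolute error is the median, and only the former is dragged by outliers. A minimal sketch with made-up toy data:

```python
import statistics

# One gross outlier (50.0) in otherwise well-behaved data; values are made up.
data = [1.0, 1.2, 0.9, 1.1, 1.0, 1.3, 50.0]

mse_fit = statistics.mean(data)    # MSE-optimal constant: dragged toward the outlier
mae_fit = statistics.median(data)  # MAE-optimal constant: stays near the data bulk

assert mae_fit < 2.0 < mse_fit
```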
Consider a binary classification task as in Figure~\AMCref{fig:lr_data}, which consists of 14 two-dimensional linearly separable samples (circles correspond to label $y=1$ and pluses correspond to label $y=0$). We would like to predict the label $y=1$ of a sample $(x_1, x_2)$ when the following holds true ...
Consider the problem of binary classification: for inputs x, we want to determine whether they belong to one of two classes, arbitrarily labeled +1 and −1. We assume that the classification problem will be solved by a real-valued function f, by predicting a class label y = sign(f(x)). For many problems, it is convenien...
https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus
Consider a binary classification task as in Figure~\AMCref{fig:lr_data}, which consists of 14 two-dimensional linearly separable samples (circles correspond to label $y=1$ and pluses correspond to label $y=0$). We would like to predict the label $y=1$ of a sample $(x_1, x_2)$ when the following holds true ...
Note that predictions can now be made according to $y = 1 \text{ iff } P(y=1 \mid x) > \frac{1}{2}$; if $B \neq 0$, the probability estimates contain a correction compared to the old decision function y = sign(f(x)). The parameters A and B are estimated usin...
https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus
The purpose of this first exercise part is to ensure that the predictions produced by minimizing the true $\phi$-risk are optimal. As for the $0-1$ loss, it can be shown that the true $\phi$-risk is minimized at a predictor $g^\star:\mathcal X \to \R$ satisfying for all $\xv\in\mathcal X$: \begin{align*} ...
Fix a loss function $\mathcal{L}\colon Y\times Y\to \mathbb{R}_{\geq 0}$, for example, the square loss $\mathcal{L}(y,y')=(y-y')^{2}$, where $h(x)=y'$. For a given distribution $\rho$ on X ...
https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus
(Linear or Logistic Regression) Suppose you are given a dataset of tissue images from patients with and without a certain disease. You are supposed to train a model that predicts the probability that a patient has the disease. It is preferable to use logistic regression over linear regression.
Like other forms of regression analysis, logistic regression makes use of one or more predictor variables that may be either continuous or categorical. Unlike ordinary linear regression, however, logistic regression is used for predicting dependent variables that take membership in one of a limited number of categories...
https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus
(Linear or Logistic Regression) Suppose you are given a dataset of tissue images from patients with and without a certain disease. You are supposed to train a model that predicts the probability that a patient has the disease. It is preferable to use logistic regression over linear regression.
Logistic regression is an alternative to Fisher's 1936 method, linear discriminant analysis. If the assumptions of linear discriminant analysis hold, the conditioning can be reversed to produce logistic regression. The converse is not true, however, because logistic regression does not require the multivariate normal a...
https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus
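One way to see why logistic regression is preferable for probability prediction, as the row above argues: the sigmoid squashes any linear score into (0, 1), whereas a linear model's raw output can leave [0, 1]. A minimal sketch (the scores below are arbitrary):

```python
import math

def sigmoid(z):
    # logistic function: maps any real score to a value in (0, 1)
    return 1.0 / (1.0 + math.exp(-z))

scores = [-25.0, -1.3, 0.0, 4.2, 30.0]      # hypothetical linear-model outputs
probs = [sigmoid(z) for z in scores]

assert all(0.0 < p < 1.0 for p in probs)     # valid probabilities
assert any(z < 0.0 or z > 1.0 for z in scores)  # raw scores are not
```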
Show that the solution of the problem $\argmax_{\wv:\|\wv\|=1}\text{Var}[\wv^\top \xx]$ is to set $\wv$ to be the first principal vector of $\xv_1, \ldots, \xv_N$.
For brevity, we write $W$ for $W(y_{1},\ldots,y_{n})$ and omit the argument $x$. It suffices to show that the Wronskian solves the first-order linear differential equation $W' = -p_{n-1}\,W$, because the remaining part o...
https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus
Show that the solution of the problem $\argmax_{\wv:\|\wv\|=1}\text{Var}[\wv^\top \xx]$ is to set $\wv$ to be the first principal vector of $\xv_1, \ldots, \xv_N$.
We claim that any such vector $x = f(v)$ satisfies $\frac{x^{\text{T}}Ax}{x^{\text{T}}x} \geq 2\sqrt{d-1}\left(1-\frac{1}{2r}\right)$.
https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus
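The claim in the question above (the unit vector maximizing the projected variance is the first principal vector) can be checked on 2-D toy data, where the top eigenvector of the sample covariance has a closed form; all data values here are synthetic:

```python
import math, random

# Synthetic 2-D data spread mostly along the direction (1, 0.5).
random.seed(0)
data = [(t + random.gauss(0, 0.1), 0.5 * t + random.gauss(0, 0.1))
        for t in [random.gauss(0, 1) for _ in range(200)]]

n = len(data)
mx = sum(x for x, _ in data) / n
my = sum(y for _, y in data) / n
a = sum((x - mx) ** 2 for x, _ in data) / n          # Var(x)
c = sum((y - my) ** 2 for _, y in data) / n          # Var(y)
b = sum((x - mx) * (y - my) for x, y in data) / n    # Cov(x, y)

# Top eigenvector of the 2x2 covariance [[a, b], [b, c]]: angle satisfies tan(2θ) = 2b/(a-c)
theta = 0.5 * math.atan2(2 * b, a - c)
w = (math.cos(theta), math.sin(theta))               # first principal vector

def var_along(u):
    s = [u[0] * (x - mx) + u[1] * (y - my) for x, y in data]
    return sum(v * v for v in s) / n

best = var_along(w)
for _ in range(100):                                 # no random direction beats it
    phi = random.uniform(0, 2 * math.pi)
    assert best >= var_along((math.cos(phi), math.sin(phi))) - 1e-9
```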
Consider a binary classification problem with classifier $f(\mathbf{x})$ given by $$ f(\mathbf{x})= \begin{cases}1, & g(\mathbf{x}) \geq 0 \\ -1, & g(\mathbf{x})<0\end{cases} $$ and $\mathbf{x} \in \mathbb{R}^{6}$. Consider a specific pair $(\mathbf{x}, y=1)$ and assume that $g(\mathbf{x})=8$. In particular this means ...
This addresses the question whether there is a systematic way to find a positive number $\beta(\mathbf{x},\mathbf{p})$, depending on the function f, the point $\mathbf{x}$ and the descent direction $\mathbf{p}$, so that all learning rates $\alpha \leq \beta(\mathbf{x},\mathbf{p})$ ...
https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus
Consider a binary classification problem with classifier $f(\mathbf{x})$ given by $$ f(\mathbf{x})= \begin{cases}1, & g(\mathbf{x}) \geq 0 \\ -1, & g(\mathbf{x})<0\end{cases} $$ and $\mathbf{x} \in \mathbb{R}^{6}$. Consider a specific pair $(\mathbf{x}, y=1)$ and assume that $g(\mathbf{x})=8$. In particular this means ...
Often f is a threshold function, which maps all values of $\vec{w}\cdot\vec{x}$ above a certain threshold to the first class and all other values to the second class; e.g., $f(\mathbf{x}) = \begin{cases} 1 & \text{if } \mathbf{w}^{T}\cdot\mathbf{x} > \theta \\ 0 & \text{otherwise} \end{cases}$
https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus
Which of the following statements is correct?
If statements 1 and 2 are true, it absolutely follows that statement 3 is true. However, it may still be the case that statement 1 or 2 is not true. For example: If Albert Einstein makes a statement about science, it is correct.
https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus
Which of the following statements is correct?
If the first statement is false, then the second is false, too. But if the second statement is false, then the first statement is true. It follows that if the first statement is false, then the first statement is true. The same mechanism applies to the second statement.
https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus
Let $f:\R^D\rightarrow\R$ be an $L$-hidden layer multi-layer perceptron (MLP) such that \[ f(\xv)=\sigma_{L+1}\big(\wv^\top\sigma_L(\Wm_L\sigma_{L-1}(\Wm_{L-1}\dots\sigma_1(\Wm_1\xv)))\big), \] with $\wv\in\R^{M}$, $\Wm_1\in\R^{M\times D}$ and $\...
Consider a multilayer perceptron (MLP) with one hidden layer and $m$ hidden units, with the mapping from input $x \in R^{d}$ to a scalar output described as $F_{x}(\tilde{W},\Theta) = \sum_{i=1}^{m}\theta_{i}\,\phi(x^{T}\tilde{w}^{(i)})$ ...
https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus
What is the gradient of $\mathbf{x}^{\top} \mathbf{W} \mathbf{x}$ with respect to all entries of $\mathbf{W}$ (written as a matrix)?
In order to find the correct value of $\mathbf{w}$, we can use the gradient descent method. We first of all whiten the data, and transform $\mathbf{x}$ into a new mixture $\mathbf{z}$, which has unit variance, with $z = (z_{1}, z_{2}, \ldots, z_{M})^{T}$ ...
https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus
What is the gradient of $\mathbf{x}^{\top} \mathbf{W} \mathbf{x}$ with respect to all entries of $\mathbf{W}$ (written as a matrix)?
In order for $\mathbf{W}$ to be single-valued in configuration space, $\mathbf{A}$ has to be analytic, and in order for $\mathbf{A}$ to be analytic (excluding the pathological points), the components of the vector matrix $\mathbf{F}$ have to sati...
https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus
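The gradient asked for above is $\nabla_{\mathbf{W}}\,\mathbf{x}^\top\mathbf{W}\mathbf{x} = \mathbf{x}\mathbf{x}^\top$, since the entry $W_{ij}$ appears only in the term $x_i W_{ij} x_j$. A minimal finite-difference check (the values of x and W are arbitrary):

```python
# Check numerically that d(x^T W x)/dW_ij = x_i * x_j for every entry.
x = [1.0, -2.0, 0.5]
W = [[0.3, 1.0, -0.7],
     [0.2, 0.0,  1.5],
     [-1.1, 0.4, 0.9]]

def quad(W):
    # x^T W x = sum_ij x_i W_ij x_j
    return sum(x[i] * W[i][j] * x[j] for i in range(3) for j in range(3))

eps = 1e-6
for i in range(3):
    for j in range(3):
        Wp = [row[:] for row in W]; Wp[i][j] += eps
        Wm = [row[:] for row in W]; Wm[i][j] -= eps
        num = (quad(Wp) - quad(Wm)) / (2 * eps)   # central difference in W_ij
        assert abs(num - x[i] * x[j]) < 1e-6      # matches (x x^T)_ij
```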
We consider now the ridge regression problem: $$ \min _{\mathbf{w} \in \mathbb{R}^{d}} \frac{1}{2 N} \sum_{n=1}^{N}\left[y_{n}-\mathbf{x}_{n}^{\top} \mathbf{w}\right]^{2}+\lambda\|\mathbf{w}\|_{2}^{2}, $$ where the data $\left\{\left(\mathbf{x}_{n}, y_{n}\right)\right\}_{n=1}^{N}$ are such that the feature vector $\mat...
In the simplest case, the problem of a near-singular moment matrix $(\mathbf{X}^{\mathsf{T}}\mathbf{X})$ is alleviated by adding positive elements to the diagonals, thereby decreasing its condition number. Analogous to the ordinary least squares estimator, the simple ridge estimator is then...
https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus
We consider now the ridge regression problem: $$ \min _{\mathbf{w} \in \mathbb{R}^{d}} \frac{1}{2 N} \sum_{n=1}^{N}\left[y_{n}-\mathbf{x}_{n}^{\top} \mathbf{w}\right]^{2}+\lambda\|\mathbf{w}\|_{2}^{2}, $$ where the data $\left\{\left(\mathbf{x}_{n}, y_{n}\right)\right\}_{n=1}^{N}$ are such that the feature vector $\mat...
One particularly common choice for the penalty function $R$ is the squared $\ell_{2}$ norm, i.e., $R(w) = \sum_{j=1}^{d} w_{j}^{2}$: $\frac{1}{n}\|Y - \operatorname{X} w\|_{2}^{2} + \lambda\sum_{j=1}^{d}|w_{j}|^{2} \to \min_{w\in\mathbb{R}^{d}}$ ...
https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus
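For the objective in the question above, setting the derivative to zero in the 1-D case gives a closed form: $w^\star = \sum_n x_n y_n \,/\, (\sum_n x_n^2 + 2N\lambda)$. A minimal sketch with made-up data verifying that this stationary point minimizes the loss:

```python
# Ridge objective: (1/(2N)) * sum_n (y_n - x_n w)^2 + lambda * w^2  (1-D case).
xs = [1.0, 2.0, 3.0, 4.0]      # toy inputs (hypothetical values)
ys = [2.1, 3.9, 6.2, 8.1]      # toy targets
lam = 0.1
N = len(xs)

# Closed-form minimizer: derivative (1/N) * sum x(xw - y) + 2*lam*w = 0
w_star = sum(x * y for x, y in zip(xs, ys)) / (sum(x * x for x in xs) + 2 * N * lam)

def loss(w):
    return sum((y - x * w) ** 2 for x, y in zip(xs, ys)) / (2 * N) + lam * w * w

# w_star is a minimum: nudging it in either direction increases the loss
for dw in (-0.01, 0.01):
    assert loss(w_star) <= loss(w_star + dw)
```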
Given a joint data distribution $\mathcal D$ on $\mathcal X \times \{-1,1\}$ and $n$ independent and identically distributed observations from $\mathcal D$, the goal of the classification task is to learn a classifier $f:\mathcal X \to \{-1,1\}$ with minimum true risk $\mathcal L(f) = \mathbb E_{(X,Y)\sim \mathcal D} [o...
Empirical risk minimization for a classification problem with a 0-1 loss function is known to be an NP-hard problem even for a relatively simple class of functions such as linear classifiers. Nevertheless, it can be solved efficiently when the minimal empirical risk is zero, i.e., data is linearly separable. In practic...
https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus
Given a matrix $\Xm$ of shape $D\times N$ with a singular value decomposition (SVD), $X = USV^\top$, suppose $\Xm$ has rank $K$ and $\Am=\Xm\Xm^\top$. Which one of the following statements is \textbf{false}?
In particular, the decomposition can be interpreted as the sum of outer products of each left ($\mathbf{u}_{k}$) and right ($\mathbf{v}_{k}$) singular vector, scaled by the corresponding nonzero singular value $\sigma_{k}$. This result implies that $A$...
https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus
Given a matrix $\Xm$ of shape $D\times N$ with a singular value decomposition (SVD), $X = USV^\top$, suppose $\Xm$ has rank $K$ and $\Am=\Xm\Xm^\top$. Which one of the following statements is \textbf{false}?
In linear algebra, the singular value decomposition (SVD) is a factorization of a real or complex matrix. It generalizes the eigendecomposition of a square normal matrix with an orthonormal eigenbasis to any $m \times n$ matrix. It is related to the polar decomposition. Specifically, the singular...
https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus
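A key fact behind the SVD question above: if $X = USV^\top$, then $A = XX^\top = US^2U^\top$, so the eigenvalues of $A$ are the squared singular values of $X$. A quick numerical check (the matrix size 4×6 is an arbitrary choice, and NumPy is assumed available):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(4, 6))                       # D x N toy matrix

U, s, Vt = np.linalg.svd(X, full_matrices=False)  # singular values, descending
A = X @ X.T

# Eigenvalues of A (symmetric PSD), sorted descending, equal s**2
eigvals = np.sort(np.linalg.eigvalsh(A))[::-1]
assert np.allclose(eigvals, s ** 2)
```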
Assume we have $N$ training samples $(\xx_1, y_1), \dots, (\xx_N, y_N)$ where for each sample $i \in \{1, \dots, N\}$ we have that $\xx_i \in \R^d$ and $y_i \in \R$. For $\lambda \geq 0$, we consider the following loss: $L_{\lambda}(\ww) = \frac{1}{N} \sum_{i = 1}^N (y_i - \xx_i^\top \ww)^2 + \lambda \Vert \ww \Ve...
Fix a loss function $\mathcal{L}\colon Y\times Y\to \mathbb{R}_{\geq 0}$, for example, the square loss $\mathcal{L}(y,y')=(y-y')^{2}$, where $h(x)=y'$. For a given distribution $\rho$ on X ...
https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus
In the setting of EM, where $x_{n}$ is the data and $z_{n}$ is the latent variable, what quantity is called the posterior?
The typical models to which EM is applied use $\mathbf{Z}$ as a latent variable indicating membership in one of a set of groups: The observed data points $\mathbf{X}$ may be discrete (taking values in a finite or countably infinite set) or continuous (taking values in an uncountably ...
https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus
In the setting of EM, where $x_{n}$ is the data and $z_{n}$ is the latent variable, what quantity is called the posterior?
However, there are a number of differences. Most important is what is being computed. EM computes point estimates of the posterior distribution of those random variables that can be categorized as "parameters", but only estimates of the actual posterior distributions of the latent variables (at least in "soft EM", and ofte...
https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus
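In the EM setting of the question above, the posterior is $p(z_n \mid x_n, \theta)$, the responsibility of each component for each point, computed in the E-step. A minimal E-step sketch for a 1-D two-Gaussian mixture (all parameter values are hypothetical):

```python
import math

# Hypothetical mixture parameters: means, standard deviations, mixing weights.
means, stds, pis = [0.0, 4.0], [1.0, 1.0], [0.5, 0.5]

def gauss(x, m, s):
    # Gaussian density N(x; m, s^2)
    return math.exp(-0.5 * ((x - m) / s) ** 2) / (s * math.sqrt(2 * math.pi))

def posterior(x):
    # p(z_n = k | x_n) = pi_k N(x; mu_k, s_k^2) / sum_j pi_j N(x; mu_j, s_j^2)
    joint = [p * gauss(x, m, s) for p, m, s in zip(pis, means, stds)]
    z = sum(joint)
    return [j / z for j in joint]

post = posterior(0.2)
assert abs(sum(post) - 1.0) < 1e-12   # responsibilities sum to one
assert post[0] > post[1]              # a point near 0 is mostly explained by component 0
```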
Which statement about \textit{black-box} adversarial attacks is true:
Black box attacks in adversarial machine learning assume that the adversary can only get outputs for provided inputs and has no knowledge of the model structure or parameters. In this case, the adversarial example is generated either using a model created from scratch, or without any model at all (excluding the abilit...
https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus
You are in $D$-dimensional space and use a KNN classifier with $k=1$. You are given $N$ samples and by running experiments you see that for most random inputs $\mathbf{x}$ you find a nearest sample at distance roughly $\delta$. You would like to decrease this distance to $\delta / 2$. How many samples will you likely n...
There are many results on the error rate of the k nearest neighbour classifiers. The k-nearest neighbour classifier is strongly (that is, for any joint distribution on $(X,Y)$) consistent provided $k := k_{n}$ diverges and $k_{n}/n$ converges to zero as...
https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus
You are in $D$-dimensional space and use a KNN classifier with $k=1$. You are given $N$ samples and by running experiments you see that for most random inputs $\mathbf{x}$ you find a nearest sample at distance roughly $\delta$. You would like to decrease this distance to $\delta / 2$. How many samples will you likely n...
A distance matrix is utilized in the k-NN algorithm which is one of the slowest but simplest and most used instance-based machine learning algorithms that can be used both in classification and regression tasks. It is one of the slowest machine learning algorithms since each test sample's predicted result requires a fu...
https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus
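The scaling behind the KNN question above: nearest-neighbor distance in $D$ dimensions shrinks roughly like $N^{-1/D}$, so halving it requires about $2^{D}$ times more samples. A trivial sketch of that arithmetic (the helper name and sample counts are hypothetical):

```python
# delta ~ N^(-1/D)  =>  to reach delta/2 we need N' with N'^(-1/D) = N^(-1/D)/2,
# i.e. N' = N * 2**D  (the curse of dimensionality).
def samples_needed(N, D):
    return N * 2 ** D

assert samples_needed(1000, 1) == 2000        # easy in 1-D
assert samples_needed(1000, 10) == 1024000    # explosive already at D = 10
```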
Consider the function $f(x)=-x^{2}$. Which of the following statements are true regarding subgradients of $f(x)$ at $x=0$ ?
These concepts generalize further to convex functions $f: U \to \mathbb{R}$ on a convex set in a locally convex space $V$. A functional $v^{*}$ in the dual space $V^{*}$ is called the subgradient at $x_{0}$ in $U$ ...
https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus
Consider the function $f(x)=-x^{2}$. Which of the following statements are true regarding subgradients of $f(x)$ at $x=0$ ?
Let $f:\mathbb{R}^{n}\to\mathbb{R}$ be a convex function with domain $\mathbb{R}^{n}$. A classical subgradient method iterates $x^{(k+1)} = x^{(k)} - \alpha_{k}\,g^{(k)}$ where $g^{(k)}$ denotes any s...
https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus
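For the question above, note that $f(x) = -x^2$ is concave, so no subgradient exists at 0: no $g$ can satisfy $f(y) \geq f(0) + g\,y$ for all $y$. A minimal sketch checking a few candidate slopes against probe points (both sets of values are arbitrary samples, not a proof):

```python
# A subgradient g at 0 would need -y**2 >= g*y for every y; show that several
# candidate g's all fail on at least one probe point.
def violates(g):
    return any(-(y * y) < g * y - 1e-12 for y in [-1.0, -0.5, 0.5, 1.0])

assert all(violates(g) for g in [-2.0, -1.0, 0.0, 1.0, 2.0])
```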
K-means can be equivalently written as the following Matrix Factorization $$ \begin{aligned} & \min _{\mathbf{z}, \boldsymbol{\mu}} \mathcal{L}(\mathbf{z}, \boldsymbol{\mu})=\left\|\mathbf{X}-\mathbf{M} \mathbf{Z}^{\top}\right\|_{\text {Frob }}^{2} \\ & \text { s.t. } \boldsymbol{\mu}_{k} \in \mathbb{R}^{D}, \\ & z_{n ...
Given a set of observations (x1, x2, ..., xn), where each observation is a d-dimensional real vector, k-means clustering aims to partition the n observations into k (≤ n) sets S = {S1, S2, ..., Sk} so as to minimize the within-cluster sum of squares (WCSS) (i.e. variance). Formally, the objective is to find: where μi i...
https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus
K-means can be equivalently written as the following Matrix Factorization $$ \begin{aligned} & \min _{\mathbf{z}, \boldsymbol{\mu}} \mathcal{L}(\mathbf{z}, \boldsymbol{\mu})=\left\|\mathbf{X}-\mathbf{M} \mathbf{Z}^{\top}\right\|_{\text {Frob }}^{2} \\ & \text { s.t. } \boldsymbol{\mu}_{k} \in \mathbb{R}^{D}, \\ & z_{n ...
k-means clustering is a method of vector quantization, originally from signal processing, that aims to partition n observations into k clusters in which each observation belongs to the cluster with the nearest mean (cluster centers or cluster centroid), serving as a prototype of the cluster. This results in a partition...
https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus
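The matrix-factorization view of k-means stated above can be verified directly: with one-hot assignments $\mathbf{Z}$ and centroid matrix $\mathbf{M}$, the Frobenius objective equals the within-cluster sum of squares. A tiny sketch with made-up data:

```python
# D=2 features, N=4 points (columns of X), K=2 clusters; values are made up.
X = [[1.0, 1.2, 5.0, 5.2],
     [0.9, 1.1, 4.8, 5.1]]
assign = [0, 0, 1, 1]          # cluster index of each point
D, N, K = 2, 4, 2

# Centroids: M[d][k] is the mean of cluster k in dimension d
M = [[sum(X[d][n] for n in range(N) if assign[n] == k) / assign.count(k)
      for k in range(K)] for d in range(D)]

# One-hot assignment matrix Z (N x K), and the reconstruction M Z^T
Z = [[1 if assign[n] == k else 0 for k in range(K)] for n in range(N)]
recon = [[sum(M[d][k] * Z[n][k] for k in range(K)) for n in range(N)]
         for d in range(D)]

frob = sum((X[d][n] - recon[d][n]) ** 2 for d in range(D) for n in range(N))
wcss = sum((X[d][n] - M[d][assign[n]]) ** 2 for d in range(D) for n in range(N))
assert abs(frob - wcss) < 1e-12   # ||X - M Z^T||_F^2 equals the k-means objective
```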
Recall that we say that a kernel $K: \R \times \R \rightarrow \R$ is valid if there exists $k \in \mathbb{N}$ and $\Phi: \R \rightarrow \R^k$ such that for all $(x, x') \in \R \times \R$, $K(x, x') = \Phi(x)^\top \Phi(x')$. The kernel $K(x, x') = \cos(x + x')$ is a valid kernel.
He therefore defined a continuous real symmetric kernel $K(s,t)$ to be of positive type (i.e. positive-definite) if $J(x) \geq 0$ for all real continuous functions $x$ on the interval, and he proved that (1.1) is a necessary and sufficient conditi...
https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus
Recall that we say that a kernel $K: \R \times \R \rightarrow \R$ is valid if there exists $k \in \mathbb{N}$ and $\Phi: \R \rightarrow \R^k$ such that for all $(x, x') \in \R \times \R$, $K(x, x') = \Phi(x)^\top \Phi(x')$. The kernel $K(x, x') = \cos(x + x')$ is a valid kernel.
The kernel is useful in classifying properties of prefilters and other families of sets. If $\mathcal{B} \subseteq \wp(X)$ then for any point $x$, $x \notin \ker\mathcal{B}$ if and only if $X \setminus \{x\} \in \mathcal{B}^{\uparrow X}$.
https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus
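A useful necessary condition for the validity question above: any valid kernel must satisfy $K(x,x) = \|\Phi(x)\|^2 \geq 0$ for every $x$. For $K(x,x') = \cos(x+x')$ this fails, e.g. at $x = \pi/2$, so no such feature map $\Phi$ can exist. A one-line numerical check:

```python
import math

# If K(x, x') = Phi(x)^T Phi(x'), then K(x, x) = ||Phi(x)||^2 >= 0.
# But cos(x + x) = cos(pi) = -1 at x = pi/2, so cos(x + x') is not a valid kernel.
x = math.pi / 2
assert math.cos(x + x) < 0
```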
(Adversarial perturbations for linear models) Suppose you are given a linear classifier with the logistic loss. Is it true that generating the optimal adversarial perturbations by maximizing the loss under the $\ell_{2}$-norm constraint on the perturbation is an NP-hard optimization problem?
In an effort to analyze existing adversarial attacks and defenses, researchers at the University of California, Berkeley, Nicholas Carlini and David Wagner, in 2016 proposed a faster and more robust method to generate adversarial examples. The attack proposed by Carlini and Wagner begins with trying to solve a difficult n...
https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus
Consider a binary classification problem with a linear classifier $f(\mathbf{x})$ given by $$ f(\mathbf{x})= \begin{cases}1, & \mathbf{w}^{\top} \mathbf{x} \geq 0 \\ -1, & \mathbf{w}^{\top} \mathbf{x}<0\end{cases} $$ where $\mathbf{x} \in \mathbb{R}^{3}$. Suppose that the weights of the linear model are equal to $\math...
Utilizing Bayes' theorem, it can be shown that the optimal $f_{0/1}^{*}$, i.e., the one that minimizes the expected risk associated with the zero-one loss, implements the Bayes optimal decision rule for a binary classification problem and is in the form of $f_{0/1}^{*}(\vec{x}) = 1$ if $p(1 \mid \vec{x})$ ...
https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus
Consider a binary classification problem with a linear classifier $f(\mathbf{x})$ given by $$ f(\mathbf{x})= \begin{cases}1, & \mathbf{w}^{\top} \mathbf{x} \geq 0 \\ -1, & \mathbf{w}^{\top} \mathbf{x}<0\end{cases} $$ where $\mathbf{x} \in \mathbb{R}^{3}$. Suppose that the weights of the linear model are equal to $\math...
Apply the feature to each image in the training set, then find the optimal threshold and polarity $\theta_{j}, s_{j}$ that minimizes the weighted classification error. That is, $\theta_{j}, s_{j} = \arg\min_{\theta,s} \sum_{i=1}^{N} w_{j}^{i}\,\varepsilon_{j}^{i}$ ...
https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus
(Linear Regression) You are given samples $\mathcal{S}=\left\{\left(\mathbf{x}_{n}, y_{n}\right)\right\}_{n=1}^{N}$ where $\mathbf{x}_{n} \in \mathbb{R}^{D}$ and $y_{n}$ are scalar values. You are solving linear regression using normal equations. You will always find the optimal weights with 0 training error in case of...
It is implicit in the above treatment that the data points are all given equal weight. Technically, the objective function $U = \sum_{i} w_{i}(Y_{i}-y_{i})^{2}$ being minimized in the least-squares process has unit weights, $w_i = 1$. When weights are not all the same the normal equ...
https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus
(Linear Regression) You are given samples $\mathcal{S}=\left\{\left(\mathbf{x}_{n}, y_{n}\right)\right\}_{n=1}^{N}$ where $\mathbf{x}_{n} \in \mathbb{R}^{D}$ and $y_{n}$ are scalar values. You are solving linear regression using normal equations. You will always find the optimal weights with 0 training error in case of...
An estimating equation motivated by multivariate linear regression is one where $r_{XY}(s,t) = \text{cov}(X(s),Y(t))$ and $R_{XX}: L^{2}(\mathcal{S}\times\mathcal{S}) \rightarrow L^{2}(\ldots)$ ...
https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus
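The scenario in the question above (zero training error from the normal equations) occurs when the system is underdetermined or exactly determined with full-rank data, e.g. $N \leq D$ with linearly independent rows: least squares then interpolates the targets. A quick NumPy sketch with random (hypothetical) data:

```python
import numpy as np

# More features than samples: N = 5 rows, D = 8 columns (arbitrary sizes).
rng = np.random.default_rng(1)
N, D = 5, 8
X = rng.normal(size=(N, D))     # rows are almost surely linearly independent
y = rng.normal(size=N)

# Minimum-norm least-squares solution of the normal equations
w, *_ = np.linalg.lstsq(X, y, rcond=None)

assert np.allclose(X @ w, y)    # training residual is (numerically) zero
```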
Consider the following matrix-factorization problem. For the observed ratings $r_{u m}$ for a given pair $(u, m)$ of a user $u$ and a movie $m$, one typically tries to estimate the score by $$ f_{u m}=\left\langle\mathbf{v}_{u}, \mathbf{w}_{m}\right\rangle+b_{u}+b_{m} $$ Here $\mathbf{v}_{u}$ and $\mathbf{w}_{m}$ are v...
Suppose $S_{i} \sim W_{p}(n_{i},\Sigma),\ i=1,\ldots,r+1$ are independently distributed Wishart $p \times p$ positive definite matrices. Then, defining $U_{i} = S^{-1/2}S_{i}(S^{-1/2})^{T}$ ...
https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus
Consider the following matrix-factorization problem. For the observed ratings $r_{u m}$ for a given pair $(u, m)$ of a user $u$ and a movie $m$, one typically tries to estimate the score by $$ f_{u m}=\left\langle\mathbf{v}_{u}, \mathbf{w}_{m}\right\rangle+b_{u}+b_{m} $$ Here $\mathbf{v}_{u}$ and $\mathbf{w}_{m}$ are v...
The solution to the problem is given by first computing a singular value decomposition of $\mathbf{E}_{\rm est}$: $\mathbf{E}_{\rm est} = \mathbf{U}\,\mathbf{S}\,\mathbf{V}^{T}$ where $\mathbf{U}, \mathbf{V}$ are orthogonal matrice...
https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus
Consider the following joint distribution that has the factorization $$ p\left(x_{1}, x_{2}, x_{3}, x_{4}, x_{5}\right)=p\left(x_{1}\right) p\left(x_{2} \mid x_{1}\right) p\left(x_{3} \mid x_{2}\right) p\left(x_{4} \mid x_{1}, x_{3}\right) p\left(x_{5} \mid x_{4}\right) . $$ : (4 points.) Determine whether the followin...
Suppose $S_{i}\sim W_{p}\left(n_{i},\Sigma\right)$, $i=1,\ldots,r+1$ are independently distributed Wishart $p\times p$ positive definite matrices. Then, defining $U_{i}=S^{-1/2}S_{i}\left(S^{-1/2}\right)^{T}$...
https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus
Consider the following joint distribution that has the factorization $$ p\left(x_{1}, x_{2}, x_{3}, x_{4}, x_{5}\right)=p\left(x_{1}\right) p\left(x_{2} \mid x_{1}\right) p\left(x_{3} \mid x_{2}\right) p\left(x_{4} \mid x_{1}, x_{3}\right) p\left(x_{5} \mid x_{4}\right) . $$ : (4 points.) Determine whether the followin...
$p(\mathbf{y},\theta\mid\mathbf{x}) \;=\; p(\mathbf{y}\mid\mathbf{x},\theta)\,p(\theta) \;=\; p(\mathbf{y}\mid\mathbf{x})\,p(\theta\mid\mathbf{y},\mathbf{x}) \;\simeq\; \tilde{q}(\theta) \;=\; Zq(\theta).$ The joint is equal to the product of the likelihood and the prior and, by Bayes' rule, equal to the product of the marginal...
https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus
The purpose of this first exercise part is to ensure that the predictions produced by minimizing the true $\phi$-risk are optimal. As for the $0-1$ loss, it can be shown that the true $\phi$-risk is minimized at a predictor $g^\star:\mathcal X \to \R$ satisfying for all $\xv\in\mathcal X$: For any function $g:\...
Utilizing Bayes' theorem, it can be shown that the optimal $f_{0/1}^{*}$, i.e., the one that minimizes the expected risk associated with the zero-one loss, implements the Bayes optimal decision rule for a binary classification problem and is of the form $f_{0/1}^{*}(\vec{x}) = 1$ if $p(1\mid\vec{x}...
https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus
The purpose of this first exercise part is to ensure that the predictions produced by minimizing the true $\phi$-risk are optimal. As for the $0-1$ loss, it can be shown that the true $\phi$-risk is minimized at a predictor $g^\star:\mathcal X \to \R$ satisfying for all $\xv\in\mathcal X$: For any function $g:\...
that minimizes the expected loss. This is known as a generalized Bayes rule with respect to $\pi(\theta)$.
https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus
In Text Representation learning, which of the following statements are correct?
The second is training on the representation similarity for neighboring words and representation dissimilarity for random pairs of words. A limitation of word2vec is that only the pairwise co-occurrence structure of the data is used, and not the ordering or entire set of context words. More recent transformer-based rep...
https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus
In Text Representation learning, which of the following statements are correct?
The second is training on the representation similarity for neighboring words and representation dissimilarity for random pairs of words. A limitation of word2vec is that only the pairwise co-occurrence structure of the data is used, and not the ordering or entire set of context words. More recent transformer-based rep...
https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus
In the following let $\kappa_{1}\left(\mathbf{x}, \mathbf{x}^{\prime}\right)$ and $\kappa_{2}\left(\mathbf{x}, \mathbf{x}^{\prime}\right)$ be two valid kernels. Show that the following is also valid kernel: $\kappa\left(\mathbf{x}, \mathbf{x}^{\prime}\right)=\kappa_{1}\left(\mathbf{x}, \mathbf{x}^{\prime}\right) \kappa...
Let $\kappa^{1}$ be an s-finite kernel from $S$ to $T$ and $\kappa^{2}$ an s-finite kernel from $S\times T$ to $U$. Then the composition $\kappa^{1}\cdot \kappa^{2}$ of the two kernels is d...
https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus
In the following let $\kappa_{1}\left(\mathbf{x}, \mathbf{x}^{\prime}\right)$ and $\kappa_{2}\left(\mathbf{x}, \mathbf{x}^{\prime}\right)$ be two valid kernels. Show that the following is also valid kernel: $\kappa\left(\mathbf{x}, \mathbf{x}^{\prime}\right)=\kappa_{1}\left(\mathbf{x}, \mathbf{x}^{\prime}\right) \kappa...
For $N$ even, we define the Dirichlet kernel as $D(x,N)=\frac{1}{N}+\frac{1}{N}\cos{\tfrac{1}{2}}Nx+\frac{2}{N}\sum_{k=1}^{(N-1)/2}\cos(kx)=\frac{\sin{\tfrac{1}{2}}Nx}{N\tan{\tfrac{1}{2}}x}.$
https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus
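A standard route for showing that $\kappa=\kappa_{1}\kappa_{2}$ is a valid kernel is the Schur product theorem: the elementwise (Hadamard) product of two positive semidefinite Gram matrices is again positive semidefinite. A minimal numerical sketch of this fact; the two Gaussian kernels and the sample points are illustrative assumptions, not part of the original question:

```python
import math

def cholesky(A):
    """Attempt a Cholesky factorization; succeeds iff A is positive definite."""
    n = len(A)
    L = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1):
            s = sum(L[i][k] * L[j][k] for k in range(j))
            if i == j:
                d = A[i][i] - s
                if d <= 0:
                    return None  # not positive definite
                L[i][j] = math.sqrt(d)
            else:
                L[i][j] = (A[i][j] - s) / L[j][j]
    return L

def gram(kernel, xs):
    return [[kernel(a, b) for b in xs] for a in xs]

def hadamard(A, B):
    """Elementwise product of two matrices (Gram matrix of the product kernel)."""
    return [[a * b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

def k1(x, y):  # Gaussian kernel, known to be valid
    return math.exp(-(x - y) ** 2)

def k2(x, y):  # another Gaussian kernel with a different bandwidth
    return math.exp(-((x - y) ** 2) / 4)

xs = [0.0, 1.0, 2.5]                      # hypothetical sample points
G = hadamard(gram(k1, xs), gram(k2, xs))  # Gram matrix of k1 * k2
```

The Cholesky attempt on `G` succeeds, consistent with the product kernel being positive (semi)definite on these points.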
(Robustness) The $l_{1}$ loss is less sensitive to outliers than $l_{2}$.
Also whereas the distribution of the trimmed mean appears to be close to normal, the distribution of the raw mean is quite skewed to the left. So, in this sample of 66 observations, only 2 outliers cause the central limit theorem to be inapplicable. Robust statistical methods, of which the trimmed mean is a simple exam...
https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus
(Robustness) The $l_{1}$ loss is less sensitive to outliers than $l_{2}$.
Another approach is using negentropy instead of kurtosis. Using negentropy is a more robust method than kurtosis, as kurtosis is very sensitive to outliers.
https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus
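The robustness claim can be seen from the minimizers of the two losses: the mean minimizes squared ($l_2$) error while the median minimizes absolute ($l_1$) error, and only the former chases an outlier. A small sketch on made-up data:

```python
def mean(xs):
    """Minimizer of the l2 loss sum((x - c)^2)."""
    return sum(xs) / len(xs)

def median(xs):
    """Minimizer of the l1 loss sum(|x - c|)."""
    s = sorted(xs)
    n = len(s)
    return s[n // 2] if n % 2 else (s[n // 2 - 1] + s[n // 2]) / 2

clean = [1.0, 2.0, 3.0, 4.0, 5.0]
dirty = clean + [1000.0]  # one gross outlier

mean_shift = abs(mean(dirty) - mean(clean))      # l2 minimizer moves a lot
median_shift = abs(median(dirty) - median(clean))  # l1 minimizer barely moves
```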
Consider optimizing a matrix factorization $\boldsymbol{W} \boldsymbol{Z}^{\top}$ in the matrix completion setting, for $\boldsymbol{W} \in \mathbb{R}^{D \times K}$ and $\boldsymbol{Z} \in \mathbb{R}^{N \times K}$. We write $\Omega$ for the set of observed matrix entries. Which of the following statements are correc...
In the problem of matrix completion, the matrix $X_{i}^{t}$ takes the form $X_{i}^{t}=e_{t}\otimes e_{i}',$ where $(e_{t})_{t}$ and $(e_{i}')_{i}$ are the canonical basis in $\mathbb{R}^{T}$ and...
https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus
Consider optimizing a matrix factorization $\boldsymbol{W} \boldsymbol{Z}^{\top}$ in the matrix completion setting, for $\boldsymbol{W} \in \mathbb{R}^{D \times K}$ and $\boldsymbol{Z} \in \mathbb{R}^{N \times K}$. We write $\Omega$ for the set of observed matrix entries. Which of the following statements are correc...
Keshavan, Montanari and Oh consider a variant of matrix completion where the rank of the $m$ by $n$ matrix $M$, which is to be recovered, is known to be $r$. They assume Bernoulli sampling of entries, constant aspect ratio $\frac{m}{n}$...
https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus
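In the matrix-completion setting above, an SGD step on one observed entry $(d, n)$ only touches row $d$ of $\boldsymbol{W}$ and row $n$ of $\boldsymbol{Z}$. A toy sketch of such updates; the dimensions, learning rate, and observed set are hypothetical:

```python
import random

def dot_k(W, Z, d, n):
    """Predicted entry (W Z^T)[d, n]."""
    return sum(wk * zk for wk, zk in zip(W[d], Z[n]))

def sgd_step(W, Z, entry, lr=0.01):
    """One SGD update on a single observed entry (d, n, value) of W Z^T."""
    d, n, r = entry
    err = dot_k(W, Z, d, n) - r
    for k in range(len(W[0])):
        wd, zn = W[d][k], Z[n][k]
        W[d][k] -= lr * err * zn   # only row d of W changes
        Z[n][k] -= lr * err * wd   # only row n of Z changes

def loss(W, Z, omega):
    """Squared error over the observed entries only."""
    return sum((dot_k(W, Z, d, n) - r) ** 2 for d, n, r in omega)

random.seed(0)
D, N, K = 4, 5, 2
W = [[random.uniform(-0.1, 0.1) for _ in range(K)] for _ in range(D)]
Z = [[random.uniform(-0.1, 0.1) for _ in range(K)] for _ in range(N)]
omega = [(0, 0, 1.0), (1, 2, 2.0), (3, 4, 0.5)]  # toy observed entries

before = loss(W, Z, omega)
for _ in range(2000):
    sgd_step(W, Z, random.choice(omega))
after = loss(W, Z, omega)
```

The observed-entry loss decreases over the run; the per-step cost is $O(K)$, independent of $D$ and $N$.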
Consider a binary classification problem with classifier $f(\mathbf{x})$ given by $$ f(\mathbf{x})= \begin{cases}1, & g(\mathbf{x}) \geq 0 \\ -1, & g(\mathbf{x})<0\end{cases} $$ and $\mathbf{x} \in \mathbb{R}^{6}$. Consider a specific pair $(\mathbf{x}, y=1)$ and assume that $g(\mathbf{x})=8$. In particular this means ...
This addresses the question whether there is a systematic way to find a positive number $\beta(\mathbf{x},\mathbf{p})$ - depending on the function f, the point $\mathbf{x}$ and the descent direction $\mathbf{p}$ - so that all learning rates $\alpha \leq \beta(\mathbf{x},...
https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus
Consider a binary classification problem with classifier $f(\mathbf{x})$ given by $$ f(\mathbf{x})= \begin{cases}1, & g(\mathbf{x}) \geq 0 \\ -1, & g(\mathbf{x})<0\end{cases} $$ and $\mathbf{x} \in \mathbb{R}^{6}$. Consider a specific pair $(\mathbf{x}, y=1)$ and assume that $g(\mathbf{x})=8$. In particular this means ...
Often f is a threshold function, which maps all values of $\vec{w}\cdot\vec{x}$ above a certain threshold to the first class and all other values to the second class; e.g., $f(\mathbf{x})={\begin{cases}1&{\text{if }}\mathbf{w}^{T}\cdot\mathbf{x}>\theta\\0&{\text{otherwise}}\end{cases}}$ ...
https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus
A neural network has been trained for multi-class classification using cross-entropy but has not necessarily achieved a global or local minimum on the training set. The output of the neural network is $\mathbf{z}=[z_1,\ldots,z_d]^\top$ obtained from the penultimate values $\mathbf{x}=[x_1,\ldots,x_d]^\top$ via softmax $...
Multiclass cross-entropy compares the observed multiclass output with the predicted probabilities. For a random sample of multiclass outcomes of size $n$, the average multiclass cross-entropy $\overline{C}$ for hyperbolastic H1 or H2 can be estimated by $\overline{C} = -\frac{1}{n}\sum_{i=1}^{n}\sum_{j=...
https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus
A neural network has been trained for multi-class classification using cross-entropy but has not necessarily achieved a global or local minimum on the training set. The output of the neural network is $\mathbf{z}=[z_1,\ldots,z_d]^\top$ obtained from the penultimate values $\mathbf{x}=[x_1,\ldots,x_d]^\top$ via softmax $...
The standard softmax function is often used in the final layer of a neural network-based classifier. Such networks are commonly trained under a log loss (or cross-entropy) regime, giving a non-linear variant of multinomial logistic regression. Since the function maps a vector and a specific index i {\displaystyle i} to...
https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus
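The softmax-plus-cross-entropy pipeline described here is usually implemented with a max-shift before exponentiating, since softmax is invariant to adding a constant to all inputs and the shift avoids overflow. A minimal sketch with made-up logits:

```python
import math

def softmax(z):
    """Numerically stable softmax: shift by the max before exponentiating."""
    m = max(z)
    exps = [math.exp(v - m) for v in z]
    s = sum(exps)
    return [e / s for e in exps]

def cross_entropy(probs, label):
    """Negative log-likelihood of the true class index."""
    return -math.log(probs[label])

p = softmax([2.0, 1.0, 0.1])  # made-up penultimate values
```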
Assume that we have a convolutional neural net with $L$ layers, $K$ nodes per layer, and where each node is connected to $k$ nodes in a previous layer. We ignore in the sequel the question of how we deal with the points at the boundary and assume that $k \ll K$ (much, much smaller). How does the complexity of the b...
When dealing with high-dimensional inputs such as images, it is impractical to connect neurons to all neurons in the previous volume because such a network architecture does not take the spatial structure of the data into account. Convolutional networks exploit spatially local correlation by enforcing a sparse local co...
https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus
Assume that we have a convolutional neural net with $L$ layers, $K$ nodes per layer, and where each node is connected to $k$ nodes in a previous layer. We ignore in the sequel the question of how we deal with the points at the boundary and assume that $k \ll K$ (much, much smaller). How does the complexity of the b...
Recall the forward pass of the convolutional neural network model, used in both training and inference steps. Let $\mathbf{x}\in\mathbb{R}^{Mm_{1}}$ be its input and $\mathbf{W}_{k}\in\mathbb{R}^{N\times m_{1}}$ the filters at layer $k$, which ar...
https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus
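Under the stated assumptions, the per-example cost is driven by the number of multiply-accumulates per layer: roughly $K \cdot K$ for full connectivity versus $K \cdot k$ for the sparse local connectivity, so the total scales as $L K k$ instead of $L K^2$. A small arithmetic sketch; the concrete layer sizes are made up:

```python
def dense_macs(L, K):
    """Fully connected: every node reads all K nodes of the previous layer."""
    return L * K * K

def local_macs(L, K, k):
    """Locally connected (conv-style): every node reads only k << K nodes."""
    return L * K * k

# hypothetical network: 10 layers, 1000 nodes per layer, receptive field 5
speedup = dense_macs(10, 1000) / local_macs(10, 1000, 5)
```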
Matrix Factorizations: If we compare SGD vs ALS for optimizing a matrix factorization of a $D \times N$ matrix, for large $D, N$
Special algorithms have been developed for factorizing large sparse matrices. These algorithms attempt to find sparse factors L and U. Ideally, the cost of computation is determined by the number of nonzero entries, rather than by the size of the matrix. These algorithms use the freedom to exchange rows and columns to ...
https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus
Matrix Factorizations: If we compare SGD vs ALS for optimizing a matrix factorization of a $D \times N$ matrix, for large $D, N$
Reference paper "The Quadratic Sieve Factoring Algorithm" by Eric Landquist
https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus
Consider the logistic regression loss $L: \R^d \to \R$ for a binary classification task with data $\left( \xv_i, y_i \right) \in \R^d \times \{0, 1\}$ for $i \in \left\{ 1, \ldots, N \right\}$: \begin{equation*} L(\wv) = \frac{1}{N} \sum_{i = 1}^N \bigg( \log\left(1 + e^{\xv_i^\top \wv} \right) - y_i \xv_i^\top \wv \bigg). \end{equation*} ...
In machine learning applications where logistic regression is used for binary classification, the MLE minimises the cross-entropy loss function. Logistic regression is an important machine learning algorithm. The goal is to model the probability of a random variable $Y$ being 0 or 1 given experimental d...
https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus
Consider the logistic regression loss $L: \R^d \to \R$ for a binary classification task with data $\left( \xv_i, y_i \right) \in \R^d \times \{0, 1\}$ for $i \in \left\{ 1, \ldots, N \right\}$: \begin{equation*} L(\wv) = \frac{1}{N} \sum_{i = 1}^N \bigg( \log\left(1 + e^{\xv_i^\top \wv} \right) - y_i \xv_i^\top \wv \bigg). \end{equation*} ...
For proper loss functions, the loss margin can be defined as $\mu_{\phi} = -\frac{\phi'(0)}{\phi''(0)}$ and shown to be directly related to the regularization properties of the classifier. Specifically a loss function of larger margin increases regularization and produces ...
https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus
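For this loss, the gradient works out to $\nabla L(\wv) = \frac{1}{N}\sum_i (\sigma(\xv_i^\top\wv) - y_i)\,\xv_i$, with $\sigma$ the sigmoid. A sketch that checks this formula against central finite differences; the data and weights are made up:

```python
import math

def sigmoid(t):
    return 1.0 / (1.0 + math.exp(-t))

def dot(a, b):
    return sum(u * v for u, v in zip(a, b))

def loss(w, X, y):
    """(1/N) sum_i log(1 + exp(x_i.w)) - y_i x_i.w"""
    N = len(X)
    return sum(math.log(1 + math.exp(dot(x, w))) - yi * dot(x, w)
               for x, yi in zip(X, y)) / N

def grad(w, X, y):
    """(1/N) sum_i (sigmoid(x_i.w) - y_i) x_i"""
    N = len(X)
    g = [0.0] * len(w)
    for x, yi in zip(X, y):
        c = sigmoid(dot(x, w)) - yi
        for j in range(len(w)):
            g[j] += c * x[j] / N
    return g

X = [[1.0, 2.0], [-1.0, 0.5], [0.3, -0.7]]  # toy data
y = [1, 0, 1]
w = [0.2, -0.1]
```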
When constructing a word embedding, what is true regarding negative samples?
In natural language processing (NLP), a word embedding is a representation of a word. The embedding is used in text analysis. Typically, the representation is a real-valued vector that encodes the meaning of the word in such a way that words that are closer in the vector space are expected to be similar in meaning. Wor...
https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus
When constructing a word embedding, what is true regarding negative samples?
Word embeddings may contain the biases and stereotypes contained in the trained dataset, as Bolukbasi et al. point out in the 2016 paper “Man is to Computer Programmer as Woman is to Homemaker? Debiasing Word Embeddings”: a publicly available (and popular) word2vec embedding trained on Google News texts (a commonl...
https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus
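Negative sampling trains embeddings to score observed (center, context) pairs high and a few randomly drawn "negative" words low, via $\sigma(u\cdot v)$ for the positive pair and $\sigma(-u\cdot v)$ for each negative. A sketch of the per-pair loss with made-up 2-d vectors:

```python
import math

def sigmoid(t):
    return 1.0 / (1.0 + math.exp(-t))

def dot(a, b):
    return sum(u * v for u, v in zip(a, b))

def sgns_loss(center, context, negatives):
    """Skip-gram negative-sampling loss: pull the true (center, context)
    pair together, push the randomly drawn negative words away."""
    loss = -math.log(sigmoid(dot(context, center)))
    for neg in negatives:
        loss -= math.log(sigmoid(-dot(neg, center)))
    return loss

center = [0.5, 0.1]
# aligned context, anti-aligned negatives -> low loss
good = sgns_loss(center, [0.6, 0.2], [[-0.4, 0.0], [0.0, -0.3]])
# anti-aligned context, aligned negatives -> high loss
bad = sgns_loss(center, [-0.6, -0.2], [[0.4, 0.0], [0.0, 0.3]])
```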
If the first column of matrix L is (0,1,1,1) and all other entries are 0 then the authority values
7. Suppose a matrix has 0-($\pm$1) entries and in each column, the entries are non-decreasing from top to bottom (so all −1s are on top, then 0s, then 1s are on the bottom).
https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus
If the first column of matrix L is (0,1,1,1) and all other entries are 0 then the authority values
If every eigenvalue of A is less than 1 in absolute value, $\det(I+A)=\sum_{k=0}^{\infty}\frac{1}{k!}\left(-\sum_{j=1}^{\infty}\frac{(-1)^{j}}{j}\operatorname{tr}\left(A^{j}\right)\right)^{k}\,,$ where I is the ident...
https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus
If the top 100 documents contain 50 relevant documents
These may consist of an entire document or a document fragment.
https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus
If the top 100 documents contain 50 relevant documents
50, Bs. 100 and Bs. 500.
https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus
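For the scenario in the question, precision at rank 100 is simply the share of relevant documents among the 100 retrieved: 50/100 = 0.5. A sketch with a hypothetical binary relevance ranking:

```python
def precision_at_k(relevance, k):
    """Fraction of the top-k retrieved documents that are relevant.
    `relevance` is a ranked list of 0/1 relevance judgements."""
    top = relevance[:k]
    return sum(top) / len(top)

# hypothetical ranking: the 50 relevant docs happen to sit at the top
ranking = [1] * 50 + [0] * 50
```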
Following the notation used in class, let us denote the set of terms by $T=\{k_i|i=1,...,m\}$, the set of documents by $D=\{d_j |j=1,...,n\}$, and let $d_j=(w_{1j},w_{2j},...,w_{mj})$. We are also given a query $q=(w_{1q},w_{2q},...,w_{mq})$. In the lecture we studied that $sim(q,d_j) = \sum^m_{i=1} \frac{w_{ij}}{|...
Similarities are computed as probabilities that a document is relevant for a given query. Probabilistic theorems like the Bayes' theorem are often used in these models. Binary Independence Model Probabilistic relevance model on which is based the okapi (BM25) relevance function Uncertain inference Language models Diver...
https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus
Following the notation used in class, let us denote the set of terms by $T=\{k_i|i=1,...,m\}$, the set of documents by $D=\{d_j |j=1,...,n\}$, and let $d_j=(w_{1j},w_{2j},...,w_{mj})$. We are also given a query $q=(w_{1q},w_{2q},...,w_{mq})$. In the lecture we studied that $sim(q,d_j) = \sum^m_{i=1} \frac{w_{ij}}{|...
Zhao and Callan (2010) were perhaps the first to quantitatively study the vocabulary mismatch problem in a retrieval setting. Their results show that an average query term fails to appear in 30-40% of the documents that are relevant to the user query. They also showed that this probability of mismatch is a central prob...
https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus
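The similarity in the question above is cosine similarity: the inner product of the query and document weight vectors divided by the product of their norms. A minimal sketch with made-up weight vectors:

```python
import math

def cosine_sim(q, d):
    """Cosine similarity between query and document weight vectors."""
    num = sum(wq * wd for wq, wd in zip(q, d))
    den = math.sqrt(sum(w * w for w in q)) * math.sqrt(sum(w * w for w in d))
    return num / den

# a document with the same term profile as the query scores 1.0
s = cosine_sim([1.0, 0.0, 1.0], [2.0, 0.0, 2.0])
```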
What is WRONG regarding the Transformer model?
With no change in flux there is no back E.M.F. and hence no reflected impedance. The transformer and valve combination then generate large 3rd order harmonics.
https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus
What is WRONG regarding the Transformer model?
The ideal transformer model neglects many basic linear aspects of real transformers, including unavoidable losses and inefficiencies. (a) Core losses, collectively called magnetizing current losses, consisting of Hysteresis losses due to nonlinear magnetic effects in the transformer core, and Eddy current losses due to...
https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus
Implement the F1-score to evaluate your classifier.
You ran a classification on the same dataset which led to the following values for the confusion matrix categories: TP = 90, FP = 4, TN = 1, FN = 5. In this example, the classifier has performed well in classifying positive instances, but was not able to correctly recognize negative data elements. Again, the resulting F...
https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus
Implement the F1-score to evaluate your classifier.
In the example above, the MCC score would be undefined (since TN and FN would be 0, therefore the denominator of Equation 3 would be 0). By checking this value, instead of accuracy and F1 score, you would then be able to notice that your classifier is going in the wrong direction, and you would become aware that there ...
https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus
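Using the counts quoted in the passage (TP = 90, FP = 4, FN = 5; TN does not enter F1), the score follows from $F_1 = 2\,TP/(2\,TP + FP + FN)$, the harmonic mean of precision and recall. A minimal sketch:

```python
def f1_score(tp, fp, fn):
    """F1 = harmonic mean of precision and recall
          = 2*TP / (2*TP + FP + FN)."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

f1 = f1_score(90, 4, 5)  # the confusion-matrix counts quoted above
```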
Which of the following statements about index merging (when constructing inverted files) is correct?
The inverted index is filled via a merge or rebuild. A rebuild is similar to a merge but first deletes the contents of the inverted index. The architecture may be designed to support incremental indexing, where a merge identifies the document or documents to be added or updated and then parses each document into words....
https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus
Which of the following statements about index merging (when constructing inverted files) is correct?
The inverted index is filled via a merge or rebuild. A rebuild is similar to a merge but first deletes the contents of the inverted index. The architecture may be designed to support incremental indexing, where a merge identifies the document or documents to be added or updated and then parses each document into words....
https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus
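At the heart of index merging is a single linear pass over two sorted posting lists, the same routine used when combining partial inverted files. A sketch with made-up document IDs:

```python
def merge_postings(a, b):
    """Union-merge two sorted posting lists of document IDs in one pass."""
    out, i, j = [], 0, 0
    while i < len(a) and j < len(b):
        if a[i] == b[j]:
            out.append(a[i]); i += 1; j += 1  # same doc in both lists
        elif a[i] < b[j]:
            out.append(a[i]); i += 1
        else:
            out.append(b[j]); j += 1
    out.extend(a[i:])  # one list is exhausted; append the rest
    out.extend(b[j:])
    return out

merged = merge_postings([1, 3, 7, 9], [2, 3, 8])
```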
Which of the following statements on Latent Semantic Indexing (LSI) and Word Embeddings (WE) is false?
Latent semantic indexing (LSI) is an indexing and retrieval method that uses a mathematical technique called singular value decomposition (SVD) to identify patterns in the relationships between the terms and concepts contained in an unstructured collection of text. LSI is based on the principle that words that are used...
https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus
Which of the following statements on Latent Semantic Indexing (LSI) and Word Embeddings (WE) is false?
During the experiment, semantic associations remain fixed showing the assumption that semantic associations are not significantly impacted by the episodic experience of one experiment. The two measures used to measure semantic relatedness in this model are latent semantic analysis (LSA) and word association spaces (WAS...
https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus
The number of non-zero entries in a column of a term-document matrix indicates:
When creating a data-set of terms that appear in a corpus of documents, the document-term matrix contains rows corresponding to the documents and columns corresponding to the terms. Each ij cell, then, is the number of times word j occurs in document i. As such, each row is a vector of term counts that represents the c...
https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus
The number of non-zero entries in a column of a term-document matrix indicates:
In text databases, for a document collection defined by a document-by-term matrix D (of size m×n, where m is the number of documents and n is the number of terms), the number of clusters can roughly be estimated by the formula $\tfrac{mn}{t}$ where t is the number of non-zero entries in D. Note that ...
https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus
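For a term-document matrix (terms as rows, documents as columns), the non-zero count of a column is the number of distinct terms occurring in that document, while the non-zero count of a row is the term's document frequency. A small sketch with a made-up matrix:

```python
# term-document matrix: rows = terms, columns = documents (made-up counts)
td = [
    [2, 0, 1],  # term 0
    [0, 0, 3],  # term 1
    [1, 1, 0],  # term 2
]

def distinct_terms_in_doc(td, j):
    """Non-zero entries in column j = distinct terms of document j."""
    return sum(1 for row in td if row[j] != 0)

def document_frequency(td, i):
    """Non-zero entries in row i = number of documents containing term i."""
    return sum(1 for v in td[i] if v != 0)
```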
Which of the following statements on Latent Semantic Indexing (LSI) and Word Embeddings (WE) is incorrect
Latent semantic indexing (LSI) is an indexing and retrieval method that uses a mathematical technique called singular value decomposition (SVD) to identify patterns in the relationships between the terms and concepts contained in an unstructured collection of text. LSI is based on the principle that words that are used...
https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus
Which of the following statements on Latent Semantic Indexing (LSI) and Word Embeddings (WE) is incorrect
Latent semantic analysis (LSA, performing singular-value decomposition on the document-term matrix) can improve search results by disambiguating polysemous words and searching for synonyms of the query. However, searching in the high-dimensional continuous space is much slower than searching the standard trie data stru...
https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus
Suppose that in a given FP Tree, an item in a leaf node N exists in every path. Which of the following is true?
If every node of a tree has finitely many successors, then it is called a finitely branching tree, otherwise an infinitely branching tree. A path π is a subset of T such that ε ∈ π and for every t ∈ T, either t is a leaf or there exists a unique $c \in \mathbb{N}$ such that t.c ∈ π. A path may be a finite or infinite ...
https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus
Suppose that in a given FP Tree, an item in a leaf node N exists in every path. Which of the following is true?
The root is (s,0) and parent of a node (q,j) is (predecessor(q,j), j-1). This tree is infinite, finitely branching, and fully connected. Therefore, by Kőnig's lemma, there exists an infinite path (q0,0),(q1,1),(q2,2),... in the tree. Therefore, following is an accepting run of A run(q0,0)⋅run(q1,1)⋅run(q2,2)⋅...Hence, ...
https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus
Which of the following statements regarding topic models is false?
In statistics and natural language processing, a topic model is a type of statistical model for discovering the abstract "topics" that occur in a collection of documents. Topic modeling is a frequently used text-mining tool for discovery of hidden semantic structures in a text body. Intuitively, given that a document i...
https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus
Which of the following statements regarding topic models is false?
In particular, a larger number of academics are concerned about how some topic modeling techniques can hardly be validated. Random Samples. On the one hand, it is extremely hard to know how many units of one type of texts (for example blogposts) are in a certain time in the Internet.
https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus
Modularity of a social network always:
Social Networks. 35 (4): 626–638. doi:10.1016/j.socnet.2013.08.004.
https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus
Modularity of a social network always:
"Dynamic Social Networks Promote Cooperation in Experiments with Humans". Proceedings of the National Academy of Sciences. 108 (48): 19193–8.
https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus
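Modularity compares the fraction of edges that fall inside communities with the fraction expected under a degree-preserving random rewiring, $Q = \sum_c \left[\frac{L_c}{m} - \left(\frac{d_c}{2m}\right)^2\right]$, where $m$ is the edge count, $L_c$ the internal edges of community $c$, and $d_c$ its degree sum. A pure-Python sketch of Newman's formula on a made-up graph of two triangles joined by a bridge:

```python
def modularity(edges, community):
    """Newman modularity Q = sum_c [ L_c/m - (d_c / 2m)^2 ] for an
    undirected graph given as an edge list and a node -> community map."""
    m = len(edges)
    internal = {}    # edges with both endpoints in the same community
    degree_sum = {}  # sum of node degrees per community
    for u, v in edges:
        cu, cv = community[u], community[v]
        degree_sum[cu] = degree_sum.get(cu, 0) + 1
        degree_sum[cv] = degree_sum.get(cv, 0) + 1
        if cu == cv:
            internal[cu] = internal.get(cu, 0) + 1
    return sum(internal.get(c, 0) / m - (degree_sum[c] / (2 * m)) ** 2
               for c in degree_sum)

# two triangles joined by one bridge edge
edges = [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]
community = {0: "a", 1: "a", 2: "a", 3: "b", 4: "b", 5: "b"}
q = modularity(edges, community)
```

Putting every node in a single community gives Q = 0, one of the standard sanity checks for any modularity implementation.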